Feed aggregator

Swabbing tests have proved to be an effective method of diagnosis for a wide range of diseases. Potential occupational health hazards and reliance on healthcare workers during traditional swabbing procedures can be mitigated by self-administered swabs. Hence, we report possible methods to apply closed kinematic chain theory to develop a self-administered viral swab to collect respiratory specimens. The proposed sensorized swab models utilizing hollow polypropylene tubes possess mechanical compliance, simple construction, and inexpensive components. In detail, the adaptation of the slider-crank mechanism combined with concepts of a deployable telescopic tubular mechanical system is explored through four different oral swab designs. A closed kinematic chain on suitable material to create a developable surface allows the translation of simple two-dimensional motion into more complex multi-dimensional motion. These foldable telescopic straws with multiple kirigami cuts minimize the components involved in the system, as the characteristics are built directly into the material. Further, they offer the possibility of including soft stretchable sensors for real-time performance monitoring. A variety of features were constructed and tested using the concepts above, including 1) a tongue depressor and cough/gag reflex deflector; 2) changing the position and orientation of the oral swab while sample collection is in progress; 3) a protective cover for the swabbing bud; and 4) a combination of the features mentioned above.
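For readers unfamiliar with the slider-crank mechanism the designs adapt, the following minimal Python sketch (not from the paper; the crank radius and rod length are invented illustrative values) converts crank rotation into slider displacement, the simple planar motion on which the swab designs build:

```python
import numpy as np

def slider_position(theta, r, l):
    """Slider displacement for crank angle theta (rad), crank radius r,
    and connecting-rod length l (same length units)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

# Illustrative values only: 10 mm crank, 40 mm connecting rod
theta = np.linspace(0.0, 2.0 * np.pi, 360)
x = slider_position(theta, r=10.0, l=40.0)
print(f"slider stroke over one revolution: {x.max() - x.min():.1f} mm")  # 2 * r = 20 mm
```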

The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs. 

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.” 

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to those approaches has been that, even to achieve coarse spatial resolution, many sensors are needed in a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device. 

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Thus, even though the robot is prevented from capturing a high-resolution image of the user or their surrounding environment, with the right kind of training datasets it can continue to monitor some kinds of non-tactile activities.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
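As a rough sketch of that recipe (freeze a pretrained image backbone and retrain only the final layer), assuming PyTorch/torchvision, here is what it might look like; the six-class output matches the gestures described above, but the backbone choice, optimizer, and data loader are placeholders rather than the authors' actual setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer with a head for the six shadow-gesture classes;
# this is the part retrained on the (much smaller) shadow dataset.
model.fc = nn.Linear(model.fc.in_features, 6)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over a DataLoader yielding (shadow image batch, gesture label batch)."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```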

Demand for battery-making metals is projected to soar as more of the world’s cars, buses, and ships run on electricity. The coming mining boom is raising concerns of environmental damage and labor abuses—and it’s driving a search for more sustainable ways of making batteries and cutting-edge electronics.

Artificial intelligence could help improve the way battery metals are mined, or replace them altogether. KoBold Metals is developing an AI agent to find the most desirable ore deposits in the least problematic locations. IBM Research, meanwhile, is harnessing AI techniques to identify alternative materials that already exist and also develop new chemistries.

KoBold, a mining exploration startup, says its technology could reduce the need for costly and invasive exploration missions, which often involve scouring the Earth many times over to find rare, high-quality reserves. 

“All the stuff poking out of the ground has already been found,” said Kurt House, co-founder and CEO of the San Francisco Bay area company. “At the same time, we’ve realized we need to massively change the energy system, which requires all these new minerals.”

KoBold is partnering with Stanford University’s Center for Earth Resource Forecasting to develop an AI agent that can make decisions about how and where explorers should focus their work. The startup is mainly looking for copper, cobalt, nickel, and lithium—metals key to making electric vehicle batteries as well as solar panels, smartphones, and many other devices.

Jef Caers, a professor of geological sciences at Stanford, said the idea is to accelerate the decision-making process and enable explorers to evaluate multiple sites at once. He likened the AI agent to a self-driving car: The vehicle not only gathers and processes data about its surrounding environment but also acts upon that information to, say, navigate traffic or change speeds.

“We can’t wait another 10 or 20 years to make more discoveries,” Caers said. “We need to make them in the next few years if we want to have an impact on [climate change] and go away from fossil fuels.”

Light-duty cars alone will have a significant need for metals. The global fleet of battery-powered cars could expand from 7.5 million in 2019 to potentially 2 billion cars by 2050 as countries work to reduce greenhouse gas emissions, according to a December paper in the journal Nature. Powering those vehicles would require 12 terawatt-hours of annual battery capacity—roughly 10 times the current U.S. electricity generating capacity—and mean a “drastic expansion” of metal supply chains, the paper’s authors said. 

Photo: KoBold Metals Patrick Redmond of KoBold Metals evaluates a prospective cobalt-copper mining site in Zambia.

Almost all lithium-ion batteries use cobalt, a material that is primarily supplied by the Democratic Republic of Congo, where young children and adults often work in dangerous conditions. Copper, another important EV material, requires huge volumes of water to mine, yet much of the global supply comes from water-scarce regions near Chile’s Atacama desert. 

For mining companies, the challenge is to expand operations without wreaking havoc in the name of sustainable transportation. 

KoBold’s AI-driven approach begins with its data platform, which stores all available forms of information about a particular area, including soil samples, satellite-based hyperspectral imaging, and century-old handwritten drilling reports. The company then applies machine learning methods to make predictions about the location of compositional anomalies—that is, unusually high concentrations of ore bodies in the Earth’s subsurface.

Working with Stanford, KoBold is refining sequential decision-making algorithms to determine how explorers should next proceed to gather data. Perhaps they should fly a plane over a site or collect drilling samples; maybe companies should walk away from what is likely to be a dud. Such steps are currently risky and expensive, and companies move slowly to avoid wasting resources. 

The AI agent could make such decisions roughly 20 times faster than humans can, while also reducing the rate of false positives in mining exploration, Caers said. “This is completely new within the Earth sciences,” he added.
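As a purely illustrative toy (not KoBold's or Stanford's actual algorithm), one such sequential decision step can be framed as picking whichever next action has the highest expected payoff net of cost under the current belief that a deposit is present; all numbers below are invented:

```python
# Toy expected-value comparison for the next exploration step.
# All probabilities, payoffs, and costs are invented for illustration.

def expected_net_value(p_deposit: float, payoff_if_deposit: float, cost: float) -> float:
    return p_deposit * payoff_if_deposit - cost

def choose_next_action(p_deposit: float) -> str:
    """Pick the action with the highest expected net value under the current belief."""
    actions = {
        "walk_away": 0.0,                                            # spend nothing, gain nothing
        "aerial_survey": expected_net_value(p_deposit, 5.0, 1.0),    # cheap, modest upside
        "drill": expected_net_value(p_deposit, 50.0, 20.0),          # expensive, decisive
    }
    return max(actions, key=actions.get)

for belief in (0.05, 0.3, 0.7):
    print(f"belief {belief:.2f} -> {choose_next_action(belief)}")
```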

Image: KoBold Metals An AI visualization by KoBold Metals depicts a plot of predictions from the borehole electromagnetic model, with true values on the left and predictions on the right.

KoBold, which is backed by the Bill Gates-led Breakthrough Energy Ventures, is already exploring three sites in Australia, North America, and Sub-Saharan Africa. Field data collected this year will provide the first validations of the company’s predictions, House said.

As the startup searches for metals, IBM researchers are searching for solvents and other materials to reduce the use of battery ingredients such as cobalt and lithium.

Research teams are using AI techniques to identify and test solvents that offer higher safety and performance potential than current lithium-ion battery options. The project focuses on existing and commercially available materials that can be tested immediately. A related research effort, however, aims to create brand-new molecules entirely.

Using “generative models,” experts train AI to learn the molecular structure of known materials, as well as characteristics such as viscosity, melting point, or electronic conductivity.  

“For example, if we want a generative model to design new electrolyte materials for batteries— such as electrolyte solvents or appropriate monomers to form ion-conducting polymers—we should train the AI with known electrolyte material data,” Seiji Takeda and Young-hye Na of IBM Research said in an email. 

Once the AI training is completed, researchers can input a query such as “design a new molecular electrolyte material that meets the characteristics of X, Y, and Z,” they said. “And then the model designs a material candidate by referring to the structure-characteristics relation.”
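In spirit only, the sketch below shows a simpler generate-and-screen stand-in for the conditional generation the researchers describe: candidates proposed by a generator are scored by a property model trained on known materials and kept only if they meet the requested targets. The "molecules" here are bare feature vectors and both models are toys, not IBM's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "molecule" is just a 3-dimensional feature vector, and the
# trained property model is a fixed linear map. A real system would learn both
# the generator and the property model from known-materials data.
def predict_properties(x):
    return np.array([x @ np.array([0.5, 1.0, -0.2]),    # e.g. a viscosity proxy
                     x @ np.array([-0.3, 0.8, 1.1])])   # e.g. a conductivity proxy

def propose_candidate():
    return rng.normal(size=3)   # a real generative model proposes valid structures

def design(target, tol=0.2, max_tries=20_000):
    """Return the first candidate whose predicted properties are within tol of target."""
    for _ in range(max_tries):
        x = propose_candidate()
        if np.all(np.abs(predict_properties(x) - target) < tol):
            return x
    return None

print(design(target=np.array([0.3, 1.0])))
```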

IBM has already used this AI-boosted approach to create new molecules called photoacid generators that could eventually help produce more environmentally friendly computing devices. Researchers also designed polymer membranes that apparently absorb carbon dioxide better than membranes currently used in carbon capture technologies.

Designing a more sustainable battery “is our next challenge,” Takeda and Na said.

The development of biodegradable soft robotics requires an appropriate eco-friendly source of energy. Microbial fuel cells (MFCs) are suggested, as they can be designed completely from soft materials with little or no negative effect on the environment. Nonetheless, their responsiveness and functionality are not as strictly defined as in other conventional technologies, e.g. lithium batteries. Consequently, the use of artificial intelligence methods in their control techniques is highly recommended. A neural network, namely a nonlinear autoregressive network with exogenous inputs (NARX), was employed to predict the electrical output of an MFC, given its previous outputs and feeding volumes. Predicting MFC outputs as a time series thus enables accurate determination of the feeding intervals and quantities required for sustenance, which can be incorporated into the behavioural repertoire of a soft robot.
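To make the NARX setup concrete, here is a minimal sketch using scikit-learn's MLPRegressor on synthetic data; the lag depth, network size, and the fabricated voltage/feeding series are placeholders, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data: past MFC voltage readings and the feeding volume applied at
# each step. A real dataset would come from the cell itself.
T = 500
feed = rng.uniform(0.0, 2.0, size=T)            # exogenous input u(t), e.g. mL fed
volt = np.zeros(T)                               # output y(t), arbitrary units
for t in range(1, T):
    volt[t] = 0.8 * volt[t - 1] + 0.3 * feed[t - 1] + 0.02 * rng.normal()

LAGS = 3  # how many past outputs and inputs the network sees

def make_dataset(y, u, lags):
    X, target = [], []
    for t in range(lags, len(y)):
        X.append(np.concatenate([y[t - lags:t], u[t - lags:t]]))
        target.append(y[t])
    return np.array(X), np.array(target)

X, y_next = make_dataset(volt, feed, LAGS)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y_next[:400])
print("held-out R^2:", model.score(X[400:], y_next[400:]))
```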

In this paper, we present a generalized modeling tool for predicting the output force profile of vacuum-powered soft actuators using a simplified geometrical approach and the principle of virtual work. Previous work has derived analytical formulas to model the force-contraction profile of specific actuators. To enhance the versatility and the efficiency of the modeling process, we propose a generalized numerical algorithm based purely on geometrical inputs, which can be tailored to the desired actuator, to estimate its force-contraction profile quickly and for any combination of varying geometrical parameters. We identify a class of linearly contracting vacuum actuators that consists of a polymeric skin guided by a rigid skeleton and apply our model to two such actuators, vacuum bellows and Fluid-driven Origami-inspired Artificial Muscles, to demonstrate the versatility of our model. We perform experiments to validate that our model can predict the force profile of the actuators using its geometric principles, modularly combined with design-specific external adjustment factors. Our framework can be used as a versatile design tool that allows users to perform parametric studies and rapidly and efficiently tune actuator dimensions to produce a force-contraction profile to meet their needs, and as a pre-screening tool to obviate the need for multiple rounds of time-intensive actuator fabrication and testing.
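For intuition on the virtual-work step, the sketch below estimates contractile force as F(x) = -ΔP · dV/dx, the pressure difference times the rate at which enclosed volume changes with contraction; the bellows geometry is an invented example, not one of the paper's actuators:

```python
import numpy as np

# Principle of virtual work for a vacuum actuator: the force along the
# contraction direction is F(x) = -dP * dV/dx, where dP is the applied
# pressure difference and V(x) the enclosed volume at contraction x.

dP = 80e3  # applied vacuum, Pa (illustrative value)

def enclosed_volume(x):
    """Toy cylindrical bellows: internal volume (m^3) at contraction x (m).
    A real actuator would supply V(x) from its own geometry."""
    rest_length, radius = 0.10, 0.02
    return np.pi * radius**2 * (rest_length - x)

x = np.linspace(0.0, 0.05, 200)
force = -dP * np.gradient(enclosed_volume(x), x)   # positive = contractile (pulling)

print(f"peak contractile force ~ {force.max():.1f} N")
```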

In my experience, there are three different types of consumer drone pilots. You’ve got people for whom drones are a tool for taking pictures and video, where flying the drone is more or less just a necessary component of that. You’ve also got people who want a drone that can be used to take pictures or video of themselves, where they don’t want to be bothered flying the drone at all. Then you have people for whom flying the drone itself is the appealing part; people who like flying fast and creatively because it’s challenging, exciting, and fun. And that typically means flying in First Person View, or FPV, where it feels like you’re a tiny little human sitting inside of a virtual cockpit in your drone.

For that last group of folks, the barrier to entry is high. Or rather, the barriers are high, because there are several. Not only is the equipment expensive, you often have to build your own system comprising the drone, FPV goggles, and accompanying transmitter and receiver. And on top of that, it takes a lot of skill to fly an FPV drone well; all of the inevitable crashes just add to the expense.

Today, DJI is announcing a new consumer first-person view drone system that includes everything you need to get started. You get an expertly designed and fully integrated high-speed FPV drone, a pair of FPV goggles with exceptional image quality and latency that’s some of the best we’ve ever seen, plus a physical controller to make it all work. Most importantly, though, there’s on-board obstacle avoidance plus piloting assistance that means even a complete novice can be zipping around with safety and confidence on day one.

An FPV drone is one that you fly from a first-person viewpoint. The drone has a forward-facing camera that streams video to a pair of goggles in real time. This experience is a unique one, and there’s only so much that I can do to describe it, but it turns flying a drone into a much more personal, visceral, immersive thing. With an FPV drone, it feels much more like you are the drone.

The Drone Photos: DJI DJI’s FPV drone is basically a battery with a camera and some motors attached.

DJI’s FPV drone itself is a bit of a chonker, as far as drones go. It’s optimized for going very fast while giving you a good first-person view, and no concessions are given to portability. It weighs 800 g (of which 300 g is the battery), and doesn’t fold up even a little bit, although the props are easy to remove.

Photo: Evan Ackerman/IEEE Spectrum Efficient design, but not small or portable.

Top speed is a terrifying 140 km/h, albeit in a mode that you have to unlock (more on that later), and it’ll accelerate from a hover to 100 km/h in two seconds. Battery life maxes out at 20 minutes, but in practice you’ll get more like 10-15 minutes depending on how you fly. The camera on the front records in 4K at 60 FPS on an electronically stabilized tilt-only gimbal, and there’s a microSD card slot for local recording.

We’re delighted to report that the DJI FPV drone also includes some useful sensors that will make you significantly less likely to embed it in the nearest tree. These sensors include ground detection to keep the drone at a safe altitude, as well as forward-looking stereo-based obstacle detection that works well enough for the kinds of obstacles that cameras are able to see. 

The Goggles Photos: DJI You’ll look weird with these on, but they work very, very well.

What really makes this drone work are the FPV goggles along with the radio system that connects to the drone. The goggles have two tiny screens in them, right in front of your eyeballs. Each screen can display 1440 x 810p at up to 120 fps (which looks glorious), and it’ll do so while the drone is moving at 140 km/h hundreds of meters away. It’s extremely impressive, with quality that’s easily good enough to let you spot (and avoid) skinny little tree branches. But even more important than quality is latency— the amount of time it takes for the video to be captured by the drone, compressed, sent to the goggles, decompressed, and displayed. The longer this takes, the less you’re able to trust what you’re seeing, because you know that you’re looking at where the drone used to be rather than where it actually is. DJI’s FPV system has a latency that’s 28ms or better, which is near enough to real-time that it feels just like real-time, and you can fly with absolute confidence that the control inputs you’re giving and what you’re seeing through the goggles are matched up. 

Photo: Evan Ackerman/IEEE Spectrum The goggles are adjustable to fit most head sizes.

The goggles are also how you control all of the drone options, no phone necessary. You can attach your phone to the goggles with a USB-C cable for firmware updates, but otherwise, a little joystick and some buttons on the top of the goggles lead you to an intuitive interface for things like camera options, drone control options, and so on. A microSD card slot on the goggles lets you record the downlinked video, although it’s not going to be the same quality as what’s recorded on-board the drone. 

Don’t get your hopes up on comfort where the goggles are concerned. They’re fine, but that’s it. My favorite thing about them is that they don’t weigh much, because the battery that powers them has a cable so that you can keep it in your pocket rather than hanging it off the goggles themselves. Adjustable straps mean the goggles will fit most people, and there are inter-eye distance adjustments for differently shaped faces. My partner, who statistically is smaller than 90% of adult women, found that the goggle inter-eye distance adjustment was almost, but not quite, adequate for her. Fortunately, the goggles not being super comfortable isn’t a huge deal because you won’t be wearing them for that long, unless you invest in more (quite expensive) battery packs.

The Controller Photos: DJI All the controls you want, none that you don’t. Except one.

The last piece of the kit is the controller, which is fairly standard as far as drone controllers go. The sticks unscrew and stow in the handles, and you can also open up the back panels to adjust the stick tension, which experienced FPV pilots will probably want to do.

Before we get to how the drone flies, a quick word on safe operation— according to the FAA, if you’re flying an FPV drone, you’ll need a spotter who keeps the drone in view at all times. And while the range of the DJI FPV drone is (DJI claims) up to 10km, here in the United States you’re not allowed to fly it higher than 400ft AGL, or farther away than your spotter can see, without an FAA exemption. Also, as with all drones, you’ll need to find places that are both safe and legal to fly. The drone will prevent you from taking off in restricted areas that DJI knows about, but it’s on you to keep the drone from flying over people, or otherwise being dangerous and/or annoying.

In Flight— Normal Mode Photos: Evan Ackerman/IEEE Spectrum It's not the most graceful looking drone, but the way it flies makes you not care.

DJI has helpfully equipped the FPV drone with three flight modes: Normal, Sport, and Manual. These three modes are the primary reason why this drone is not a terrible idea for almost everyone— Normal mode is a lot of fun, and both safe and accessible for FPV novices. Specifically, Normal mode brings the top speed of the drone down to a still rather quick 50 km/h, and will significantly slow the drone if the front sensors think you’re likely to hit something. As with DJI’s other drones, if you start getting into trouble you can simply let go of the control sticks, and the drone will bring itself to a halt and hover. This makes it very beginner friendly, and (as an FPV beginner myself) I didn’t find it at all stressful to fly. With most drones (and especially drones that cost as much as this one does) fear of crashing is a tangible thing always sitting at the back of your mind. That feeling is not gone with the DJI FPV drone, but it’s reduced so much that the experience can really just be fun.

To be clear, not crashing into stuff is not enough to make FPV flying an enjoyable experience. Arguably the best feature of DJI’s FPV drone is how much help it gives you behind the scenes to make flying effortless.

When flying a conventional drone, you’ve got four axes of control to work with, allowing the drone to pitch, roll, yaw, move vertically up and down, or do any combination of those things. Generally, a drone won’t couple these axes for you in an intelligent way. For example, if you want to go left with a drone, you can either roll left, which will move the drone left without looking left, or yaw left, which will cause the drone to look left without actually moving. To gracefully fly a drone around obstacles at speed, you need to fuse both of these inputs together, which is a skill that can take a long time to master. I have certainly not mastered it.

The drone does exactly what you think it should do, and it works beautifully.

For most people, especially beginners, it’s much more intuitive for the drone to behave more like an airplane when you want it to turn. That is, when you push the left stick (traditionally the yaw control), you want the drone to begin to roll while also yawing in the same direction and increasing throttle to execute a lovely, sweeping turn. And this is exactly what DJI FPV does— thanks to a software option called coordinated turns that’s on by default, the drone does exactly what you think it should do, and it works beautifully.
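As a purely illustrative sketch (DJI has not published how its coordinated-turn assist is implemented, and the gains below are invented), the stick-to-command mixing might conceptually look like this:

```python
def coordinated_turn_mix(stick: float, base_throttle: float):
    """Map a single left-stick deflection (-1..1) into coupled roll, yaw, and
    throttle commands, the way a coordinated turn couples them.
    Gains are invented for illustration only."""
    K_ROLL, K_YAW, K_THROTTLE = 0.6, 0.4, 0.15
    roll_cmd = K_ROLL * stick                                # bank into the turn
    yaw_cmd = K_YAW * stick                                  # swing the nose along the turn
    throttle_cmd = base_throttle + K_THROTTLE * abs(stick)   # hold altitude while banked
    return roll_cmd, yaw_cmd, throttle_cmd

print(coordinated_turn_mix(stick=0.5, base_throttle=0.5))
```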

I could tell you how well this works for me, someone who has flown non-FPV drones for years, but my partner has flown a drone only one single time before, when she spent five minutes with the controller of my Parrot Anafi a few years ago and got it to go up and down and occasionally sideways a little bit. But within literally two minutes, she was doing graceful figure 8s with the DJI FPV drone. The combination of the FPV view and the built-in coordinated turns makes it just that intuitive.

In Flight— Sport Mode

Once you’re comfortable with Normal mode, Sport mode (which you can select at any time with a toggle switch on the controller) bumps up the speed of the drone from 50 km/h to 97 km/h. More importantly, the obstacle avoidance no longer slows the drone down for you, although it does give you escalating warnings when it thinks you’re going to hit something. As you get more comfortable with the drone, you’ll find that the obstacle avoidance tends to be on the paranoid side, which is as it should be. Once you’ve practiced a bit and you want to (say) fly between two trees that are close together, Sport mode will let you do that without slowing down.

Along with a higher top speed, Sport mode will also make the drone literally scream. When in flight it makes a loud screaming noise, especially when you ask a lot from the motors, like with rapid direction changes or gaining altitude at speed. This happens in Normal mode, but gets markedly louder in Sport mode. This doesn’t matter, really, except that if you’re flying anywhere near other people, they’re likely to find it obnoxious. 

I was surprised by how not-puking I was during high-speed FPV flight in Sport mode. I tend to suffer from motion sickness, but I had no trouble with the drone, as long as I kept my head still. Even a small head movement while the drone was in flight could lead to an immediate (although minor) wave of nausea, which passed as soon as I stopped moving. My head sometimes subconsciously moved along with the motion of the drone, to the point where after a few minutes of flying I’d realize that I’d ended up staring sideways at the sky like an idiot, so if you can just manage to keep your head mostly still and relaxed in a comfortable position, you’ll be fine. 

A word on speed— even though Sport mode has a max speed of only (only?) 97 km/h, coming from flying a Mavic Pro, it feels very, very fast. In tight turns, the video coming through the goggles sometimes looked like it was playing back at double speed. You could always ask for more speed, I suppose, and Manual mode gives it to you. But I could see myself being perfectly happy to fly in Sport mode for a long, long time, since it offers both speed and some additional safety.

The following video includes clips of Normal and Sport mode, to give you an idea of how smoothly the drone moves and how fast it can go, along with a comparison between the standard 1080p recorded by the drone and what gets shown in the goggles. As you watch the video, remember that I’m not an FPV pilot. I’ve never flown an FPV drone before, and what you’re seeing is the result of less than an hour of total flight time.

In Flight— Manual Mode

There is one final mode that the DJI FPV drone comes with: Manual mode. Manual mode is for pilots who don’t need or want any hand-holding. You get full control over all axes as well as an unrestricted top speed of 140 km/h. Manual mode must be deliberately enabled in menus in the goggles (it’s not an available option by default), and DJI suggests spending some time in their included simulator before doing so. I want to stress that Manual mode doesn’t just disable the coordinated turns function, making control of the drone more like a traditional camera drone— if that’s something you want, there’s an option to do that in Normal and Sport mode. Manual mode is designed for people with drone racing experience, and enabling it turns the FPV drone into a much different thing, as I found out.

My test of Manual mode ended approximately 15 seconds after it began due to high speed contact with the ground. I wouldn’t call what happened a “crash,” in the sense that I didn’t fly the drone into an obstacle— there was a misunderstanding or a lack of information or an accidental input or some combination of those things that led to the drone shutting all of its motors off at about 150 feet up and then falling to the ground.

Photo: Evan Ackerman/IEEE Spectrum The crash broke the drone’s two rear arms, but the expensive parts all seem perfectly fine.

I’d planned to just fly the drone in Manual mode a little bit, with plenty of altitude and over a big open space, primarily to get a sense of how much faster it is in Manual mode than in Normal mode. After taking off in Normal mode and giving the drone a lot of room, I switched over to Manual mode. Immediately, the drone began to move in a much less predictable way, and after about five seconds of some tentative control inputs to see if I could get a handle on it, I felt uncomfortable enough to want the drone to stop itself.

The first thing I did was stop giving any control inputs. In Normal and Sport mode, the drone will respond to no input (centered sticks) by bringing itself to a hover. This doesn’t happen in Manual mode, and the drone kept moving. The second thing I did was push what I thought was the emergency stop button, which would switch the drone back to Normal mode and engage a thoughtfully included emergency stop mode to bring it to a stable hover as quickly as possible. I hadn’t yet needed an emergency stop in Normal or Sport mode, since just taking my fingers off of the sticks worked just fine before. What I learned post-crash was that in Manual mode, the button that says “Stop” on it (which is one of the easiest to press buttons on the controller since your right index finger naturally rests there) gains a new emergency shut-off functionality that causes the drone to disable all of its motors, whereupon it will then follow a ballistic trajectory until the inevitable happens no matter how much additional button pushing or frantic pleading you do. 

I certainly take some responsibility for this. When the DJI FPV drone showed up, it included a quick start guide and a more detailed reviewer’s guide, but neither of those documents had detailed information about what all the buttons on the controller and headset did. This sometimes happens with review units— they can be missing manuals and stuff if they get sent to us before the consumer packaging is complete. Anyway, having previous experience with DJI’s drones, I just assumed that which buttons did what would just be obvious, which was 100% my bad, and it’s what led to the crash.

Also, I recognize why it’s important to have an emergency shut-off, and as far as I know, most (if not all) of DJI’s other consumer drones include some way of remotely disabling the motors. It should only be possible to do this deliberately, though, which is why their other drones require you to use a combination of inputs that you’re very unlikely to do by accident. Having what is basically a self-destruct button on the controller where you naturally rest a finger just seems like a bad idea— I pushed it on purpose thinking it did something different, but there are all kinds of reasons why a pilot might push it accidentally. And if you do, that’s it, your drone is going down. 

DJI, to their credit, was very understanding about the whole thing, but more importantly, they pointed out that accidental damage like this would be covered under DJI Care Refresh, which will completely replace the drone if necessary. This should give new pilots some peace of mind, if you’re willing to pay for the premium. Even if you’re not, the drone is designed to be at least somewhat end-user repairable.

Fundamentally, I’m glad Manual mode is there. DJI made the right choice by including it so that your skill won’t outgrow the capabilities of the drone. I just wish that the transition to Manual mode was more gradual, like if there was a Sport Plus mode that unlocked top speed while maintaining other flight assistance features. Even without that, FPV beginners really shouldn’t feel like Manual mode needs to be a goal— Normal mode is fun, and Sport mode is even more fun, with the added peace of mind that you’ve got options if things start to get out of your control. And if things do get out of your control in Manual mode, for heaven’s sake, push the right button.

I mean, the left button.

Is This The Right Drone for You? Photo: Evan Ackerman/IEEE Spectrum My partner, who is totally not interested in drones, actually had fun flying this one.

DJI’s FPV drone kit costs $1,299, which includes the drone, goggles, one battery, and all necessary chargers and cabling. Two more batteries and a charging hub, which you’ll almost certainly want, adds $299. This is a lot of money, even for a drone, so the thing to ask yourself is whether an FPV drone is really what you’re looking for. Yes, it’ll take good quality pictures and video, but if that’s what you’re after, DJI has lots of other drones that are cheaper and more portable and have some smarts to make them better camera platforms. And of course there’s the Skydio 2, which has some crazy obstacle avoidance and autonomy if you don’t want to have to worry about flying at all. I have a fantasy that one day, all of this will be combined into one single drone, but we’re not there yet. 

If you’re sure you want to get into FPV flying, DJI’s kit seems like a great option, with the recognition that this is an expensive, equipment-intensive sport. There are definitely ways of doing it for cheaper, but you’ll need to more or less build up the system yourself, and it seems unlikely that you’d end up with the same kind of reliable performance and software features that DJI’s system comes with. The big advantage of DJI’s FPV kit is that you can immediately get started with a system that works brilliantly out of the box, in a way that’s highly integrated, highly functional, and high performing, while being fantastic for beginners and leaving plenty of room to grow.

Haru is a social, affective robot designed to support a wide range of research into human–robot communication. This article analyses the design process for Haru beta, identifying how both visual and performing arts were an essential part of that process, contributing to ideas of Haru’s communication as a science and as an art. Initially, the article examines how a modified form of Design Thinking shaped the work of the interdisciplinary development team—including animators, performers and sketch artists working alongside roboticists—to frame Haru’s interaction style in line with sociopsychological and cybernetic–semiotic communication theory. From these perspectives on communication, the focus is on creating a robot that is persuasive and able to transmit precise information clearly. The article moves on to highlight two alternative perspectives on communication, based on phenomenological and sociocultural theories, from which such a robot can be further developed as a more flexible and dynamic communicative agent. The various theoretical perspectives introduced are brought together by considering communication across three elements: encounter, story and dance. Finally, the article explores the potential of Haru as a research platform for human–robot communication across various scenarios designed to investigate how to support long-term interactions between humans and robots in different contexts. In particular, it gives an overview of plans for humanities-based, qualitative research with Haru.

Photo: CIA Museum

CIA roboticists designed Catfish Charlie to take water samples undetected. Why they wanted a spy fish for such a purpose remains classified.

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself in the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot

The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to do so. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

Image: Thomas Wells/Applied Physics Laboratory/University of Washington

An oil painting commemorates SPURV, a series of underwater research robots built by the University of Washington’s Applied Physics Lab. In nearly 400 deployments, no SPURVs were lost.

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the Applied Physics Laboratory beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded to magnetic tape and later transferred to a photosensitive paper strip recorder or other computer-compatible media and then plotted using an IBM 1130.

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, eventually completing nearly 400 missions by the time it ended in 1979.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi

Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Noting that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman wrote, “Part of me is convinced that roboticists take on projects like these...because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II achieved a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.
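As a quick sanity check on those figures, the two reported speeds imply a robot body length of roughly 0.32 meters. The short Python sketch below makes the arithmetic explicit; the variable names and the assumption that "body length" refers to the robot's own length are mine, not Clapham's.

```python
# Quick sanity check on the speed figures quoted above (assumed interpretation).
body_lengths_per_s = 11.6   # iSplash-II's reported maximum speed, in body lengths per second
speed_m_per_s = 3.7         # reported absolute speed, in meters per second

# Implied body length of the robot, in meters
body_length_m = speed_m_per_s / body_lengths_per_s
print(f"Implied body length: {body_length_m:.2f} m")    # ~0.32 m

# A real carp of the same length swimming at its average top speed
carp_speed = 10 * body_length_m
print(f"Equivalent carp speed: {carp_speed:.1f} m/s")   # ~3.2 m/s
```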

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

Photo: Nicole Graf

This robotic crab created by Case Western’s Kathryn Daltorio imitates how real crabs grab the sand to avoid being toppled by waves.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often only known by its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

Shiny robotic cat toy blimp!

I am pretty sure this is Google Translate getting things wrong, but the About page mentions that the blimp will “take you to your destination after appearing in the death of God.”

[ NTT DoCoMo ] via [ RobotStart ]

If you have yet to see this real-time video of Perseverance landing on Mars, drop everything and watch it.

During the press conference, someone commented that this is the first time anyone on the team who designed and built this system has ever seen it in operation, since it could only be tested at the component scale on Earth. This landing system has blown my mind since Curiosity.

Here's a better look at where Percy ended up:

[ NASA ]

The fact that Digit can just walk up and down wet, slippery, muddy hills without breaking a sweat is (still) astonishing.

[ Agility Robotics ]

SkyMul wants drones to take over the task of tying rebar, which looks like just the sort of thing we'd rather robots be doing so that we don't have to:

The tech certainly looks promising, and SkyMul says that they're looking for some additional support to bring things to the pilot stage.

[ SkyMul ]

Thanks Eohan!

Flatcat is a pet-like, playful robot that reacts to touch. Flatcat feels everything exactly: Cuddle with it, romp around with it, or just watch it do weird things of its own accord. We are sure that flatcat will amaze you, like us, and caress your soul.

I don't totally understand it, but I want it anyway.

[ Flatcat ]

Thanks Oswald!

This is how I would have a romantic dinner date if I couldn't get together in person. Herman the UR3 and an OptiTrack system let me remotely make a romantic meal!

[ Dave's Armoury ]

Here, we propose a novel design of deformable propellers inspired by dragonfly wings. The structure of these propellers includes a flexible segment similar to the nodus on a dragonfly wing. This flexible segment can bend, twist and even fold upon collision, absorbing force upon impact and protecting the propeller from damage.

[ Paper ]

Thanks Van!

In the 1970s, the CIA created the world's first miniaturized unmanned aerial vehicle, or UAV, which was intended to be a clandestine listening device. The Insectothopter was never deployed operationally, but was still revolutionary for its time.

It may never have been deployed (not that they'll admit to, anyway), but it was definitely operational and could fly controllably.

[ CIA ]

Research labs are starting to get Digits, which means we're going to get a much better idea of what its limitations are.

[ Ohio State ]

This video shows the latest achievements for LOLA walking on undetected uneven terrain. The robot is technically blind, not using any camera-based or prior information on the terrain.

[ TUM ]

We define "robotic contact juggling" to be the purposeful control of the motion of a three-dimensional smooth object as it rolls freely on a motion-controlled robot manipulator, or “hand.” While specific examples of robotic contact juggling have been studied before, in this paper we provide the first general formulation and solution method for the case of an arbitrary smooth object in single-point rolling contact on an arbitrary smooth hand.

[ Paper ]

Thanks Fan!

A couple of new cobots from ABB, designed to work safely around humans.

[ ABB ]

Thanks Fan!

It's worth watching at least a little bit of Adam Savage testing Spot's new arm, because we get to see Spot try, fail, and eventually succeed at an autonomous door-opening behavior at the 10 minute mark.

[ Tested ]

SVR discusses diversity with guest speakers Dr. Michelle Johnson from the GRASP Lab at UPenn; Dr Ariel Anders from Women in Robotics and first technical hire at Robust.ai; Alka Roy from The Responsible Innovation Project; and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics. The discussion here is moderated by Dr. Ken Goldberg—artist, roboticist and Director of the CITRIS People and Robots Lab—and Andra Keay from Silicon Valley Robotics.

[ SVR ]

RAS presents a Soft Robotics Debate on Bioinspired vs. Biohybrid Design.

In this debate, we will bring together experts in Bioinspiration and Biohybrid design to discuss the necessary steps to make more competent soft robots. We will try to answer whether bioinspired research should focus more on developing new bioinspired material and structures or on the integration of living and artificial structures in biohybrid designs.

[ RAS SoRo ]

IFRR presents a Colloquium on Human Robot Interaction.

Across many application domains, robots are expected to work in human environments, side by side with people. The users will vary substantially in background, training, physical and cognitive abilities, and readiness to adopt technology. Robotic products are expected to not only be intuitive, easy to use, and responsive to the needs and states of their users, but they must also be designed with these differences in mind, making human-robot interaction (HRI) a key area of research.

[ IFRR ]

Vijay Kumar, Nemirovsky Family Dean and Professor at Penn Engineering, gives an introduction to ENIAC day and David Patterson, Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley, speaks about the legacy of the ENIAC and its impact on computer architecture today. This video is comprised of lectures one and two of nine total lectures in the ENIAC Day series.

There are more interesting ENIAC videos at the link below, but we'll highlight this particular one, about the women of the ENIAC, also known as the First Programmers.

[ ENIAC Day ]

Now that DeepMind has taught AI to master the game of Go—and furthered its advantage in chess—they’ve turned their attention to another board game: Diplomacy. Unlike Go, it is seven-player, it requires a combination of competition and cooperation, and on each turn players make moves simultaneously, so they must reason about what others are reasoning about them, and so on.

“It’s a qualitatively different problem from something like Go or chess,” says Andrea Tacchetti, a computer scientist at DeepMind. In December, Tacchetti and collaborators presented a paper at the NeurIPS conference on their system, which advances the state of the art, and may point the way toward AI systems with real-world diplomatic skills—in negotiating with strategic or commercial partners or simply scheduling your next team meeting. 

Diplomacy is a strategy game played on a map of Europe divided into 75 provinces. Players build and mobilize military units to occupy provinces until someone controls a majority of supply centers. Each turn, players write down their moves, which are then executed simultaneously. They can attack or defend against opposing players’ units, or support opposing players’ attacks and defenses, building alliances. In the full version, players can negotiate. DeepMind tackled the simpler No-Press Diplomacy, devoid of explicit communication. 

Historically, AI has played Diplomacy using hand-crafted strategies. In 2019, the Montreal research institute Mila beat the field with a system using deep learning. They trained a neural network they called DipNet to imitate humans, based on a dataset of 150,000 human games. DeepMind started with a version of DipNet and refined it using reinforcement learning, a kind of trial-and-error. 

Exploring the space of possibility purely through trial and error would pose problems, though. They calculated that a 20-move game can be played nearly 10^868 ways—yes, that’s a 1 followed by 868 zeroes.

So they tweaked their reinforcement-learning algorithm. During training, on each move, they sample likely moves of opponents, calculate the move that works best on average across these scenarios, then train their net to prefer this move. After training, it skips the sampling and just works from what its learning has taught it. “The message of our paper is: we can make reinforcement learning work in such an environment,” Tacchetti says. One of their AI players versus six DipNets won 30 percent of the time (with 14 percent being chance). One DipNet against seven of theirs won only 3 percent of the time.
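To make the idea concrete, here is a minimal sketch of that sampled best-response update. The helpers sample_opponent_profiles, evaluate, and policy_net.reinforce are hypothetical placeholders for the pieces DeepMind's paper describes; this illustrates the idea rather than reproducing their implementation.

```python
def sampled_best_response_update(policy_net, state, candidate_moves,
                                 sample_opponent_profiles, evaluate, n_samples=16):
    """One training step in the spirit of the approach described above (a sketch,
    not DeepMind's code): sample plausible opponent moves, find the candidate
    move that scores best on average, and nudge the policy toward it."""
    # Sample joint opponent moves from the current (imitation-initialized) policy.
    opponent_samples = [sample_opponent_profiles(policy_net, state)
                        for _ in range(n_samples)]

    # Average value of each of our candidate moves across the sampled scenarios.
    def avg_value(move):
        return sum(evaluate(state, move, opp) for opp in opponent_samples) / n_samples

    best_move = max(candidate_moves, key=avg_value)

    # Supervised-style update: increase the probability of the best move.
    policy_net.reinforce(state, best_move)
    return best_move
```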

In April, Facebook will present a paper at the ICLR conference describing their own work on No-Press Diplomacy. They also built on a human-imitating network similar to DipNet. But instead of adding reinforcement learning, they added search—the techniques of taking extra time to plan ahead and reason about what every player is likely to do next. On each turn, SearchBot computes an equilibrium, a strategy for each player that the player can’t improve by switching only its own strategy. To do this, SearchBot evaluates each potential strategy for a player by playing the game out a few turns (assuming everyone chooses subsequent moves based on the net’s top choice). A strategy consists not of a single best move but a set of probabilities across 50 likely moves (suggested by the net), to avoid being too predictable to opponents. 
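The sketch below illustrates that kind of search, using a simple regret-matching loop as one way of approximating such an equilibrium and returning a probability distribution over candidate moves rather than a single best move. The helper rollout_value and the other names are placeholders of my own; the actual SearchBot algorithm is more involved than this.

```python
def search_strategy(net, state, my_candidates, opponent_strategies,
                    rollout_value, iterations=100):
    """Sketch of equilibrium-style search over ~50 candidate moves (not
    Facebook's implementation). Output is a probability distribution over
    candidate moves, so the bot stays hard to predict."""
    regrets = {move: 0.0 for move in my_candidates}

    for _ in range(iterations):
        # Current strategy: probabilities proportional to positive regret.
        positive = {m: max(r, 0.0) for m, r in regrets.items()}
        total = sum(positive.values())
        strategy = ({m: p / total for m, p in positive.items()} if total > 0
                    else {m: 1.0 / len(my_candidates) for m in my_candidates})

        # Value of each candidate: play the game out a few turns, with players
        # assumed to follow the net's top choices thereafter (per the article).
        values = {m: rollout_value(net, state, m, opponent_strategies)
                  for m in my_candidates}
        expected = sum(strategy[m] * values[m] for m in my_candidates)

        # Regret: how much better each move would have done than the current mix.
        for m in my_candidates:
            regrets[m] += values[m] - expected

    return strategy  # mapping: move -> probability
```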

Conducting such exploration during a real game slows SearchBot down, but allows it to beat DipNet by an even greater margin than DeepMind’s system does. SearchBot also played anonymously against humans on a Diplomacy website and ranked in the top 2 percent of players. “This is the first bot that’s demonstrated to be competitive with humans,” says Adam Lerer, a computer scientist at Facebook and paper co-author.

“I think the most important point is that search is often underestimated,” Lerer says. One of his Facebook collaborators, Noam Brown, implemented search in a superhuman poker bot. Brown says the most surprising finding was that their method could find equilibria, a computationally difficult task.

“I was really happy when I saw their paper,” Tacchetti says, “because of just how different their ideas were to ours, which means that there’s so much stuff that we can try still.” Lerer sees a future in combining reinforcement learning and search, which worked well for DeepMind’s AlphaGo.

Both teams found that their systems were not easily exploitable. Facebook, for example, invited two top human players to each play 35 straight games against SearchBot, probing for weaknesses. The humans won only 6 percent of the time. Both groups also found that their systems didn’t just compete, but also cooperated, sometimes supporting opponents. “They get that in order to win, they have to work with others,” says Yoram Bachrach, from the DeepMind team.

That’s important, Bachrach, Lerer, and Tacchetti say, because games that combine competition and cooperation are much more realistic than purely competitive games like Go. Mixed motives occur in all realms of life: driving in traffic, negotiating contracts, and arranging times to Zoom. 

How close are we to AI that can play Diplomacy with “press,” negotiating all the while using natural language?

“For Press Diplomacy, as well as other settings that mix cooperation and competition, you need progress,” Bachrach says, “in terms of theory of mind, how they can communicate with others about their preferences or goals or plans. And, one step further, you can look at the institutions of multiple agents that human society has. All of this work is super exciting, but these are early days.”

The human ability of keeping balance during various locomotion tasks is attributed to our capability of withstanding complex interactions with the environment and coordinating whole-body movements. Despite this, several stability analysis methods are limited by the use of overly simplified biped and foot structures and corresponding contact models. As a result, existing stability criteria tend to be overly restrictive and do not represent the full balance capabilities of complex biped systems. The proposed methodology allows for the characterization of the balance capabilities of general biped models (ranging from reduced-order to whole-body) with segmented feet. Limits of dynamic balance are evaluated by the Boundary of Balance (BoB) and the associated novel balance indicators, both formulated in the Center of Mass (COM) state space. Intermittent heel, flat, and toe contacts are enabled by a contact model that maps discrete contact modes into corresponding center of pressure constraints. For demonstration purposes, the BoB and balance indicators are evaluated for a whole-body biped model with segmented feet representative of the human-like standing posture in the sagittal plane. The BoB is numerically constructed as the set of maximum allowable COM perturbations that the biped can sustain along a prescribed direction. For each point of the BoB, a constrained trajectory optimization algorithm generates the biped’s whole-body trajectory as it recovers from extreme COM velocity perturbations in the anterior–posterior direction. Balance capabilities for the cases of flat and segmented feet are compared, demonstrating the functional role the foot model plays in the limits of postural balance. The state-space evaluation of the BoB and balance indicators allows for a direct comparison between the proposed balance benchmark and existing stability criteria based on reduced-order models [e.g., Linear Inverted Pendulum (LIP)] and their associated stability metrics [e.g., Margin of Stability (MOS)]. The proposed characterization of balance capabilities provides an important benchmarking framework for the stability of general biped/foot systems.
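For readers unfamiliar with the reduced-order criteria mentioned at the end, the snippet below sketches the Margin of Stability for a Linear Inverted Pendulum in the anterior–posterior direction, using the standard extrapolated-center-of-mass formulation. The example numbers are illustrative assumptions, not values from the paper.

```python
import math

def margin_of_stability(com_pos, com_vel, com_height, bos_front_edge, g=9.81):
    """Margin of Stability (MOS) for the Linear Inverted Pendulum (LIP) model,
    one of the reduced-order criteria the abstract compares against.
    Sagittal (anterior-posterior) direction only."""
    omega0 = math.sqrt(g / com_height)      # LIP natural frequency
    xcom = com_pos + com_vel / omega0       # extrapolated center of mass
    return bos_front_edge - xcom            # > 0 means dynamically balanced

# Example: standing posture with a small forward COM velocity perturbation (assumed numbers)
print(margin_of_stability(com_pos=0.0, com_vel=0.3,
                          com_height=1.0, bos_front_edge=0.15))  # ~0.05 m
```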

Sensory feedback is essential for the control of soft robotic systems and to enable deployment in a variety of different tasks. Proprioception refers to sensing the robot’s own state and is of crucial importance in order to deploy soft robotic systems outside of laboratory environments, i.e. where no external sensing, such as motion capture systems, is available. A vision-based sensing approach for a soft robotic arm made from fabric is presented, leveraging the high-resolution sensory feedback provided by cameras. No mechanical interaction between the sensor and the soft structure is required and consequently the compliance of the soft system is preserved. The integration of a camera into an inflatable, fabric-based bellow actuator is discussed. Three actuators, each featuring an integrated camera, are used to control the spherical robotic arm and simultaneously provide sensory feedback of the two rotational degrees of freedom. A convolutional neural network architecture predicts the two angles describing the robot’s orientation from the camera images. Ground truth data is provided by a motion capture system during the training phase of the supervised learning approach and its evaluation thereafter. The camera-based sensing approach is able to provide estimates of the orientation in real-time with an accuracy of about one degree. The reliability of the sensing approach is demonstrated by using the sensory feedback to control the orientation of the robotic arm in closed-loop.
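As a rough illustration of that pipeline, here is a minimal PyTorch-style regressor mapping a camera image to the two orientation angles. The architecture is a placeholder of my own, not the network described in the paper; training would regress against motion-capture angles as ground truth, as the abstract describes.

```python
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Minimal CNN regressor in the spirit of the approach above (hypothetical
    architecture): maps an in-actuator camera image to two orientation angles."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # two rotational degrees of freedom

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Training sketch: regress predicted angles against motion-capture ground truth, e.g.
# loss = nn.MSELoss()(OrientationNet()(images), mocap_angles)
```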

Researchers continue to devise creative ways to explore the extent to which people perceive robots as social agents, as opposed to objects. One such approach involves asking participants to inflict ‘harm’ on a robot. Researchers are interested in the length of time between the experimenter issuing the instruction and the participant complying, and propose that relatively long periods of hesitation might reflect empathy for the robot, and perhaps even attribution of human-like qualities, such as agency and sentience. In a recent experiment, we adapted the so-called ‘hesitance to hit’ paradigm, in which participants were instructed to hit a humanoid robot on the head with a mallet. After standing up to do so (signaling intent to hit the robot), participants were stopped, and then took part in a semi-structured interview to probe their thoughts and feelings during the period of hesitation. Thematic analysis of the responses indicate that hesitation not only reflects perceived socialness, but also other factors including (but not limited to) concerns about cost, mallet disbelief, processing of the task instruction, and the influence of authority. The open-ended, free responses participants provided also offer rich insights into individual differences with regards to anthropomorphism, perceived power imbalances, and feelings of connection toward the robot. In addition to aiding understanding of this measurement technique and related topics regarding socialness attribution to robots, we argue that greater use of open questions can lead to exciting new research questions and interdisciplinary collaborations in the domain of social robotics.

Many robot exploration algorithms that are used to explore office, home, or outdoor environments, rely on the concept of frontier cells. Frontier cells define the border between known and unknown space. Frontier-based exploration is the process of repeatedly detecting frontiers and moving towards them, until there are no more frontiers and therefore no more unknown regions. The faster frontier cells can be detected, the more efficient exploration becomes. This paper proposes several algorithms for detecting frontiers. The first is called Naïve Active Area (NaïveAA) frontier detection and achieves frontier detection in constant time by only evaluating the cells in the active area defined by scans taken. The second algorithm is called Expanding-Wavefront Frontier Detection (EWFD) and uses frontiers from the previous timestep as a starting point for searching for frontiers in newly discovered space. The third approach is called Frontier-Tracing Frontier Detection (FTFD) and also uses the frontiers from the previous timestep as well as the endpoints of the scan, to determine the frontiers at the current timestep. Algorithms are compared to state-of-the-art algorithms such as Naïve, WFD, and WFD-INC. NaïveAA is shown to operate in constant time and therefore is suitable as a basic benchmark for frontier detection algorithms. EWFD and FTFD are found to be significantly faster than other algorithms.
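For context, the naive baseline these methods improve on can be stated in a few lines: scan every cell and mark the known-free cells that border unknown space. The sketch below shows that baseline on a NumPy occupancy grid (the cell encodings and names are assumptions of mine); the paper's NaïveAA, EWFD, and FTFD variants get their speedups by restricting this search to the active area, the previous frontiers, or the scan endpoints.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def naive_frontier_cells(grid):
    """Baseline frontier detection on an occupancy grid, for reference only.
    A frontier cell is a known-free cell with at least one unknown neighbor."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 3x3 neighborhood, clipped at the grid edges
            neighbors = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbors == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

# Example: a 4x4 map with a free region bordering unexplored space
grid = np.array([[0, 0, -1, -1],
                 [0, 1, -1, -1],
                 [0, 0,  0, -1],
                 [1, 1,  0,  0]])
print(naive_frontier_cells(grid))
```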

Flexible endoscopy involves the insertion of a long narrow flexible tube into the body for diagnostic and therapeutic procedures. In the gastrointestinal (GI) tract, flexible endoscopy plays a major role in cancer screening, surveillance, and treatment programs. As a result of gas insufflation during the procedure, both upper and lower GI endoscopy procedures have been classified as aerosol generating by the guidelines issued by the respective societies during the COVID-19 pandemic—although no quantifiable data on aerosol generation currently exists. Due to the risk of COVID-19 transmission to healthcare workers, most societies halted non-emergency and diagnostic procedures during the lockdown. The long-term implications of stoppage in cancer diagnoses and treatment is predicted to lead to a large increase in preventable deaths. Robotics may play a major role in this field by allowing healthcare operators to control the flexible endoscope from a safe distance and pave a path for protecting healthcare workers through minimizing the risk of virus transmission without reducing diagnostic and therapeutic capacities. This review focuses on the needs and challenges associated with the design of robotic flexible endoscopes for use during a pandemic. The authors propose that a few minor changes to existing platforms or considerations for platforms in development could lead to significant benefits for use during infection control scenarios.

Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.

Whether it's because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?

Roboticists understand that robots, fundamentally, are tools. We build them, we program them, and even the autonomous ones are just following the instructions that we’ve coded into them. However, that same appearance of agency that makes robots so compelling means that it may not be clear to people without much experience with or exposure to real robots that a robot itself isn’t inherently good or bad—rather, as a tool, a robot is a reflection of its designers and users.

This can put robotics companies into a difficult position. When they sell a robot to someone, that person can, hypothetically, use the robot in any way they want. Of course, this is the case with every tool, but it’s the autonomous aspect that makes robots unique. I would argue that autonomy brings with it an implied association between a robot and its maker, or in this case, the company that develops and sells it. I’m not saying that this association is necessarily a reasonable one, but I think that it exists, even if that robot has been sold to someone else who has assumed full control over everything it does.

“All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon”  —Robert Playter, Boston Dynamics

Robotics companies are certainly aware of this, because many of them are very careful about who they sell their robots to, and very explicit about what they want their robots to be doing. But once a robot is out in the wild, as it were, how far should that responsibility extend? And realistically, how far can it extend? Should robotics companies be held accountable for what their robots do in the world, or should we accept that once a robot is sold to someone else, responsibility is transferred as well? And what can be done if a robot is being used in an irresponsible or unethical way that could have a negative impact on the robotics community?

For perspective on this, we contacted folks from three different robotics companies, each of which has experience selling distinctive mobile robots to commercial end users. We asked them the same five questions about the responsibility that robotics companies have regarding the robots that they sell, and here’s what they had to say:

Do you have any restrictions on what people can do with your robots? If so, what are they, and if not, why not?

Péter Fankhauser, CEO, ANYbotics:

We closely work together with our customers to make sure that our solution provides the right approach for their problem. Thereby, the target use case is clear from the beginning and we do not work with customers interested in using our robot ANYmal outside the intended target applications. Specifically, we strictly exclude any military or weaponized uses and since the foundation of ANYbotics it is close to our heart to make human work easier, safer, and more enjoyable.

Robert Playter, CEO, Boston Dynamics:

Yes, we have restrictions on what people can do with our robots, which are outlined in our Terms and Conditions of Sale. All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon. Spot, just like any product, must be used in compliance with the law. 

Ryan Gariepy, CTO, Clearpath Robotics:

We do have strict restrictions and KYC processes which are based primarily on Canadian export control regulations. They depend on the type of equipment sold as well as where it is going. More generally, we also will not sell or support a robot if we know that it will create an uncontrolled safety hazard or if we have reason to believe that the buyer is unqualified to use the product. And, as always, we do not support using our products for the development of fully autonomous weapons systems.

More broadly, if you sell someone a robot, why should they be restricted in what they can do with it?

Péter Fankhauser, ANYbotics: We see the robot less as a simple object but more as an artificial workforce. This implies to us that the usage is closely coupled with the transfer of the robot and both the customer and the provider agree what the robot is expected to do. This approach is supported by what we hear from our customers with an increasing interest to pay for the robots as a service or per use.

Robert Playter, Boston Dynamics: We’re offering a product for sale. We’re going to do the best we can to stop bad actors from using our technology for harm, but we don’t have the control to regulate every use. That said, we believe that our business will be best served if our technology is used for peaceful purposes—to work alongside people as trusted assistants and remove them from harm’s way. We do not want to see our technology used to cause harm or promote violence. Our restrictions are similar to those of other manufacturers or technology companies that take steps to reduce or eliminate the violent or unlawful use of their products. 

Ryan Gariepy, Clearpath Robotics: Assuming the organization doing the restricting is a private organization and the robot and its software is sold vs. leased or “managed,” there aren't strong legal reasons to restrict use. That being said, the manufacturer likewise has no obligation to continue supporting that specific robot or customer going forward. However, given that we are only at the very edge of how robots will reshape a great deal of society, it is in the best interest for the manufacturer and user to be honest with each other about their respective goals. Right now, you're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.

“If a robot is being used in a way that is irresponsible due to safety: intervene! If it’s unethical: speak up!” —Péter Fankhauser, ANYbotics

What can you realistically do to make sure that people who buy your robots use them in the ways that you intend?

Péter Fankhauser, ANYbotics: We maintain a close collaboration with our customers to ensure their success with our solution. So for us, we have refrained from technical solutions to block unintended use.

Robert Playter, Boston Dynamics: We vet our customers to make sure that their desired applications are things that Spot can support, and are in alignment with our Terms and Conditions of Sale. We’ve turned away customers whose applications aren’t a good match with our technology. If customers misuse our technology, we’re clear in our Terms of Sale that their violations may void our warranty and prevent their robots from being updated, serviced, repaired, or replaced. We may also repossess robots that are not purchased, but leased. Finally, we will refuse future sales to customers that violate our Terms of Sale.

Ryan Gariepy, Clearpath Robotics: We typically work with our clients ahead of the purchase to make sure their expectations match reality, in particular on aspects like safety, supervisory requirements, and usability. It's far worse to sell a robot that'll sit on a shelf or, worse, cause harm, than to not sell a robot at all, so we prefer to reduce the risk of this situation in advance of receiving an order or shipping a robot.

How do you evaluate the merit of edge cases, for example if someone wants to use your robot in research or art that may push the boundaries of what you personally think is responsible or ethical?

Péter Fankhauser, ANYbotics: It’s about the dialog, understanding, and figuring out alternatives that work for all involved parties and the earlier you can have this dialog the better.

Robert Playter, Boston Dynamics: There’s a clear line between exploring robots in research and art, and using the robot for violent or illegal purposes. 

Ryan Gariepy, Clearpath Robotics: We have sold thousands of robots to hundreds of clients, and I do not recall the last situation that was not covered by a combination of export control and a general evaluation of the client's goals and expectations. I'm sure this will change as robots continue to drop in price and increase in flexibility and usability.

“You're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.” —Ryan Gariepy, Clearpath Robotics

What should roboticists do if we see a robot being used in a way that we feel is unethical or irresponsible?

Péter Fankhauser, ANYbotics: If it’s irresponsible due to safety: intervene! If it’s unethical: speak up!

Robert Playter, Boston Dynamics: We want robots to be beneficial for humanity, which includes the notion of not causing harm. As an industry, we think robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.

Ryan Gariepy, Clearpath Robotics: On a one off basis, they should speak to a combination of the user, the supplier or suppliers, the media, and, if safety is an immediate concern, regulatory or government agencies. If the situation in question risks becoming commonplace and is not being taken seriously, they should speak up more generally in appropriate forums—conferences, industry groups, standards bodies, and the like.

As more and more robots representing different capabilities become commercially available, these issues are likely to come up more frequently. The three companies we talked to certainly don’t represent every viewpoint, and we did reach out to other companies who declined to comment. But I would think (I would hope?) that everyone in the robotics community can agree that robots should be used in a way that makes people’s lives better. What “better” means in the context of art and research and even robots in the military may not always be easy to define, and inevitably there’ll be disagreement as to what is ethical and responsible, and what isn’t.

We’ll keep on talking about it, though, and do our best to help the robotics community to continue growing and evolving in a positive way. Let us know what you think in the comments.
