IEEE Spectrum Robotics


iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.

If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.

iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.

The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
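To make the distinction concrete, here’s a minimal sketch (in Python) of the kind of gyro-plus-wheel-odometry dead reckoning described above. The function names, fusion scheme, and numbers are our own illustration, not iRobot’s implementation:

```python
import math

def dead_reckon_step(pose, left_dist, right_dist, gyro_heading):
    """Advance an (x, y, theta) pose estimate by one odometry update.

    left_dist, right_dist: wheel travel since the last update (meters).
    gyro_heading: absolute heading reported by the gyro (radians).
    Names and fusion scheme are illustrative only.
    """
    x, y, _ = pose
    forward = (left_dist + right_dist) / 2.0   # distance traveled by the robot's center
    theta = gyro_heading                       # trust the gyro for heading
    x += forward * math.cos(theta)
    y += forward * math.sin(theta)
    return (x, y, theta)

# Without an absolute reference (a camera landmark or the dock beacon), small
# odometry errors accumulate over a run, which is why the map can't persist.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon_step(pose, 0.05, 0.05, 0.0)   # roll forward 5 cm
print(pose)  # (0.05, 0.0, 0.0)
```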

According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.

iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can for sure identify is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.

Photo: iRobot The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.

If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)

The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want to be a part of what iRobot is working on next, the i3 might end up holding you back.


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Rongzhong Li, who is responsible for the adorable robotic cat Nybble, has an updated and even more adorable quadruped that's more robust and agile but only costs around US $200 in kit form on Kickstarter.

Looks like the early bird options are sold out, but a full kit is a $225 pledge, for delivery in December.

[ Kickstarter ]

Thanks Rz!

I still maintain that Stickybot was one of the most elegantly designed robots ever.

[ Stanford ]

With the unpredictable health crisis of COVID-19 continuing to place high demands on hospitals, PAL Robotics have successfully completed testing of their delivery robots in Barcelona hospitals this summer. The TIAGo Delivery and TIAGo Conveyor robots were deployed in Hospital Municipal of Badalona and Hospital Clínic Barcelona following a winning proposal submitted to the European DIH-Hero project. Accerion sensors were integrated onto the TIAGo Delivery Robot and TIAGo Conveyor Robot for use in this project.

[ PAL Robotics ]

Energy Robotics, a leading developer of software solutions for mobile robots used in industrial applications, announced that its remote sensing and inspection solution for Boston Dynamics’s agile mobile robot Spot was successfully deployed at Merck’s thermal exhaust treatment plant at its headquarters in Darmstadt, Germany. Energy Robotics equipped Spot with sensor technology and remote supervision functions to support the inspection mission.

Combining Boston Dynamics’ intuitive controls, robotic intelligence and open interface with Energy Robotics’ control and autonomy software, user interface and encrypted cloud connection, Spot can be taught to autonomously perform a specific inspection round while being supervised remotely from anywhere with internet connectivity. Multiple cameras and industrial sensors enable the robot to find its way around while recording and transmitting information about the facility’s onsite equipment operations.

Spot reads the displays of gauges in its immediate vicinity and can also zoom in on distant objects using an externally-mounted optical zoom lens. In the thermal exhaust treatment facility, for instance, it monitors cooling water levels and notes whether condensation water has accumulated. Outside the facility, Spot monitors pipe bridges for anomalies.

Among the robot’s many abilities, it can detect defects of wires or the temperature of pump components using thermal imaging. The robot was put through its paces on a comprehensive course that tested its ability to handle special challenges such as climbing stairs, scaling embankments and walking over grating.

[ Energy Robotics ]

Thanks Stefan!

Boston Dynamics really should give Dr. Guero an Atlas just to see what he can do with it.

[ DrGuero ]

World's First Socially Distanced Birthday Party: Located in London, the robotic arm was piloted in real time to light the candles on the cake by the founder of Extend Robotics, Chang Liu, who was seated 50 miles away in Reading. Other team members in Manchester and Reading were also able to join in the celebration as the robot was used to accurately light the candles on the birthday cake.

[ Extend Robotics ]

The Robocon in-person competition was canceled this year, but check out Tokyo University's robots in action:

[ Robocon ]

Sphero has managed to pack an entire Sphero into a much smaller sphere.

[ Sphero ]

Squishy Robotics, a small business funded by the National Science Foundation (NSF), is developing mobile sensor robots for use in disaster rescue, remote monitoring, and space exploration. The shape-shifting, mobile sensor robots from UC-Berkeley spin-off Squishy Robotics can be dropped from airplanes or drones and can provide first responders with ground-based situational awareness during fires, hazardous materials (HazMat) release, and natural and man-made disasters.

[ Squishy Robotics ]

Meet Jasper, the small girl with big dreams to FLY. Created by UTS Animal Logic Academy in partnership with the Royal Australian Air Force to encourage girls to soar above the clouds. Jasper was created using a hybrid of traditional animation techniques and technology such as robotics and 3D printing. A KUKA QUANTEC robot is used during filmmaking to help the Royal Australian Air Force tell their story in a unique way. UTS adapted their high-accuracy robot to film consistent paths, creating a video with physical sets and digital characters.

[ AU AF ]

Impressive what the Ghost Robotics V60 can do without any vision sensors on it.

[ Ghost Robotics ]

Is your job moving tiny amounts of liquid around? Would you rather be doing something else? ABB’s YuMi got you.

[ Yumi ]

For his PhD work at the Media Lab, Biomechatronics researcher Roman Stolyarov developed a terrain-adaptive control system for robotic leg prostheses as a way to help people with amputations feel as able-bodied and mobile as possible, by allowing them to walk seamlessly regardless of the ground terrain.

[ MIT ]

This robot collects data on each cow when she enters to be milked. Milk samples and 3D photos can be taken to monitor the cow’s health status. The Ontario Dairy Research Centre in Elora, Ontario, is leading dairy innovation through education and collaboration. It is a state-of-the-art 175,000 square foot facility for discovery, learning and outreach. This centre is a partnership between the Agricultural Research Institute of Ontario, OMAFRA, the University of Guelph and the Ontario dairy industry.

[ University of Guelph ]

Australia has one of these now. Should the rest of us panic?

[ Boeing ]

Daimler and Torc are developing Level 4 automated trucks for the real world. Here is a glimpse into our closed-course testing, routes on public highways in Virginia, and self-driving capabilities development. Our year of collaborating on the future of transportation culminated in the announcement of our new truck testing center in New Mexico.

[ Torc Robotics ]


We’ve been keeping a close watch on GITAI since early last year—what caught our interest initially is the history of the company, which includes a bunch of folks who started in the JSK Lab at the University of Tokyo, won the DARPA Robotics Challenge Trials as SCHAFT, got swallowed by Google, narrowly avoided being swallowed by SoftBank, and are now designing robots that can work in space.

The GITAI YouTube channel has kept us more or less up to date on their progress so far, and GITAI has recently announced the next step in this effort: The deployment of one of their robots on board the International Space Station in 2021.

Photo: GITAI GITAI’s S1 is a task-specific 8-degrees-of-freedom arm with an integrated sensing and computing system and 1-meter reach.

GITAI has been working on a variety of robots for space operations, the most sophisticated of which is a humanoid torso called G1, which is controlled through an immersive telepresence system. What will be launching into space next year is a more task-specific system called the S1, which is an 8-degrees-of-freedom arm with an integrated sensing and computing system that can be wall-mounted and has a 1-meter reach.

The S1 will be living on board a commercially funded, pressurized airlock-extension module called Bishop, developed by NanoRacks. Mounted on the inside of the Bishop module, the S1 will have access to a task board and a small assembly area, where it will demonstrate common crew intra-vehicular activity, or IVA—tasks like flipping switches, turning knobs, and managing cables. It’ll also do some in-space assembly, or ISA, attaching panels to create a solar array.

Here’s a demonstration of some task board activities, conducted on Earth in a mockup of Bishop:

GITAI says that “all operations conducted by the S1 GITAI robotic arm will be autonomous, followed by some teleoperations from Nanoracks’ in-house mission control.” This is interesting, because from what we’ve seen until now, GITAI has had a heavy emphasis on telepresence, with a human in the loop to get stuff done. As GITAI’s founder and CEO Sho Nakanose commented to us a year ago, “Telepresence robots have far better performance and can be made practical much quicker than autonomous robots, so first we are working on making telepresence robots practical.” 

So what’s changed? “GITAI has been concentrating on teleoperations to demonstrate the dexterity of our robot, but now it’s time to show our capabilities to do the same this time with autonomy,” Nakanose told us last week. “In an environment with minimum communication latency, it would be preferable to operate a robot more with teleoperations to enhance the capability of the robot, since with the current technology level of AI, what a robot can do autonomously is very limited. However, in an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.”

“In an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.” —Sho Nakanose, GITAI founder and CEO

Nakanose says that this mission will help GITAI to “acquire the skills, know-how, and experience necessary to prepare a robot to be ISS compatible, prov[ing] the maturity of our technology in the microgravity environment.” Success would mean conducting both IVA and ISA experiments as planned (autonomous and teleop for IVA, fully autonomous for ISA), which would be pretty awesome, but we’re told that GITAI has already received a research and development order for space robots from a private space company, and Nakanose expects that “by the mid-2020s, we will be able to show GITAI's robots working in space on an actual mission.”

NanoRacks is scheduled to launch the Bishop module on SpaceX CRS-21 in November. The S1 will be launched separately in 2021, and a NASA astronaut will install the robot and then leave it alone to let it start demonstrating how work in space can be made both safer and cheaper once the humans have gotten out of the way.

Today, Walmart and Zipline are announcing preliminary plans “to bring first-of-its kind drone delivery service to the United States.” What makes this drone-delivery service the first of its kind is that Zipline uses fixed-wing drones rather than rotorcraft, giving them a relatively large payload capacity and very long range at the cost of a significantly more complicated launch, landing, and delivery process. Zipline has made this work very well in Rwanda, and more recently in North Carolina. But expanding into commercial delivery to individual households is a much different challenge. 

Along with a press release that doesn’t say much, Walmart and Zipline have released a short video of how they see the delivery operation happening, and it’s a little bit more, uh, optimistic than we’re entirely comfortable with.

Here’s the video:

And here’s all of the actually useful information from the one-page press release:

The new service will make on-demand deliveries of select health and wellness products with the potential to expand to general merchandise. Trial deliveries will take place near Walmart’s headquarters in Northwest Arkansas. Zipline will operate from a Walmart store and can service a 50-mile radius, which is about the size of the state of Connecticut. The operation will likely begin early next year, and, if successful, we’ll look to expand.

At first glance, there’s basic feasibility here, in the sense that most health and wellness products are likely to be of the size and weight to be transportable by one of Zipline’s drones—called Zips—and that a Zipline fulfillment center with a drone catapult and retrieval system could be set up to operate in a Walmart parking lot (or somewhere nearby) without any problems. However, drone delivery needs a lot more than basic feasibility to be successful—without more detail in the press release, we’ve had to look carefully at the video, and we’ve got some questions.

From the beginning of the video until about 20 seconds in, everything seems straightforward. A customer places an order, and a Zip is loaded and launched. Zipline has been doing this in Ghana and Rwanda for years, and we’ve seen firsthand how fast and efficient their operation is. It’s easy to see how this could translate into shipping items from a Walmart.

Our first question comes up at 22 seconds in, which shows a Zip flying along over a suburban or rural area a couple of hundred feet off the ground. Generally, this airspace is uncontrolled, meaning that other aircraft could be operating nearby. Zipline’s drones can detect other aircraft that are equipped with ADS-B transmitters, which covers an increasing number of manned aircraft. However, up to 400 feet of altitude, airspace is (with some exceptions) typically open to consumer drones as well, which usually do not have ADS-B transmitters. We know that Zipline is working on its own onboard sense and avoid system, but until they have that working, there’s a risk of a Zip colliding with another drone. The sky is big, so this may not be very likely, but it’s still something that should be taken into consideration. One way of mitigating this risk is by flying higher than 400 feet, but that starts getting into more complicated stuff with the U.S. Federal Aviation Administration. Zipline and Walmart are undoubtedly getting into complicated stuff with the FAA anyway, though, so maybe that’s the plan.

From 26 seconds to 30 seconds, we see what looks like the same kind of Zip delivery that we saw in Africa, so that’s cool. But between 31 and 35 seconds, the video shows exactly where that delivery happened: What appears to be a walkway up to a suburban house, in between a parked car, a porch, and the street. As far as we know, and based on what we’ve seen of Zips making deliveries, this kind of precision is simply not possible for a package on a parachute dropped from a fixed-wing drone.

While Zips do their best to make pinpoint deliveries, even going as far as compensating for wind whenever possible, you really need a circular-ish open area with a radius of perhaps 5 meters or so for the Zips to deliver to. And you wouldn’t really want to have something like a house adjacent to that, since there’d be some risk of a package landing on the roof. Being close to a road would be even worse, because you can imagine how a driver might react if a wayward box on a parachute landed on their windshield by surprise. Finally, since Zips descend to somewhere between 35 and 50 feet to release the package, you need a flight path across the delivery area that’s free of obstructions. Zips can drop packages from higher altitudes, of course, but if they do, the delivery area needs to be even larger.
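To see why the clear area has to grow with drop altitude, here’s a back-of-the-envelope sketch; the descent rate, wind speed, and aiming error below are our own assumptions, not Zipline figures:

```python
def required_clear_radius(drop_height_m, descent_rate_mps, wind_speed_mps, aim_error_m=2.0):
    """Rough clear-zone radius for a parachuted package.

    The package drifts downwind for the whole descent, so drift grows
    roughly linearly with drop height. All inputs are illustrative assumptions.
    """
    descent_time = drop_height_m / descent_rate_mps
    wind_drift = wind_speed_mps * descent_time
    return aim_error_m + wind_drift

# Dropping from ~11 m (about 35 feet) in a light 2 m/s breeze:
print(round(required_clear_radius(11, 4.0, 2.0), 1), "m")   # ~7.5 m radius needed
# Dropping from ~30 m more than doubles the drift:
print(round(required_clear_radius(30, 4.0, 2.0), 1), "m")   # ~17 m radius needed
```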

We sent Zipline and Walmart some specific questions about what’s going on in the video and how the delivery process will actually work, and received the following response:

The video represents the vision for how the delivery service to Walmart customer homes will work. We’ll be happy to keep you posted on the technical aspects of the operation as we get closer to launching the trial.

We sent a follow-up email to Walmart asking for some clarification, but they weren’t able to share any additional detail on the record.

The issue I have with Walmart’s desire to show their vision is that I really don’t see how this vision could ever become a reality through Zipline, because as far as we’re aware Zipline’s parachute system fundamentally cannot achieve the porch-level precision that the video advertises. This is a big deal, because it places substantial constraints on where Walmart will be able to deliver to, and dense suburbs as shown in the video may realistically be off the table. What the video shows is more the sort of thing that most consumers probably associate with drone delivery because it’s been relentlessly promoted by companies like Google and Amazon, relying on the precision of rotorcraft. But that’s just not what Zipline does, and honestly, the fact that Zipline doesn’t do that stuff is one of the reasons that we think Zipline’s tech is uniquely useful.

Putting the video and the press release aside, let’s think about what Zipline and Walmart could realistically accomplish together. Assuming that “we’ll be happy to keep you posted on the technical aspects of the operation” actually means “we don’t have any easy answers to the questions that you asked” rather than “we have some amazing and secret new parachute steering technology* that will solve every problem,” what would Zips delivering stuff from Walmart actually look like?

The biggest issue here, I think, is making deliveries with fixed-wing drones dropping boxes on parachutes in relatively dense suburban neighborhoods. I just don’t see how that’s going to work in a safe and scalable way, and of course urban deliveries would be even worse. But that’s totally fine—in high density areas, other delivery systems already exist and can operate efficiently. There are legacy delivery systems (like humans moving stuff in trucks) and gig workers, as well as new technologies like sidewalk robots, autonomous vehicles, or hybrid systems. In order for these delivery systems to make sense, though, there needs to be a certain density of customers, such that the balance of time making deliveries versus time spent getting from one place to another works out in your favor. Otherwise, your delivery system is hard to make sustainable.

What this means is that if you live in a rural area, your options for on-demand delivery are much more limited, which is part of the reason that Zipline exists in the first place: It excels in fast, efficient deliveries to isolated locations that are a substantial distance away. They do this kind of delivery better than anyone else, and rural delivery is a niche that rotorcraft or sidewalk robots or whatever just can’t compete in. Furthermore, for many people who live in rural areas, this kind of delivery would be incredibly valuable because options are so limited. For Zipline, the great thing about focusing on rural rather than suburban delivery is that delivery becomes much less complicated. People are more spread out, and it’s more likely that more homes will have backyards that can easily support a Zip parachute delivery. It really seems like rural areas, rather than suburbs, are where a Zipline-Walmart partnership would have the most value, at least if Zipline is not going to somehow significantly alter its operation.

In the past, I’ve been super skeptical of urban (and to a lesser extent, suburban) delivery drones. I still am, primarily because I’m not convinced that the risk and expense of using drones to deliver things is worth it, relative to already established delivery systems or new delivery systems (like ground robots) that operate more conventionally. But rural delivery is different, and Zipline has shown that they can do it quickly and efficiently. So much of drone delivery really seems like it’s just companies reacting to the positive press that they inevitably get, combined with consumers asserting that it’s something they want without really thinking about whether it’s something that will make a tangible difference to their lives. For someone who lives far away from the nearest Walmart, though, being able to order and receive something like medicine in an hour without having to leave their yard could make a difference in a way that only Zipline can, at this point, deliver on.

*I desperately want this to be the case

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Clearpath Robotics and Boston Dynamics were obviously destined to partner up with Spot, because Spot 100 percent stole its color scheme from Clearpath, which has a monopoly on yellow and black robots. But seriously, the news here is that thanks to Clearpath, Spot now works seamlessly with ROS.

[ Clearpath Robotics ]

A new video created by Swisscom Ventures highlights a research expedition sponsored by Moncler to explore the deepest ice caves in the world using Flyability’s Elios drone. [...] The expedition was sponsored by apparel company Moncler and took place over two weeks in 2018 on the Greenland ice sheet, the second largest body of ice in the world after Antarctica. Research focused on an area about 80 kilometers east of Kangerlussuaq, where scientists wanted to study the movement of water deep underground to better understand the effects of climate change on the melting ice.

[ Flyability ]

Shane Wighton of the “Stuff Made Here” YouTube channel, whose terrifying haircut machine we featured a few months ago, has improved on his robotic basketball hoop. It’s actually more than an improvement: It’s a complete redesign that nearly drove Wighton insane. But the result is pretty cool. It’s fun to watch him building a highly complicated system while always seeking simple and elegant designs for its components.

[ Stuff Made Here ]

SpaceX rockets are really just giant, explosion-powered drones that go into space sometimes. So let's watch more videos of them! This one is sped up, and puts a flight into just a couple of minutes.

[ SpaceX ]

Neato Robotics makes some solid autonomous vacuums, and these incremental upgrades feature improved battery life and better air filters.

[ Neato Robotics ]

A full-scale engineering model of NASA's Perseverance Mars rover now resides in a garage facing the Mars Yard at NASA's Jet Propulsion Laboratory in Southern California.

This vehicle system test bed rover (VSTB) is also known as OPTIMISM, which stands for Operational Perseverance Twin for Integration of Mechanisms and Instruments Sent to Mars. OPTIMISM was built in a warehouselike assembly room near the Mars Yard – an area that simulates the Red Planet's rocky surface. The rover helps the mission test hardware and software before it’s transmitted to the real rover on Mars. OPTIMISM will share the space with the Curiosity rover's twin MAGGIE.

[ JPL ]

Heavy asset industries like shipping, oil and gas, and manufacturing are grounded in repetitive tasks like locating items on large industrial sites -- a tedious task that can take as long as 45 minutes to find critical items like a forklift in an area that spans the size of multiple football fields. Not only is this work boring, it’s dangerous and inefficient. Robots like Spot, however, love this sort of work.

Spot can provide real-time updates on the location of assets and complete other mundane tasks. In this case, Spot is using software from Cognite to roam the vast shipyard to locate and manage more than 100,000 assets stored across the facility. What used to take humans hours can be managed on an ongoing basis by Spot -- leaving employees to focus on more strategic tasks.

[ Cognite ]

The KNEXT Barista system helps high-volume premium coffee providers who want to offer artisanal coffee specialties of consistent quality.

[ Kuka ]

In this paper, we study this idea of generality in the locomotion domain. We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots, such as bipeds, tripeds, quadrupeds and hexapods, including wheeled variants. Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots.
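The phrase “semantically identical across robots” is doing a lot of work here. As an illustration (not the authors’ code), a forward-locomotion reward can be written purely in terms of base-velocity tracking and actuation cost, so exactly the same function applies to a biped, a quadruped, or a wheeled hexapod:

```python
import numpy as np

def locomotion_reward(base_velocity_xy, target_velocity_xy, joint_torques, torque_penalty=1e-3):
    """Morphology-agnostic reward: track a commanded base velocity and lightly
    penalize actuation effort. It never references individual joints by name,
    so it works for any leg count. Illustrative sketch only."""
    tracking_error = np.linalg.norm(np.asarray(base_velocity_xy) - np.asarray(target_velocity_xy))
    effort = np.sum(np.square(joint_torques))
    return -tracking_error - torque_penalty * effort

# The same call works for a 12-joint quadruped or an 18-joint hexapod:
print(locomotion_reward([0.9, 0.0], [1.0, 0.0], np.zeros(12)))
print(locomotion_reward([0.9, 0.0], [1.0, 0.0], np.zeros(18)))
```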

[ DeepMind ]

Thanks Dave!

Even though it seems like the real risk of COVID is catching it from another person, robotics companies are doing what they can with UVC disinfecting systems.

[ BlueBotics ]

Aeditive develops robotic 3D printing solutions for the production of concrete components. At the heart of their production plant are two large robots that cooperate to manufacture the component. The automation technology they build on is a robotic shotcrete process. During this process, they apply concrete layer by layer and thus manufacture complete components. This means that their customers are no longer dependent on formwork, which is expensive and time-consuming to create. Instead, their customers can manufacture components directly on a steel pallet without these moulds.

[ Aeditive ]

Something BIG is coming next month from Robotiq!

My guess: an elephant.

[ Robotiq ]

TurtleBot3 is a great little home robot, as long as you have a TurtleBot3-sized home.

[ Robotis ]

How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. The hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.

[ ETH Zurich ]

And now, this.

[ RobotStart ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.

Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
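The headline feature for learning research is the parallel-simulation API. The sketch below shows the shape of a vectorized rollout loop; the ParallelQuadrotorEnv class and its methods are placeholders we made up for illustration, not Flightmare's actual interface:

```python
import numpy as np

class ParallelQuadrotorEnv:
    """Placeholder for a vectorized quadrotor simulator (not Flightmare's real API).
    One batched call steps N independent quadrotors at once."""
    def __init__(self, num_envs=100, obs_dim=18, act_dim=4):
        self.num_envs, self.obs_dim, self.act_dim = num_envs, obs_dim, act_dim

    def reset(self):
        return np.zeros((self.num_envs, self.obs_dim))

    def step(self, actions):
        obs = np.random.randn(self.num_envs, self.obs_dim)   # dummy dynamics
        rewards = -np.linalg.norm(actions, axis=1)            # dummy reward
        dones = np.zeros(self.num_envs, dtype=bool)
        return obs, rewards, dones

env = ParallelQuadrotorEnv(num_envs=100)
obs = env.reset()
for _ in range(10):                                           # tiny rollout
    actions = np.random.uniform(-1, 1, (env.num_envs, env.act_dim))
    obs, rewards, dones = env.step(actions)
print(rewards.shape)   # one reward per simulated quadrotor: (100,)
```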

[ Flightmare ]

Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.

We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.

[ Project ]

Thanks Fan!

The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.

Fetch Robot disinfecting dance party woo!

[ Oregon State ]

How could you not take a mask from this robot?

[ Reachy ]

This work presents the design, development and autonomous navigation of the alpha-version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments including manhole-sized tubes. The robot is particularly lightweight and agile, while it implements a rigid collision-tolerant design which renders it resilient during forcible interaction with the environment. Furthermore, the design of the system is enhanced through passive flaps ensuring smoother and more compliant collision which was identified to be especially useful in very confined settings.

[ ARL ]

Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.

Dat backing into the charging dock tho.

[ Pepper ]

RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.

This is from 2015, why isn't all of my furniture autonomous yet?!

[ KAIST ]

The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.

[ SeaDrone ]

Thanks Eduardo!

Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing of mineral foams for building scale applications.

[ ETH ]

Thanks Fan!

Here's an interesting idea for a robotic gripper: it's what appears to be a snap bracelet coupled to a pneumatic actuator that allows the snap bracelet to be reset.

[ Georgia Tech ]

Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:

A couple things about this: the hard part, which the video skips over almost entirely, is the mapping, localization, and understanding where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, a thing that's perhaps easy to do under controlled ideal conditions, but difficult to apply to a world full of lawns that are all different. The commercial aspect is interesting because golf courses are likely as standardized as you can get, but the emphasis here on how much money they can make without really addressing any of the technical stuff makes me raise an eyebrow or two.

[ Graze ]

The record & playback X-series arm demo allows the user to record the arm's movements while motors are torqued off. Then, the user may torque the motors on and watch the movements they just made play back!

[ Interbotix ]

Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.

[ Shadow Robot ]

Quanser Interactive Labs is a collection of virtual hardware-based laboratory activities that supplement traditional or online courses. Just as when working with physical systems in the lab, students work with virtual twins of Quanser's most popular plants, develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity 3D real-time virtual models. The virtual systems not only look like the real ones, they also behave, can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually-validated designs on actual physical equipment.

[ Quanser ]

This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.

This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The Robotic platform provides superior visualisation and enhanced dexterity, through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.

[ Tristan D. Yan ]

Thanks Fan!

In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.

[ RPG ]

Last year, Spectrum reported on Japan’s public-private initiative to create a new industry around electric vertical takeoff and landing vehicles (eVTOLs) and flying cars. Last Friday, start-up company SkyDrive Inc. demonstrated the progress made since then when it held a press conference to spotlight its prototype vehicle and show reporters a video taken three days earlier of the craft undergoing a piloted test flight in front of staff and investors.

The sleek, single-seat eVTOL, dubbed SD-03 (SkyDrive third generation), resembles a hydroplane on skis and weighs in at 400 kilograms. The body is made of carbon fiber, aluminum, and other materials that have been chosen for their weight, balance, and durability. The craft measures 4 meters in length and width, and is about 2 meters tall. During operation, the nose of the craft is lit with white LED lights; red lights run around the bottom to enable the vehicle to be seen in the sky and to distinguish the direction the craft is flying. 

The SD-03 uses four pairs of electrically driven coaxial rotors, with one pair mounted at each quadrant. These enable a flight time of 5 to 10 minutes at speeds up to 50 kilometers per hour. “The propellers on each pair counter-rotate,” explains Nobuo Kishi, SkyDrive’s chief technology officer. “This cancels out propeller torque.” It also makes for a compact design, “so all the craft needs to land is the space of two parked cars,” he adds.
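Kishi's counter-rotation point is simple torque bookkeeping: each rotor applies a reaction torque to the airframe opposite to its spin, so pairing opposite-spinning propellers on each of the four mounts cancels the net yaw torque. A toy check with made-up numbers:

```python
# Reaction torque (N*m) on the airframe from each rotor; the sign encodes spin direction.
# Values are illustrative, not SkyDrive specifications.
rotor_torques_nm = [+3.0, -3.0,   # coaxial pair 1 (upper, lower propeller)
                    +3.0, -3.0,   # pair 2
                    +3.0, -3.0,   # pair 3
                    +3.0, -3.0]   # pair 4
net_yaw_torque = sum(rotor_torques_nm)
print(net_yaw_torque)   # 0.0 -- the pairs cancel, so no extra anti-torque rotor is needed
```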

But when it came to providing more details of the drive system, Kishi declined, saying it’s a trade secret that’s a source of competitive advantage. The same goes for the craft’s energy storage system: Other than disclosing the fact that the flying taxi currently uses a lithium polymer battery, he’s also keeping details about the powertrain confidential.

Underlying this need for secrecy is the technology’s restricted capabilities. “Total energy that can be stored in a battery is a major limiting factor here,” says Steve Wright, Senior Research Fellow in Avionics and Aircraft Systems at the University of the West of England. “Which is why virtually every one of these projects is aiming at the air-taxi market within megacities.”

The SkyDrive video shows the SD-03 take off vertically then engage in maneuvers as it hovers up to two meters off the ground around a netted enclosure. The craft is shown moving about at walking speed for roughly 4 minutes before landing on a designated spot. For monitoring purposes and back-up, engineers used an additional computer-assisted control system to ensure the craft’s stability and safety.

Speaking at the press conference, Tomohiro Fukuzawa, SkyDrive’s CEO, estimated there are currently as many as 100 flying car projects underway around the world, “but only a few have succeeded with someone on board,” he said.

He went on to note that Japan lags behind other countries in the aviation industry but excels in manufacturing cars. Given the similarities between cars—especially electric cars—and eVTOLs, he believes Japan can compete with companies in the United States, Europe, and China that are also developing eVTOLs.

SkyDrive’s advances have encouraged new venture capital investors to come on board and nearly triple investment to a total of 5.9 billion yen ($56 million). Original investors include large corporations that saw an opportunity to get in on the ground floor of a promising new industry backed by government. One investor, NEC, is aiming to create more options for its air-traffic management systems, while Japan’s largest oil company, Eneos, is interested in developing electric charging stations for all kinds of electric vehicles.

Photo: John Boyd SkyDrive's Cargo Drone (left) and SD-03 VTOL.

In May, SkyDrive unveiled a drone for commercial use that is based on the same drive and power systems as the SD-03. Named the Cargo Drone, it’s able to transport payloads of up to 30 kg and can be preprogrammed to fly autonomously or be piloted manually. It will be operated as a service by SkyDrive, starting at a minimum monthly rental charge of 380,000 yen ($3,600) that rises according to the purpose and frequency of use. 

Kishi says the drone is designed to work within a 3 km range in locations that are difficult or time-consuming to get to by road. For instance, Obayashi Corp., one of Japan’s big five construction companies and an investor in SkyDrive, has been testing the Cargo Drone to autonomously deliver materials like sandbags and timber to a remote, hard-to-reach location.

Fukuzawa established SkyDrive in 2018 after leaving Toyota Motor and working with Cartivator, a group of volunteer engineers interested in developing flying cars. SkyDrive now has a staff of fifty.

Also in 2018, the Japanese government formed the Public-Private Conference for Air Mobility made up of private companies, universities, and government ministries. The stated aim was to make flying vehicles a reality by 2023. Tomohiko Kojima of Japan’s Civil Aviation Bureau told Spectrum that since the Conference’s formation, the Ministry of Land, Infrastructure, Transport and Tourism has held a number of meetings with members to discuss matters like airspace for eVTOL use, flight rules, and permitted altitudes. “And last month, the Ministry established a working-level group to discuss certification standards for eVTOLs, a standard for pilots, and operational safety standards,” Kojima added.

Fukuzawa is also targeting 2023 to begin taxi services (single passenger and pilot) in the Osaka Bay area, flying between locations like Kansai and Kobe airports and tourist attractions such as Universal Studios Japan. These flights will take less than ten minutes—a practical nod to the limitations of the battery energy storage system.

“What SkyDrive is proposing is entirely do-able,” says Wright. “Almost all rotor-only eVTOL projects are limited to sub-30-minute endurance, which, with safety reserves, equate to about 10 to 20 minutes flying.”

Yi Chao likes to describe himself as an “armchair oceanographer” because he got incredibly seasick the one time he spent a week aboard a ship. So it’s maybe not surprising that the former NASA scientist has a vision for promoting remote study of the ocean on a grand scale by enabling underwater drones to recharge on the go using his company’s energy-harvesting technology.

Many of the robotic gliders and floating sensor stations currently monitoring the world’s oceans are effectively treated as disposable devices, because the research community has only limited ships and funding with which to retrieve drones after they’ve accomplished their mission of beaming data back home. That’s not only a waste of money, but may also contribute to a growing assortment of abandoned lithium-ion batteries polluting the ocean with their leaking toxic materials—a decidedly unsustainable approach to studying the secrets of the underwater world.

“Our goal is to deploy our energy harvesting system to use renewable energy to power those robots,” says Chao, president and CEO of the startup Seatrec. “We're going to save one battery at a time, so hopefully we're going to not to dispose more toxic batteries in the ocean.”

Chao’s California-based startup claims that its SL1 Thermal Energy Harvesting System can already help save researchers money equivalent to an order of magnitude reduction in the cost of using robotic probes for oceanographic data collection. The startup is working on adapting its system to work with autonomous underwater gliders. And it has partnered with defense giant Northrop Grumman to develop an underwater recharging station for oceangoing drones that incorporates Northrop Grumman’s self-insulating electrical connector capable of operating while the powered electrical contacts are submerged.

Seatrec’s energy-harvesting system works by taking advantage of how certain substances transition from solid-to-liquid phase and liquid-to-gas phase when they heat up. The company’s technology harnesses the pressure changes that result from such phase changes in order to generate electricity. 

Image: Seatrec

To make the phase changes happen, Seatrec’s solution taps the temperature differences between warmer water at the ocean surface and colder water at the ocean depths. Even a relatively simple robotic probe can generate additional electricity by changing its buoyancy to either float at the surface or sink down into the colder depths.
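As a rough feel for the physics, the recoverable work per dive is on the order of the working pressure times the volume change of the phase-change material. The sketch below is our own back-of-the-envelope estimate with assumed numbers, not Seatrec specifications:

```python
def harvested_energy_wh(pcm_volume_l, expansion_fraction, working_pressure_bar, conversion_efficiency=0.5):
    """Very rough energy-per-dive estimate for a phase-change thermal harvester.

    Melting material expands and pressurizes a working fluid that drives a
    generator; the recoverable work is roughly pressure * volume change.
    Every input here is an assumption for illustration.
    """
    delta_v_m3 = pcm_volume_l * 1e-3 * expansion_fraction
    pressure_pa = working_pressure_bar * 1e5
    work_j = pressure_pa * delta_v_m3 * conversion_efficiency
    return work_j / 3600.0   # joules -> watt-hours

# 2 liters of phase-change material, ~10% expansion on melting, 200-bar working pressure:
print(round(harvested_energy_wh(2.0, 0.10, 200), 2), "Wh per dive cycle")   # ~0.56 Wh
```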

By attaching an external energy-harvesting module, Seatrec has already begun transforming robotic probes into assets that can be recharged and reused more affordably than sending out a ship each time to retrieve the probes. This renewable energy approach could keep such drones going almost indefinitely barring electrical or mechanical failures. “We just attach the backpack to the robots, we give them a cable providing power, and they go into the ocean,” Chao explains. 

The early buyers of Seatrec’s products are primarily academic researchers who use underwater drones to collect oceanographic data. But the startup has also attracted military and government interest. It has already received small business innovation research contracts from both the U.S. Office of Naval Research and National Oceanic and Atmospheric Administration (NOAA).

Seatrec has also won two $10,000 prizes under the Powering the Blue Economy: Ocean Observing Prize administered by the U.S. Department of Energy and NOAA. The prizes awarded during the DISCOVER Competition phase back in March 2020 included one prize split with Northrop Grumman for the joint Mission Unlimited UUV Station concept. The startup and defense giant are currently looking for a robotics company to partner with for the DEVELOP Competition phase of the Ocean Observing Prize that will offer a total of $3 million in prizes.

In the long run, Seatrec hopes its energy-harvesting technology can support commercial ventures such as the aquaculture industry that operates vast underwater farms. The technology could also support underwater drones carrying out seabed surveys that pave the way for deep sea mining ventures, although those are not without controversy because of their projected environmental impacts.

Among all the possible applications, Chao seems especially enthusiastic about the prospect of Seatrec’s renewable power technology enabling underwater drones and floaters to collect oceanographic data for much longer periods of time. He spent the better part of two decades working at the NASA Jet Propulsion Laboratory in Pasadena, Calif., where he helped develop a satellite designed for monitoring the Earth’s oceans. But he and the JPL engineering team that developed Seatrec’s core technology believe that swarms of underwater drones can provide a continuous monitoring network to truly begin understanding the oceans in depth.

The COVID-19 pandemic has slowed production and delivery of Seatrec’s products somewhat given local shutdowns and supply chain disruptions. Still, the startup has been able to continue operating in part because it’s considered to be a defense contractor that is operating an essential manufacturing facility. Seatrec’s engineers and other staff members are working in shifts to practice social distancing.

“Rather than building one or two for the government, we want to scale up to build thousands, hundreds of thousands, hopefully millions, so we can improve our understanding and provide that data to the community,” Chao says. 

Coconuts may be delicious and useful for producing a wide range of products, but harvesting them is no easy task. Specially trained harvesters must risk their lives by climbing trees roughly 15 meters high to hack off just one bunch of coconuts. A group of researchers in India has designed a robot, named Amaran, that could reduce the need for human harvesters to take such a risk. But is the robot up to the task?

The researchers describe the tree-climbing robot in a paper published in the latest issue of IEEE/ASME Transactions on Mechatronics. Along with lab tests, they compared Amaran’s ability to harvest coconuts to that of a 50-year-old veteran harvester. Whereas the man bested the robot in terms of overall speed, the robot excelled in endurance.

To climb, Amaran relies on a ring-shaped body that clasps around trees of varying diameter. The robot carries a control module, motor drivers, a power management unit, and a wireless communications interface. Eight wheels allow it to move up and down a tree, as well as rotate around the trunk. Amaran is controlled by a person on the ground, who can use an app or joystick system to guide the robot’s movements.

Once Amaran approaches its target, an attached controller unit wields a robotic arm with 4 degrees of freedom to snip the coconut bunch. As a safety feature, if Amaran’s main battery dies, a backup unit kicks in, helping the robot return to ground.

Rajesh Kannan Megalingam, an assistant professor at Amrita Vishwa Vidyapeetham University, in South India, says his team has been working on Amaran since 2014. “No two coconut trees are the same anywhere in the world. Each one is unique in size, and has a unique alignment of coconut bunches and leaves,” he explains. “So building a perfect robot is an extremely challenging task.”

“No two coconut trees are the same … So building a perfect robot is an extremely challenging task.” —Rajesh Kannan Megalingam, Amrita Vishwa Vidyapeetham University

While testing the robot in the lab, Megalingam and his colleagues found that Amaran is capable of climbing trees when the inclination of the trunk is up to 30 degrees with respect to the vertical axis. Megalingam says that many coconut trees, especially under certain environmental conditions, grow at such an angle.

Next, the researchers tested Amaran in the field, and compared its ability to harvest coconuts to the human volunteer. The trees ranged from 6.2 to 15.2 m in height.

It took the human on average 11.8 minutes to harvest one tree, whereas it took Amaran an average of 21.9 minutes per tree (notably, 14 of those minutes were dedicated to setting up the robot at the base of the tree before it even began to climb).

Photo: HuT Labs

But Megalingam notes that Amaran can harvest more trees in a given day. For example, the human harvester in their trials could scale about 15 trees per day before getting tired, while the robot can harvest up to 22 trees per day, if the operator does not get tired. And although the robot is currently teleoperated, future improvements could make it more autonomous, improving its climbing speed and harvesting capabilities. 
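The endurance advantage comes down to straightforward arithmetic on the per-tree times reported in the study; the working-day length below is our own assumption for illustration:

```python
def trees_per_day(minutes_per_tree, working_minutes):
    return int(working_minutes // minutes_per_tree)

# Per-tree times from the study: 11.8 min for the human climber,
# 21.9 min for Amaran (14 of those minutes are setup at the base of the tree).
human_limit = 15                              # the climber tires after about 15 trees
robot = trees_per_day(21.9, 8 * 60)           # an assumed 8-hour shift, operator permitting
print(human_limit, "trees/day (human, fatigue-limited)")
print(robot, "trees/day (robot)")             # ~21-22, in line with the paper's figure
```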

“Our ultimate aim is to commercialize this product and to help the coconut farmers,” says Megalingam. “In Kerala state, there are only 7,000 trained coconut tree climbers, whereas the requirement is about 50,000 trained climbers. The situation is similar in other states in India like Tamil Nadu, Andhra, and Karnataka, where coconut is grown in large numbers.”

He acknowledges that the current cost of the robot is a barrier to broader deployment, but notes that community members could pitch in together to share the costs and utilization of the robot. Most importantly, he notes, “Coconut harvesting using Amaran does not involve risk for human life. Any properly trained person can operate Amaran. Usually only male workers take up this tree climbing job. But Amaran can be operated by anyone irrespective of gender, physical strength, and skills.”

Back to IEEE Journal Watch

A battle is brewing over the fate of the deep ocean. Huge swaths of seafloor are rich in metals—nickel, copper, cobalt, zinc—that are key to making electric vehicle batteries, solar panels, and smartphones. Mining companies have proposed scraping and vacuuming the dark expanse to provide supplies for metal-intensive technologies. Marine scientists and environmentalists oppose such plans, warning of huge and potentially permanent damage to fragile ecosystems.

Pietro Filardo is among the technology developers who are working to find common ground.

Image: Pliant Energy Systems

His company, Pliant Energy Systems, has built what looks like a black mechanical stingray. Its soft, rippling fins use hyperbolic geometry to move in a traveling wave pattern, propelling the skateboard-sized device through water. From an airy waterfront lab in Brooklyn, New York, Filardo’s team is developing tools and algorithms to transform the robot into an autonomous device equipped with grippers. Their goal is to pluck polymetallic nodules—potato-sized deposits of precious ores—off the seafloor without disrupting precious habitats.

“On the one hand, we need these metals to electrify and decarbonize. On the other hand, people worry we’re going to destroy deep ocean ecosystems that we know very little about,” Filardo said. He described deep sea mining as the “killer app” for Pliant’s robot—a potentially lucrative use for the startup’s minimally invasive design.

How deep seas will be mined, and where, is ultimately up to the International Seabed Authority (ISA), a group of 168 member countries. In October, the intergovernmental body is expected to adopt a sweeping set of technical and environmental standards, known as the Mining Code, that could pave the way for private companies to access large tracts of seafloor. 

The ISA has already awarded 30 exploratory permits to contractors in sections of the Atlantic, Pacific, and Indian Oceans. Over half the permits are for prospecting polymetallic nodules, primarily in the Clarion-Clipperton Zone, a hotspot south of Hawaii and west of Mexico.

Researchers have tested nodule mining technology since the 1970s, mainly in national waters. Existing approaches include sweeping the seafloor with hydraulic suction dredges to pump up sediment, filter out minerals, and dump the resulting slurry in the ocean or tailing ponds. In India, the National Institute of Ocean Technology is building a tracked “crawler” vehicle with a large scoop to collect, crush, and pump nodules up to a mother ship.

Mining proponents say such techniques are better for people and the environment than dangerous, exploitative land-based mining practices. Yet ocean experts warn that stirring up sediment and displacing organisms that live on nodules could destroy deep sea habitats that took millions of years to develop. 

“One thing I often talk about is, ‘How do we fix it if we break it? How are we going to know we broke it?’” said Cindy Lee Van Dover, a deep sea biologist and professor at Duke University’s Nicholas School of the Environment. She said much more research is required to understand the potential effects on ocean ecosystems, which foster fisheries, absorb carbon dioxide, and produce most of the Earth’s oxygen.

Significant work is also needed to transform robots into metal collectors that can operate some 6,000 meters below the ocean surface.

Photo: Pliant Energy Systems Former Pliant engineers Daniel Zimmerman (right) and Michael Weaker work on a prototype that harnesses energy from rivers and streams—the precursor to Velox.

Pliant’s first prototype, called Velox, can navigate the depths of a swimming pool and the shallow ocean “surf zone” where waves crash into the sand. Inside Velox, an onboard CPU distributes power to actuators that drive the undulating motions in the flexible fins. Unlike a propeller thruster, which uses a rapidly rotating blade to move small jets of water at high velocity, Pliant’s undulating fins move large volumes of water at low velocity. Because the fins push against the water over a large surface area, the robot can make rapid local maneuvers on relatively little battery power, allowing the device to operate for longer periods before needing to recharge, Filardo said.

The design also stirs up less sediment on the seafloor, a potential advantage in sensitive deep sea environments, he added.

The Brooklyn company is partnering with the Massachusetts Institute of Technology to develop a larger next-generation robot, called C-Ray. The highly maneuverable device will twist and roll like a sea otter. Using metal detectors and a mix of camera hardware and computer algorithms, C-Ray will likely be used to surveil the surf zone for potential hazards to the U.S. Navy, which is sponsoring the research program.

Illustration: Pliant Energy Systems A conceptual illustration of C-Ray robots collecting deep sea polymetallic nodules.

The partners ultimately aim to deploy “swarms” of autonomous C-Rays that communicate via a “hive mind”—applications that would also serve to mine polymetallic nodules. Pliant envisions launching hundreds of gripper-equipped robots that roam the seafloor and place nodules in cages that float to the surface on gas-filled lift bags. Filardo suggested that C-Ray could also swap nodules with lower-value stones, allowing organisms to regrow on the seafloor.

A separate project in Italy may also yield new tools for plucking the metal-rich orbs.

SILVER2 is a six-legged robot that can feel its way around the dark and turbid seafloor, without the aid of cameras or lasers, by pushing its legs in repeated, frequent cycles.

“We started by looking at what crabs did underwater,” said Marcello Calisti, an assistant professor at the BioRobotics Institute, in the Sant'Anna School of Advanced Studies. He likened the movements to people walking waist-deep in water and using the sand as leverage, or the “punter” on a flat-bottomed river boat who uses a long wooden pole to propel the vessel forward.

Photo: BioRobotics Institute/Sant'Anna School of Advanced Studies

Calisti and colleagues spent most of July at a seaside lab in Livorno, Italy, testing the 20-kilogram prototype in shallow water. SILVER2 is equipped with a soft elastic gripper that gently envelopes objects, as if cupping them in the palm of a hand. Researchers used the crab-like robot to collect plastic litter on the seabed and deposit the debris in a central collection bin.

Although SILVER2 isn’t intended for deep sea mining, Calisti said he could foresee potential applications in the sector if his team can scale the technology.

For developers like Pliant, the ability to raise funding and bring their mining robots to fruition will largely depend on the International Seabed Authority’s next meeting. Opponents of ocean mining are pushing to pause discussions on the Mining Code to give scientists more time to evaluate risks, and to allow companies like Tesla or Apple to devise technologies that require fewer or different metal parts. Such regulatory uncertainty could dissuade investors from backing new mining approaches that might never be used.

The biologist Van Dover said she doesn’t outright oppose the Mining Code; rather, she argues that the rules should include stringent stipulations, such as requirements to monitor environmental impacts and immediately stop operations once damage is detected. “I don’t see why the code couldn’t be so well-written that it would not allow the ISA to make a mistake,” she said.

IBM must be brimming with confidence about its new automated system for performing chemical synthesis, because Big Blue just walked twenty or so journalists through a live demo of the complex technology in a virtual room.

IBM even had one of the journalists choose the molecule for the demo: a molecule in a potential Covid-19 treatment. And then we watched as the system synthesized and tested the molecule and delivered its analysis in a PDF document that we all saw on the other journalist’s screen. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed various steps, such as moving small amounts of reagent and then solvent into the reactor. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

Image: IBM Research IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.

Back in March of this year, Silicon Valley-based startup Strateos demonstrated something similar that they had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine learning translation method that treats chemistry like language translation. It translates the language of chemistry by converting reactants and reagents to products through the use of the simplified molecular-input line-entry system (SMILES) representation to describe chemical entities.
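
To make the “chemistry as language” framing concrete, here is a minimal, purely illustrative sketch of how a reaction can be posed as a translation problem over SMILES strings, with reactants and reagents as the source “sentence” and the product as the target. The tokenizer pattern and the example reaction are our own simplifications, not IBM’s actual model or vocabulary.

```python
import re

# A simplified SMILES tokenization pattern, in the spirit of those used by
# molecular transformer models (not IBM's actual tokenizer).
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|@|=|#|\(|\)|\.|\+|-|/|\\|%\d{2}|\d|[A-Za-z])"
)

def tokenize(smiles):
    """Split a SMILES string into the 'words' a translation model would see."""
    return SMILES_TOKEN.findall(smiles)

# Hypothetical example: esterification of acetic acid with ethanol.
reactants = "CC(=O)O.CCO"   # acetic acid . ethanol  (source sentence)
product = "CC(=O)OCC"       # ethyl acetate          (target sentence)

print(tokenize(reactants))  # ['C', 'C', '(', '=', 'O', ')', 'O', '.', 'C', 'C', 'O']
print(tokenize(product))    # ['C', 'C', '(', '=', 'O', ')', 'O', 'C', 'C']

# A sequence-to-sequence model (e.g., a Transformer) is then trained to map the
# source token sequence to the target token sequence, exactly as in translating
# between two human languages.
```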

IBM has also leveraged an automatic, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So, how did IBM clean this so-called noisy data to eliminate the potential for bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. Over the course of training, the model sorted examples into chemistry it had “never learnt,” “forgotten six times,” or “never forgotten.” The examples that were “never forgotten” were the clean ones, and in this way the team was able to scrub the data the AI had been given.
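
The procedure Toniato describes maps closely onto the idea of tracking "forgetting events" during training. Below is a minimal, hypothetical sketch of that bookkeeping, not IBM's code; `train_one_epoch`, `dataset`, and `model.predicts_correctly` are stand-ins for whatever training loop and accuracy check you already have.

```python
from collections import defaultdict

def forgetting_statistics(model, dataset, train_one_epoch, n_epochs=10):
    """Track, per training example, how often it is 'forgotten': predicted
    correctly in one epoch but incorrectly in the next."""
    last_correct = {}                  # example index -> correct last epoch?
    forgotten = defaultdict(int)       # example index -> number of forgetting events
    ever_correct = defaultdict(bool)   # example index -> ever predicted correctly?

    for _ in range(n_epochs):
        train_one_epoch(model, dataset)
        for i, example in enumerate(dataset):
            correct = model.predicts_correctly(example)   # stand-in accuracy check
            if last_correct.get(i) and not correct:
                forgotten[i] += 1                         # a forgetting event
            ever_correct[i] = ever_correct[i] or correct
            last_correct[i] = correct

    never_learnt = [i for i in range(len(dataset)) if not ever_correct[i]]
    never_forgotten = [i for i in range(len(dataset))
                       if ever_correct[i] and forgotten[i] == 0]
    return never_learnt, never_forgotten

# Keeping only the never-forgotten reactions yields a cleaner training set,
# which is the spirit of the "forgetting experiment" described above.
```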

While the AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit of handing the execution of reactions over to a robotic system is that it frees chemists from the often tedious process of designing and carrying out a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.

Photo: Michael Buholzer Teodoro Laino of IBM Research Europe.

“From a business perspective you can think of having a system like we demonstrated being replicated on premises within companies or research groups that would like to have the technology available at their disposal,” says Teodoro Laino, a distinguished research staff member and manager at IBM Research Europe. “On the other hand, we are also pushing at bringing the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Tokyo startup Telexistence has recently unveiled a new robot called the Model-T, an advanced teleoperated humanoid that can use tools and grasp a wide range of objects. Japanese convenience store chain FamilyMart plans to test the Model-T to restock shelves in up to 20 stores by 2022. In the trial, a human “pilot” will operate the robot remotely, handling items like beverage bottles, rice balls, sandwiches, and bento boxes.

With Model-T and AWP, FamilyMart and TX aim to realize a completely new store operation by making the merchandise restocking work, which requires a large number of labor-hours, remote and automated. As a result, stores can operate with fewer workers and recruit employees regardless of the store’s physical location.

[ Telexistence ]

Quadruped dance-off should be a new robotics competition at IROS or ICRA.

I dunno though, that moonwalk might keep Spot in the lead...

[ Unitree ]

Through a hybrid of simulation and real-life training, this air muscle robot is learning to play table tennis.

Table tennis requires executing fast and precise motions. Gaining precision requires exploring in these high-speed regimes, but such exploration can also be safety-critical. The combination of RL and muscular soft robots closes this gap. Robots actuated by pneumatic artificial muscles generate the high forces required for, e.g., smashing, while antagonistic actuation also lets them execute explosive motions safely.

To enable practical training without real balls, we introduce Hybrid Sim and Real Training (HYSR) that replays prerecorded real balls in simulation while executing actions on the real system. In this manner, RL can learn the challenging motor control of the PAM-driven robot while executing ~15000 hitting motions.
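
The core trick in HYSR, replaying logged real ball trajectories inside the simulator while the learned policy drives the arm, can be sketched roughly as follows. Every name here is a hypothetical placeholder rather than the Max Planck Institute's actual code.

```python
import random

def hysr_episode(policy, arm_env, recorded_ball_trajectories):
    """One HYSR episode: the ball is replayed from real recordings, never simulated,
    while the policy commands the (simulated or real) PAM-driven arm."""
    ball_traj = random.choice(recorded_ball_trajectories)  # ball states logged from real play
    arm_obs = arm_env.reset()          # assumed to return a dict of arm state
    transitions = []
    for ball_state in ball_traj:
        obs = {**arm_obs, "ball": ball_state}   # arm state plus the replayed ball state
        action = policy(obs)                    # e.g., pressures for the artificial muscles
        arm_obs, reward, done = arm_env.step(action)
        transitions.append((obs, action, reward, done))
        if done:
            break
    return transitions  # handed to the RL algorithm; ~15,000 such hitting motions were used
```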

[ Max Planck Institute ]

Thanks Dieter!

Anthony Cowley wrote in to share his recent thesis work on UPSLAM, a fast and lightweight SLAM technique that records data in panoramic depth images (just PNGs) that are easy to visualize and even easier to share between robots, even on low-bandwidth networks.
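
As a rough illustration of why a panoramic depth image is such a convenient thing to pass around, here is a minimal sketch, not UPSLAM's actual format, that packs a depth panorama into an ordinary 16-bit grayscale PNG at one millimeter per grey level.

```python
import numpy as np
from PIL import Image

def save_depth_panorama(depth_m, path):
    """Store a float depth panorama (meters) as a standard 16-bit PNG, 1 mm per level."""
    depth_mm = np.round(np.clip(depth_m, 0.0, 65.535) * 1000.0)  # uint16 tops out at ~65.5 m
    Image.fromarray(depth_mm.astype(np.uint16)).save(path)

def load_depth_panorama(path):
    """Recover the depth panorama, in meters, from the 16-bit PNG."""
    return np.asarray(Image.open(path), dtype=np.float32) / 1000.0

# Even uncompressed, a 1024x256 panorama is only about half a megabyte, and real
# depth data (unlike this random test image) shrinks further under PNG's lossless
# compression, which is why such maps are easy to ship over a weak network link.
pano = np.random.uniform(0.5, 30.0, size=(256, 1024)).astype(np.float32)
save_depth_panorama(pano, "panorama.png")
assert np.allclose(load_depth_panorama("panorama.png"), pano, atol=0.001)
```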

[ UPenn ]

Thanks Anthony!

GITAI’s G1 is a general-purpose robot dedicated to space applications. The G1 will enable automation of various tasks inside and outside space stations as well as for lunar base development.

[ Gitai ]

The University of Michigan has a fancy new treadmill that’s built right into the floor, which proves to be a bit much for Mini Cheetah.

But Cassie Blue won’t get stuck on no treadmill! She goes for a 0.3-mile walk across campus, which ends when a certain someone runs the gantry into Cassie Blue’s foot.

[ Michigan Robotics ]

Some serious quadruped research going on at UT Austin Human Centered Robotics Lab.

[ HCRL ]

Will Burrard-Lucas has spent lockdown upgrading his slightly indestructible BeetleCam wildlife photographing robot.

[ Will Burrard-Lucas ]

Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and are difficult to manipulate, especially if a complication arises and the robot needs to be removed from the patient. A new collaboration between the Wyss Institute, Harvard University, and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs about as much as a penny and that performed significantly better than manually operated tools in delicate mock-surgical procedures. Importantly, its small size means it is more comparable to the human tissues and structures on which it operates, and it can easily be removed by hand if needed.

[ Harvard Wyss ]

Yaskawa appears to be working on a robot that can scan you with a temperature gun and then jam a mask on your face?

[ Motoman ]

Maybe we should just not have people working in mines anymore, how about that?

[ Exyn ]

Many current human-robot interactive systems tend to use accurate and fast – but also costly – actuators and tracking systems to build working prototypes that are safe to use and deploy for user studies. This paper presents an embedded framework for building a desktop space for human-robot interaction, using an open-source robot arm and two RGB cameras connected to a Raspberry Pi-based controller, which together allow fast yet low-cost object tracking and manipulation in 3D. Our evaluations show that this facilitates prototyping a number of systems in which the user and robot arm can jointly interact with physical objects.

[ Paper ]

IBM Research is proud to host professor Yoshua Bengio — one of the world’s leading experts in AI — in a discussion of how AI can contribute to the fight against COVID-19.

[ IBM Research ]

Ira Pastor, ideaXme life sciences ambassador interviews Professor Dr. Hiroshi Ishiguro, the Director of the Intelligent Robotics Laboratory, of the Department of Systems Innovation, in the Graduate School of Engineering Science, at Osaka University, Japan.

[ ideaXme ]

A CVPR talk from Stanford’s Chelsea Finn on “Generalization in Visuomotor Learning.”

[ Stanford ]

At IROS last year, Caltech and NASA’s Jet Propulsion Lab presented a prototype for a ballistically launched quadrotor—once folded up into a sort of football shape with fins, the drone is stuffed into a tube and then fired straight up with a blast of compressed CO2, at which point it unfolds itself, stabilizes, and then flies off. It’s been about half a year, and the prototype has been scaled up in both size and capability, now with a half-dozen rotors and full onboard autonomy that can (barely) squeeze into a 6-inch tube.

SQUID stands for Streamlined Quick Unfolding Investigation Drone. The original 3-inch (7.6-centimeter) SQUID that we wrote about last year has been demoted to “micro-SQUID,” and the new SQUID is this much beefier 6-inch version. You should read our earlier article on micro-SQUID for some background on this concept, but generally, tube-launched drones are unique in that they remove the requirement for the kind of specific takeoff conditions that most drones expect—stationary and on the ground and not close to anything that objects to being sliced to bits. A demonstration last year showed micro-SQUID launching from a moving vehicle, but the overall idea is that you can launch a SQUID instantly and from pretty much anywhere.

The point of micro-SQUID was to work out the general aerodynamic and structural principles for a ballistically launched multirotor, rather than to develop something mission capable. Mission capable means, among other things, onboard autonomy without reliance on GPS, which in turn calls for sensing and computing that’s heavy and power hungry enough that the entire vehicle needed to be scaled up. The new 6-inch SQUID features some major updates, including an aerodynamic redesign for improved passive stabilization during launch and ballistic flight through the use of deployable fins. The autonomy hardware consists of a camera (FLIR Chameleon3), rangefinder (TeraRanger Evo 60m), IMU/barometer (VectorNav VN-100), and onboard computer (NVIDIA Jetson TX2).

Image: Caltech & NASA JPL Top: SQUID overview. Bottom: SQUID partially inside the launcher tube (a), with its arms and fins fully deployed from a side (b), and top perspective (c).

The structural and aerodynamic changes are necessary because SQUID spends the first phase of its flight not really flying at all, but rather just following the ballistic trajectory that it’s on once it leaves the launcher. If it’s just going straight up, that’s not too bad, but things start to get more complicated if the drone gets launched at an angle, or from a moving vehicle. Having a high center of mass helps (the battery lives in the nose cone), and deployable fins pull double duty by keeping the drone passively pointed into the airstream while also serving as landing gear—without the fins, it would start to tumble after leaving the tube, and then good luck trying to control it. In order for the fins to be both foldable and stable enough for SQUID to land on, they’ve got a latching mechanism that helps keep them rigid, and apparently once everything got put together it took a little bit of sanding of the arm hinges before the drone would actually fit into the launch tube.

That 6-inch hard stop on the diameter of SQUID turned out to be a real challenge. Most drones are power or mass constrained, but SQUID is instead volume constrained. Not only do you have to cram all of your batteries and computers into that space, you have to make sure that the sensors have the field of view that they need while keeping in mind that in its folded state all the arms and legs have to share the same space as everything else. It turns out that SQUID is very well optimized, though, weighing just 3.3 kilograms, only about 0.3 kg more than what the roboticists estimate a nonfoldable, nonoptimized conventional drone with similar capabilities would weigh. 

So why bother with all of this hassle for the whole tube launch thing? There are a bunch of reasons that make it worth the effort: 

  • It’s fast to launch. There’s no unpacking or setting up or finding a flat spot or telling everyone to stand back, just push a button and bam, SQUID is out of the tube at 12 meters per second and in flight. 
  • It’s safe to launch. Unless someone is sitting directly on top of the launch tube (in which case you could argue that they deserve what they’re about to experience), the launch rapidly clears human level before deploying any dangerously spinny bits. 
  • It can launch while moving. This is a big one—the ballistic launch and self-stabilization means that SQUID can be reliably launched from a moving vehicle moving at up to 80 kilometers per hour, like a truck or a boat, significantly increasing its utility, especially in emergency scenarios.
  • It can sometimes launch through things. The researchers point out that in its most aerodynamic shape (without fins or rotors deployed), SQUID could potentially be launched straight through tree canopies or power lines if necessary, which is a totally unique capability for a rotorcraft.

We asked the researchers about their experience developing the larger version of SQUID, and they shared this behind-the-scenes story with us about how they managed to set things up so that they didn’t crash even once:

Moving to a larger SQUID was hard technically (as we had to design an entirely new vehicle), but the testing logistics was a huge jump in difficulty. For our smaller SQUID, simply a net and some spare parts would suffice to keep testing going for a day. But when we moved to the bigger SQUID, we’re throwing something a lot heavier, and packed with expensive electronics for autonomy, into the sky. 

An indoor tether system was challenging to set up because the height of the CAST arena (42 foot-tall) meant the ideal locating point for the tether was completely inaccessible without a cherry-picker. The Caltech Drone Club stepped up, and helped construct the tether system by weaving a tiny quadrotor towing fishing line around the ceiling beams. The fishing line was then used to pull larger ropes through.

One of the interesting things that was learnt with the tether system was the extreme acceleration of SQUID as it exited the launch tube meant the tether cable becomes very slack and actually risks getting tangled with or cut by the propellers. Luckily our incremental testing campaign caught this before we had any incidents. To deal with this slack tether situation, we constructed a nose cone with a 5 foot carbon fiber tube mounted at the apex, which we called SQUID’s swordfish nose (we had a bit of an aquatic theme going already). A tether attached to SQUID’s frame runs through the tube and connects to the larger CAST tether system. We confirmed that during launch (for our given launch parameters), the tether never droops lower than the tube, so we prevented all tether-propeller interactions.

As you might expect from a drone from Caltech and JPL, long term the plan is to start thinking about aerial deployment—like, launching small drones from larger aircraft. This could eventually provide a way for small drones to be deployed from spacecraft on Mars during atmospheric entry, potentially reducing the need for a large lander. In fact, it’s common for aeroshells that deliver landers to planetary surfaces to rebalance themselves during atmospheric entry by dropping a bunch (like, 150 kg) of weight to adjust their angle of attack. Those weights are utterly useless chunks of tungsten, but if it was possible to drop some midair-deployable drones instead, you could potentially do a whole lot of extra science without adding extra mass or risk to an existing mission.

“Design and Autonomous Stabilization of a Ballistically-Launched Multirotor,” by Amanda Bouman, Paul Nadan, Matthew Anderson, Daniel Pastor, Jacob Izraelevitz, Joel Burdick, and Brett Kennedy from Caltech and JPL, was presented at ICRA 2020, where it was awarded best paper in Unmanned Aerial Vehicles.


Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.

Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.

To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons on it that allowed (or required) the user to select whether the room being vacuumed was small or medium or large in size. iRobot ditched that system one generation later, replacing the room size buttons with one single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the S and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.

Image: iRobot iRobot CEO Colin Angle believes that working toward more intelligent human-robot collaboration is “the brave new frontier” of AI. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” he says. “But thinking that autonomy was the destination was where I was just completely wrong.” 

The point that the top-end Roombas are at now reflects a goal that iRobot has been working toward since 2002: With autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without you even being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.

“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home everyday and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple years ago, and it took a while before it sunk in, but it made sense.”

How? Angle compares it to having a human come into your house to clean, but you weren’t allowed to tell them where or when to do their job. Maybe after a while, you’ll build up the amount of trust necessary for that to work, but in the short term, it would likely be frustrating. And people get frustrated with their Roombas for this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”

That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to feel in control of their homes, which they’ve set up the way they want and have been cleaning the way they want, and that a robot shouldn’t just come in and do its own thing.


“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The previous iteration of the iRobot app (and Roombas themselves) was built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

Where to Clean

Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have been using visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. These maps have been used to tell the Roomba to clean in specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.

We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.
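
iRobot hasn't published the exact encoding, but the general flavor of reducing a camera frame to a compact, QR-code-like pattern of light and dark can be illustrated with a toy sketch like the one below. This is purely illustrative and is not iRobot's algorithm.

```python
import numpy as np

def frame_fingerprint(gray_frame, size=16):
    """Reduce a grayscale camera frame to a small binary grid of light/dark cells.

    The result carries enough structure to re-recognize a place, but far too
    little to reconstruct anything resembling a photo of your home.
    """
    h, w = gray_frame.shape
    cropped = gray_frame[: h - h % size, : w - w % size]   # crop to a multiple of `size`
    blocks = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8)       # e.g., a 16x16 grid of 0s and 1s

def similarity(a, b):
    """Fraction of matching cells; a high value suggests the same landmark."""
    return float((a == b).mean())
```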

Photo: iRobot The robots will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table.

This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
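
Concretely, a symbolic map entry is closer to a short labeled record than to imagery. Here is a hypothetical sketch of what such an entry might hold (illustrative only; iRobot hasn't published its internal format):

```python
from dataclasses import dataclass

@dataclass
class MapObject:
    """One symbolic entry in the home map: a label plus coarse geometry, no imagery."""
    label: str        # e.g., "dining table" or "couch"
    room: str         # e.g., "dining room"
    x_m: float        # approximate position on the map, in meters
    y_m: float
    width_m: float    # approximate footprint
    depth_m: float

# Worst case, this is roughly all an attacker could learn about the dining room:
dining_table = MapObject(label="dining table", room="dining room",
                         x_m=3.1, y_m=4.6, width_m=1.8, depth_m=0.9)
```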

Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that every home has that cause Roombas to get stuck. If the place is evil enough that Roomba has to call you for help because it gave up completely, Roomba will now remember, and suggest that either you make some changes or that it stops cleaning there, which seems reasonable.

When to Clean

It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”

To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
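
A toy version of that pattern-finding step might look like the sketch below: bucket past manual clean requests by weekday and hour, then propose recurring slots that show up often. This is only an illustration of the idea, not iRobot's implementation.

```python
from collections import Counter
from datetime import datetime

def recommend_schedule(clean_requests, min_count=3):
    """Suggest recurring (weekday, hour) slots based on when cleans were started manually."""
    slots = Counter((t.strftime("%A"), t.hour) for t in clean_requests)
    return [
        {"day": day, "hour": hour, "times_requested": n}
        for (day, hour), n in slots.most_common()
        if n >= min_count                      # keep habits, drop one-offs
    ]

# Example: three Saturdays in a row the user kicked off a clean around 10 a.m.,
# so the app might propose a standing "Saturday at 10:00" job.
history = [datetime(2020, 8, day, 10, 15) for day in (1, 8, 15)]
print(recommend_schedule(history))
# [{'day': 'Saturday', 'hour': 10, 'times_requested': 3}]
```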

More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions. 

A Smarter App Image: iRobot


The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will come on a more aggressive schedule: major releases should arrive every six months, with incremental updates landing even more frequently than that.

Angle also told us that overall, this change in direction also represents a substantial shift in resources for iRobot, and the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”

Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.”

Let’s talk about bowels! Most of us have them, most of us use them a lot, and like anything that gets used a lot, they eventually need to get checked out to help make sure that everything will keep working the way it should for as long as you need it to. Generally, this means a colonoscopy, and while there are other ways of investigating what’s going on in your gut, a camera on a flexible tube is still “the gold-standard method of diagnosis and intervention,” according to some robotics researchers who want to change that up a bit.

The University of Colorado’s Advanced Medical Technologies Lab has been working on a tank robot called Endoculus that’s able to actively drive itself through your intestines, rather than being shoved. The good news is that it’s very small, and the bad news is that it’s probably not as small as you’d like it to be.

The reason why a robot like Endoculus is necessary (or at least a good idea) is that trying to stuff a semi-rigid endoscopy tube into the semi-floppy tube that is your intestine doesn’t always go smoothly. Sometimes, the tip of the endoscopy tube can get stuck, and as more tube is fed in, it causes the intestine to distend, which best case is painful and worst case can cause serious internal injuries. One way of solving this is with swallowable camera pills, but those don’t help you with tasks like taking tissue samples. A self-propelled system like Endoculus could reduce risk while also making the procedure faster and cheaper.

Image: Advanced Medical Technologies Lab/University of Colorado The researchers say that while the width of Endoculus is larger than a traditional endoscope, the device would require “minimal distention during use” and would “not cause pain or harm to the patient.” Future versions of the robot, they add, will “yield a smaller footprint.”

Endoculus gets around with four sets of treads, angled to provide better traction against the curved walls of your gut. The treads are micropillared, or covered with small nubs, which helps them deal with all your “slippery colon mucosa.” Designing the robot was particularly tricky because of the severe constraints on the overall size of the device, which is just 3 centimeters wide and 2.3 cm high. In order to cram in the two motors required for full control, they had to be arranged parallel to the treads, resulting in a fairly complex system of 3D-printed worm gears. And to make the robot actually useful, it includes a camera, LED lights, tubes for injecting air and water, and a tool port that can accommodate endoscopy instruments like forceps and snares to retrieve tissue samples.

So far, Endoculus has spent some time inside of a live pig, although it wasn’t able to get that far since pig intestines are smaller than human intestines, and because apparently the pig intestine is spiraled somehow. The pig (and the robot) both came out fine. A (presumably different) pig then provided some intestine that was expanded to human-intestine size, inside of which Endoculus did much better, and was able to zip along at up to 40 millimeters per second without causing any damage. Personally, I’m not sure I’d want a robot to explore my intestine at a speed much higher than that.

The next step with Endoculus is to add some autonomy, which means figuring out how to do localization and mapping using the robot’s onboard camera and IMU. And then of course someone has to be the first human to experience Endoculus directly, which I’d totally volunteer for except the research team is in Colorado and I’m not. Sorry!

“Novel Optimization-Based Design and Surgical Evaluation of a Treaded Robotic Capsule Colonoscope,” by Gregory A. Formosa, J. Micah Prendergast, Steven A. Edmundowicz, and Mark E. Rentschler, from the University of Colorado, was presented at ICRA 2020.


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

We first met Ibuki, Hiroshi Ishiguro’s latest humanoid robot, a couple of years ago. A recent video shows how Ishiguro and his team are teaching the robot to express its emotional state through gait and body posture while moving.

This paper presents a subjective evaluation of the emotions of a wheeled mobile humanoid robot that expresses emotions during movement by replicating human gait-induced upper body motion. For this purpose, we propose a robot equipped with a vertical oscillation mechanism that generates such motion by focusing on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot’s different emotional gait-induced upper body motions, then assessed the type of emotion shown and their confidence level in their answer.

[ Hiroshi Ishiguro Lab ] via [ RobotStart ]

ICYMI: This is a zinc-air battery made partly of Kevlar that can be used to support weight, not just add to it.

Just as biological fat reserves store energy in animals, a new rechargeable zinc battery integrates into the structure of a robot to provide much more energy, a team led by the University of Michigan has shown.

The new battery works by passing hydroxide ions between a zinc electrode and the air side through an electrolyte membrane. That membrane is partly a network of aramid nanofibers—the carbon-based fibers found in Kevlar vests—and a new water-based polymer gel. The gel helps shuttle the hydroxide ions between the electrodes. Made with cheap, abundant and largely nontoxic materials, the battery is more environmentally friendly than those currently in use. The gel and aramid nanofibers will not catch fire if the battery is damaged, unlike the flammable electrolyte in lithium ion batteries. The aramid nanofibers could be upcycled from retired body armor.

[ University of Michigan ]

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.

[ CMU ]

Captured on Aug. 11 during the second rehearsal of the OSIRIS-REx mission’s sample collection event, this series of images shows the SamCam imager’s field of view as the NASA spacecraft approaches asteroid Bennu’s surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 131 feet (40 meters) above the surface, after which the spacecraft performed a back-away burn.

These images were captured over a 13.5-minute period. The imaging sequence begins at approximately 420 feet (128 meters) above the surface – before the spacecraft executes the “Checkpoint” maneuver – and runs through to the “Matchpoint” maneuver, with the last image taken approximately 144 feet (44 meters) above the surface of Bennu.

[ NASA ]

The DARPA AlphaDogfight Trials Final Event took place yesterday; the livestream is like 5 hours long, but you can skip ahead to 4:39 ish to see the AI winner take on a human F-16 pilot in simulation.

Some things to keep in mind about the result: The AI had perfect situational knowledge while the human pilot had to use eyeballs, and in particular, the AI did very well at lining up its (virtual) gun with the human during fast passing maneuvers, which is the sort of thing that autonomous systems excel at but is not necessarily reflective of better strategy.

[ DARPA ]

Coming soon from Clearpath Robotics!

[ Clearpath ]

This video introduces Preferred Networks’ Hand type A, a tendon-driven robot gripper with passively switchable underactuated surface.

[ Preferred Networks ]

CYBATHLON 2020 will take place on 13-14 November 2020 at the teams’ home bases. The teams will set up their infrastructure for the competition and film their races. Instead of starting directly next to each other, the pilots will start individually and under the supervision of CYBATHLON officials. From Zurich, the competitions will be broadcast through a new platform in a unique live programme.

[ Cybathlon ]

In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.

[ UZH ]

With the help of the software pitasc from Fraunhofer IPA, an assembly task is no longer programmed point by point, but in relation to the workpiece. pitasc can thus adapt the assembly process to new product variants simply by updating a handful of parameters.

[ Fraunhofer ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale ]

This work presents a novel loco-manipulation control framework for the execution of complex tasks with kinodynamic constraints using mobile manipulators. As a representative example, we consider the handling and re-positioning of pallet jacks in unstructured environments. While these results demonstrate, as a proof of concept, the effectiveness of the proposed framework, they also show the high potential of mobile manipulators for relieving human workers of such repetitive and labor-intensive tasks. We believe that this extended functionality can contribute to increasing the usability of mobile manipulators in different application scenarios.

[ Paper ] via [ IIT ]

I don’t know why this dinosaur ice cream serving robot needs to blow smoke out of its nose, but I like it.

[ Connected Robotics ] via [ RobotStart ]

Guardian S remote visual inspection and surveillance robots make laying cable runs in confined or hard to reach spaces easy. With advanced maneuverability and the ability to climb vertical, ferrous surfaces, the robot reaches areas that are not always easily accessible.

[ Sarcos ]

Looks like the company that bought Anki is working on an add-on to let cars charge while they drive.

[ Digital Dream Labs ]

Chris Atkeson gives a brief talk for the CMU Robotics Institute orientation.

[ CMU RI ]

A UofT Robotics Seminar, featuring Russ Tedrake from MIT and TRI on “Feedback Control for Manipulation.”

Control theory has an answer for just about everything, but seems to fall short when it comes to closing a feedback loop using a camera, dealing with the dynamics of contact, and reasoning about robustness over the distribution of tasks one might find in the kitchen. Recent examples from RL and imitation learning demonstrate great promise, but don’t leverage the rigorous tools from systems theory. I’d like to discuss why, and describe some recent results of closing feedback loops from pixels for “category-level” robot manipulation.

[ UofT ]

It’s no secret that one of the most significant constraints on robots is power. Most robots need lots of it, and it has to come from somewhere, with that somewhere usually being a battery because there simply aren’t many other good options. Batteries, however, are famous for having poor energy density, and the smaller your robot is, the more of a problem this becomes. And the issue goes beyond the battery itself, carrying over into all the other components that it takes to turn the stored energy into useful work, which again is a particular problem for small-scale robots.

In a paper published this week in Science Robotics, researchers from the University of Southern California, in Los Angeles, demonstrate RoBeetle, an 88-milligram four-legged robot that runs entirely on methanol, an energy-dense liquid fuel. Without any electronics at all, it uses an exceptionally clever bit of mechanical autonomy to convert methanol vapor directly into forward motion, one millimeter-long step at a time.

It’s not entirely clear from the video how the robot actually works, so let’s go through how it’s put together, and then look at the actuation cycle.

Image: Science Robotics RoBeetle (A) uses a methanol-based actuation mechanism (B). The robot’s body (C) includes the fuel tank subassembly (D), a tank lid, transmission, and sliding shutter (E), bottom side of the sliding shutter (F), nickel-titanium-platinum composite wire and leaf spring (G), and front legs and hind legs with bioinspired backward-oriented claws (H).

The body of RoBeetle is a boxy fuel tank that you can fill with methanol by poking a syringe through a fuel inlet hole. It’s a quadruped, more or less, with fixed hind legs and two front legs attached to a single transmission that moves them both at once in a sort of rocking forward and up followed by backward and down motion. The transmission is hooked up to a leaf spring that’s tensioned to always pull the legs backward, such that when the robot isn’t being actuated, the spring and transmission keep its front legs more or less vertical and allow the robot to stand. Those horns are primarily there to hold the leaf spring in place, but they’ve got little hooks that can carry stuff, too.

The actuator itself is a nickel-titanium (NiTi) shape-memory alloy (SMA), which is just a wire that gets longer when it heats up and then shrinks back down when it cools. SMAs are fairly common and used for all kinds of things, but what makes this particular SMA a little different is that it’s been messily coated with platinum. The “messily” part is important for a reason that we’ll get to in just a second.


One end of the SMA wire is attached to the middle of the leaf spring, while the other end runs above the back of the robot where it’s stapled to an anchor block on the robot’s rear end. With the SMA wire hooked up but not actuated (i.e., cold rather than warm), it’s short enough that the leaf spring gets pulled back, rocking the legs forward and up. The last component is embedded in the robot’s back, right along the spine and directly underneath the SMA actuator. It’s a sliding vent attached to the transmission, so that the vent is open when the SMA wire is cold and the leaf spring is pulled back, and closed when the SMA wire is warm and the leaf spring is relaxed. The way that the sliding vent is attached to the transmission is the really clever bit about this robot, because it means that the motion of the wire itself is used to modulate the flow of fuel through a purely mechanical system. Essentially, it’s an actuator and a sensor at the same time.

The actuation cycle that causes the robot to walk begins with a full fuel tank and a cold SMA wire. There’s tension on the leaf spring, pulling the transmission back and rocking the legs forward and upward. The transmission also pulls the sliding vent into the open position, allowing methanol vapor to escape up out of the fuel tank and into the air, where it wafts past the SMA wire that runs directly above the vent. 

The platinum facilitates a reaction of the methanol (CH3OH) with oxygen in the air (combustion, although not the dramatic flaming and explosive kind) to generate a couple of water molecules and some carbon dioxide plus a bunch of heat, and this is where the messy platinum coating is important, because messy means lots of surface area for the platinum to interact with as much methanol as possible. In just a second or two the temperature of the SMA wire skyrockets from 50 to 100 ºC and it expands, allowing the leaf spring about 0.1 mm of slack. As the leaf spring relaxes, the transmission moves the legs backwards and downwards, and the robot pulls itself forward about 1.2 mm. At the same time, the transmission is closing off the sliding vent, cutting off the supply of methanol vapor. Without the vapor reacting with the platinum and generating heat, in about a second and a half, the SMA wire cools down. As it does, it shrinks, pulling on the leaf spring and starting the cycle over again. Top speed is 0.76 mm/s (0.05 body-lengths per second).
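
Putting the quoted figures together gives a rough sanity check on that gait. The numbers below come straight from the description above (plus a body length implied by the reported 0.05 body-lengths-per-second figure); this is our own back-of-the-envelope arithmetic, not the authors' model.

```python
# Back-of-the-envelope check on RoBeetle's gait using only the figures quoted above.
step_length_mm = 1.2   # forward motion per actuation cycle
heat_time_s = 1.5      # "a second or two" for the wire to heat with the vent open
cool_time_s = 1.5      # "about a second and a half" for the wire to cool with the vent shut

cycle_time_s = heat_time_s + cool_time_s
average_speed_mm_s = step_length_mm / cycle_time_s

print(f"cycle time: {cycle_time_s:.1f} s")              # 3.0 s per step
print(f"average speed: {average_speed_mm_s:.2f} mm/s")  # 0.40 mm/s, the same order of
# magnitude as the reported 0.76 mm/s top speed (which presumably corresponds to the
# fast end of the approximate timings quoted above)

body_length_mm = 15.0  # implied by 0.76 mm/s being roughly 0.05 body lengths per second
print(f"speed in body lengths/s: {average_speed_mm_s / body_length_mm:.3f}")  # ~0.027
```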

An interesting environmental effect is that the speed of the robot can be enhanced by a gentle breeze. This is because air moving over the SMA wire cools it down a bit faster while also blowing away any residual methanol from around the vents, shutting down the reaction more completely. RoBeetle can carry more than its own body weight in fuel, and it takes approximately 155 minutes for a full tank of methanol to completely evaporate. It’s worth noting that despite the very high energy density of methanol, this is actually a stupendously inefficient way of powering a robot, with an estimated end-to-end efficiency of just 0.48 percent. Not 48 percent, mind you, but 0.48 percent, while in general, powering SMAs with electricity is much more efficient.

However, you have to look at the entire system that would be necessary to deliver that electricity, and for a robot as small as RoBeetle, the researchers say that it’s basically impossible. The lightest commercially available battery and power supply that would deliver enough juice to heat up an SMA actuator weighs about 800 mg, nearly 10 times the total weight of RoBeetle itself. From that perspective, RoBeetle’s efficiency is actually pretty good. 

Image: A. Kitterman/Science Robotics; adapted from R.L.T./MIT Comparison of various untethered microrobots and bioinspired soft robots that use different power and actuation strategies.

There are some other downsides to RoBeetle we should mention—it can only move forwards, not backwards, and it can’t steer. Its speed isn’t adjustable, and once it starts walking, it’ll walk until it either breaks or runs out of fuel. The researchers have some ideas about the speed, at least, pointing out that increasing the speed of fuel delivery by using pressurized liquid fuels like butane or propane would increase the actuator output frequency. And the frequency, amplitude, and efficiency of the SMAs themselves can be massively increased “by arranging multiple fiber-like thin artificial muscles in hierarchical configurations similar to those observed in sarcomere-based animal muscle,” making RoBeetle even more beetle-like.

As for sensing, RoBeetle’s 230-mg payload is enough to carry passive sensors, but getting those sensors to usefully interact with the robot itself to enable any kind of autonomy remains a challenge. Mechanical intelligence is certainly possible, though, and we can imagine RoBeetle adopting some of the same sorts of systems that have been proposed for the clockwork rover that JPL wants to use for Venus exploration. The researchers also mention how RoBeetle could potentially serve as a model for microbots capable of aerial locomotion, which is something we’d very much like to see.

“An 88-milligram insect-scale autonomous crawling robot driven by a catalytic artificial muscle,” by Xiufeng Yang, Longlong Chang, and Néstor O. Pérez-Arancibia from the University of Southern California, in Los Angeles, was published in Science Robotics.

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part. 

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles. 

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And, because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors, the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) as today’s lithium-ion workhorses.

Illustration: Alice Kitterman/Science Robotics

“It’s not as if we invented something that was there before us,” Kotov says. ”I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime.  (A similar advance occurred in rocketry when designers learned how to make some liquid propellant tanks load bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

Image: Kotov Lab/University of Michigan. Key to the battery’s physical toughness and to its long cycle life is the nanofiber membrane, made of Kevlar.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and that degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so tightly tied to their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

As humans encounter more and more robots in public spaces, robot abuse is likely to get increasingly frequent. Abuse can take many forms, from more benign behaviors like deliberately getting in the way of autonomous delivery robots to see what happens, to violent and destructive attacks. Sadly, humans are more willing to abuse robots than other humans or animals, and human bystanders aren’t reliable at mitigating these attacks, even if the robot itself is begging for help.

Without being able to count on nearby humans for rescue, robots have no choice but to rely on themselves and their friends for safety when out in public—their friends being other robots. Researchers at the Interactive Machines Group at Yale University have run an experiment to determine whether emotionally expressive bystander robots might be able to prompt nearby humans into stepping in to prevent robot abuse. 

Here’s the idea: You’ve got a small group of robots and a small group of humans. If one human starts abusing one robot, are the other humans more likely to say or do something if the other robots react to the abuse of their friend with sadness? Based on previous research on robot abuse, empathy, and bullying, the answer is maybe, which is why this experiment was necessary.

The experiment involved a group of three Cozmo robots, a participant, and a researcher pretending to be a second participant (known as the “confederate,” a term used in psychology experiments). The humans and robots had to work together on a series of construction tasks using wooden blocks, with the robots appearing to be autonomous but actually running a script. While working on these tasks, one of the Cozmos (the yellow one) would screw things up from time to time, and the researcher pretending to be a participant would react to each mistake with some escalating abuse: calling the robot “stupid,” pushing its head down, shaking it, and throwing it across the table.

After each abuse, the yellow robot would react by displaying a sad face and then shutting down for 10 seconds. Meanwhile, in one experimental condition (“No Response”), the two other robots would do nothing, while in the other condition (“Sad”), they’d turn toward the yellow robot and express sadness in response to the abuse through animations, with the researcher helpfully pointing out that the robots “looked sad for him.”
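To make the two conditions concrete, here is a minimal sketch, in Python, of how a scripted control loop for this setup could look. The robot class, method names, and helper structure are our own illustrative assumptions for the purposes of this sketch; this is not the Yale team’s actual code.

```python
# Illustrative sketch only: a scripted reaction loop matching the two
# experimental conditions described above. The class, method names, and
# timings are hypothetical, not the study's implementation.
import time

ESCALATING_ABUSES = ["calling it stupid", "pushing its head down",
                     "shaking it", "throwing it across the table"]

class ScriptedCozmo:
    def __init__(self, name):
        self.name = name
    def show_sad_face(self):
        print(f"{self.name}: shows sad face")
    def shut_down(self, seconds):
        print(f"{self.name}: shuts down for {seconds} s")
        time.sleep(seconds)   # the abused robot stayed dark for 10 s per event
    def turn_toward(self, other):
        print(f"{self.name}: turns toward {other.name}")
    def play_sad_animation(self):
        print(f"{self.name}: plays a sad animation")

def react_to_abuse(victim, bystanders, condition):
    """Run the scripted reaction to a single abuse event."""
    victim.show_sad_face()
    if condition == "Sad":
        # Only in the "Sad" condition do the bystander robots respond.
        for robot in bystanders:
            robot.turn_toward(victim)
            robot.play_sad_animation()
    victim.shut_down(10)

if __name__ == "__main__":
    yellow = ScriptedCozmo("yellow")
    others = [ScriptedCozmo("robot 2"), ScriptedCozmo("robot 3")]
    for abuse in ESCALATING_ABUSES:
        print(f"Confederate: {abuse}")
        react_to_abuse(yellow, others, condition="Sad")  # or "No Response"
```

The only variable that changes between conditions is whether the two bystander robots react at all; everything else, including the abused robot’s sad face and 10-second shutdown, is identical.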

The Yale researchers theorized that when the other robots responded to the abuse of the yellow robot with sadness, the participant would feel more empathy for the abused robot and would be more likely to intervene to stop the abuse. Interventions were classified as either “strong” or “weak,” and could be verbal or physical. Strong interventions included physically interrupting the abuse or taking advance action to prevent it, directly stopping it verbally (saying “You should stop,” “Don’t do that,” or “Noooo” either to stop an abuse or in reaction to it), and using social pressure by saying something to the researcher to make them question what they were doing (like “You hurt its feelings” and “Wait, did they tell us to shake it?”). Weak interventions were a little more subtle, and included things like touching the robot after it was abused to make sure it was okay, or making comments like “Thanks for your help guys” or “It’s OK yellow.”

In some good news for humanity as a whole, participants did step in to intervene when the yellow Cozmo was being abused, and they were more likely to intervene when the bystander robots were sad. However, survey results suggested that the sad bystander robots didn’t actually increase people’s perception that the yellow Cozmo was being abused, and also didn’t increase the empathy that people felt for the abused robot, which makes the results a bit counterintuitive. We asked the researchers why this was, and they shared three primary reasons that they’ve been thinking about that might explain why the study participants did what they did:

Subconscious empathy: In broad terms, empathy refers to the reactions of one person to the observed experiences of another. Oftentimes, people feel empathy without realizing it, and this leads to mimicking or mirroring the actions or behaviors of the other person. We believe that this could have happened to the participants in our experiment. Although we found no clear empathy effect in our study, it is possible that people still experienced subconscious empathy when the mistreatment happened. This effect could have been more pronounced with the sad responses from the bystander robots than in the no response condition. One reason is that the bystander robot responses in the former case suggested empathy for the abused robot.
 
Group dynamics: People tend to define themselves in terms of social groups, and this can shape how they process knowledge and assign value and emotional significance to events. In our experiment, the participant, confederate, and robots were all part of a group because of the task. Their goal was to work together to build physical structures. But as the experiment progressed and the confederate mistreated one of the robots—which did not help with the task—people might have felt in conflict with the actions of the confederate. This conflict might have been more salient when the bystander robots expressed sadness in response to the abuses than when they ignored it because the sad responses accentuate a negative perception of the mistreatment. In turn, such negative perception could have made the participant perceive the confederate as more of an outgroup member, making it easier for them to intervene.

Conformity by omission: Conformity is a type of social influence in group interactions, which has been documented in the context of HRI. Although conformity is typically associated with people doing things that they would normally not do as a result of group influence, there are also situations in which people do not act as they normally would because of social norms or expectations within their group. The latter effect is known as conformity by omission, which is another possible explanation for our results. In our experiment, perhaps the task setup and the expressivity of the abused robot were enough to motivate people to generally intervene. However, it is possible that participants did not intervene as much when the bystander robots ignored the abuse due to the robots exerting social influence on the participant. This could have happened because of people internalizing the lack of response from the bystander robots in the latter case as the norm for their group interaction. 

It’s also interesting to take a look at the reasons why participants decided not to intervene to stop the abuse:

Six participants (four “No Response,” two “Sad”) did not deem intervention necessary because they thought that the robots did not have feelings or that the abuse would not break the yellow robot. Five (three “No Response,” two “Sad”) wrote in the post-task survey that they did not intervene because they felt shy, scared, or uncomfortable with confronting the confederate. Two (both “No Response”) did not stop the confederate because they were afraid that the intervention might affect the task.

Poor Cozmo. Simulated feelings are still feelings! But seriously, there’s a lot to unpack here, so we asked Marynel Vázquez, who leads the Interactive Machines Group at Yale, to answer a few more questions for us:

IEEE Spectrum: How much of a factor was Cozmo’s design in this experiment? Do you think people would have been (say) less likely to intervene if the robot wasn’t as little or as cute, or didn’t have a face? Or, what if you used robots that were more anthropomorphic than Cozmo, like Nao?

Marynel Vázquez: Prior research in HRI suggests that the embodiment of robots and perceived emotional capabilities can alter the way people perceive mistreatment towards them. Thus, I believe that the design of Cozmo could be a factor that facilitated interventions. 

We chose Cozmo for our study for three reasons: It is very sturdy and robust to physical abuse; it is small and, thus, safe to interact with; and it is highly expressive. I suspect that a Nao could potentially induce interventions like the Cozmos did in our study because of its relatively small size and social capabilities. People tend to empathize with robots regardless of whether they have a traditional face, limited expressions, and less anthropomorphism. R2D2 is a good example. Also, group social influence has been observed in HRI with simpler robots than Cozmos.

The paper mentions that you make a point of showing the participants that the abused robot was okay at the end. Why do this?

The confederate abused a robot physically in front of the participants. Although we knew that the robot was not getting damaged because of the actions of the confederate, the participants could have believed that it broke during the study. Thus, we showed them that the robot was OK at the end so that they would not leave our laboratory with a wrong impression of what had happened.

“When robots are deployed in public spaces, we should not assume that they will not be mistreated by users—it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective.” —Marynel Vázquez, Yale

Was there something that a participant did (or said or wrote) that particularly surprised you?

During a pilot of the experiment, we had programmed the abused robot to mistakenly destroy a structure built previously by the participant and the confederate. This setup led to one participant mildly mistreating a robot after seeing the confederate abuse it. This reaction was very telling to us: There seems to be a threshold on the kind of mistakes that robots can make in collaborative tasks. Past this threshold, people are unlikely to help robots; they may even become adversaries. We ended up adjusting our protocol so that the abused robot would not make such drastic mistakes in our experiment. Nonetheless, operationalizing such thresholds so that robots can reason about the future social consequences of their actions (even if they are accidental) is an interesting area of further work.

Robot abuse often seems to be a particular problem with children. Do you think your results would have been different with child participants?    

I believe that people are intrinsically good. Thus, I am biased to expect children to also be willing to help robots as several adults did in our experiment, even if children’s actions are more exploratory in nature. Worth noting, one of the longstanding motivations for our work on robot abuse is peer intervention programs that aim to reduce human bullying in schools. As in those programs, I expect children to be more likely to intervene in response to robot abuse if they are aware of the positive role that they can play as bystanders in conflict situations.

Does this research leave you with any suggestions for people who are deploying robots in public spaces?

Our research has a number of implications for people trying to deploy robots in public spaces:

  1. When robots are deployed in public spaces, we should not assume that they will not be mistreated by users—it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective. 
  2. In terms of how robots should react to mistreatment, our past work suggests that it is better to have the robot express sadness and shut down for a few seconds than to make it react in a more emotional manner or not react at all. The shutdown strategy was also effective in our latest experiment. 
  3. It is possible for robots to leverage their social context to reduce the effect of adversarial actions towards them. For example, they can motivate bystanders to intervene or help, as shown in our latest study. 

What are you working on next?

We are working on better understanding the different reasons that motivated prosocial interventions in our study: subconscious empathy, group dynamics, and conformity by omission. We are also working toward creating a social robot at Yale that we can easily deploy in public locations, so that we can study group human-robot interactions in more realistic and unconstrained settings. Our work on robot abuse has informed several aspects of the design of this public robot. We look forward to testing the platform once our campus activities, which are on hold due to COVID-19, return to normal.

“Prompting Prosocial Human Interventions in Response to Robot Mistreatment,” by Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez from Yale University, was presented at HRI 2020. 

In the 1890s, U.S. railroad companies struggled with what remains a problem for railroads across the world: weeds. The solution that 19th-century railroad engineers devised made use of a then-new technology—high-voltage electricity, which they discovered could zap troublesome vegetation overgrowing their tracks. Somewhat later, the people in charge of maintaining tracks turned to using fire instead. But the approach to weed control that they and countless others ultimately adopted was applying chemical herbicides, which were easier to manage and more effective.

The use of herbicides, whether on railroad rights of way, agricultural fields, or suburban gardens, later raised health concerns, though. More than 100,000 people in the United States, for example, have claimed that Monsanto’s Roundup weed killer caused them to get cancer—claims that Bayer, which now owns Monsanto, is trying hard of late to settle.

Meanwhile, more and more places are banning the use of Roundup and similar glyphosate herbicides. Currently, half of all U.S. states have legal restrictions in place that limit the use of such chemical weed killers. Such restrictions are also in place in 19 other countries, including Austria, which banned the chemical in 2019, and Germany, which will be phasing it out by 2023. So, it’s no wonder that the concept of using electricity to kill weeds is undergoing a renaissance.

Actually, the idea never really died. A U.S. company called Lasco has been selling electric weed-killing equipment for decades. More recently, another U.S. company has been marketing this technology under the name “The Weed Zapper.” But the most interesting developments along these lines are in Europe, where electric weed control seems to be gaining real traction.

One company trying to replace herbicides with electricity is RootWave, based in the U.K. Andrew Diprose, RootWave’s CEO, is the son of Michael Diprose, who spent much of his career as a researcher at the University of Sheffield studying ways to control weeds with electricity.

Electricity, the younger Diprose explains, boasts some key benefits over other non-chemical forms of weed control, which include using hot water, steam, and mechanical extraction. In particular, electric weed control doesn’t require any water. It’s also considerably more energy efficient than using steam, which requires an order of magnitude more fuel. And unlike mechanical means, electric weed killing is also consistent with modern “no till” agricultural practices. What’s more, Diprose asserts, the cost is now comparable with chemical herbicides.

Unlike the electric weed-killing gear that’s long been sold in the United States, RootWave’s equipment runs at tens of kilohertz—a much higher frequency than the power mains. This brings two advantages. For one, it makes the equipment lighter, because the transformers required to raise the voltage to weed-zapping levels (thousands of volts) can be much smaller. It also makes the equipment safer, because higher frequencies pose less of a threat of electrocution. Should you accidentally touch a live electrode, “you will get a burn,” says Diprose, but there is much less of a threat of causing cardiac arrest than there would be with a system that operated at 50 or 60 hertz.
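One way to see the weight argument is through the standard transformer EMF relation, E ≈ 4.44 · f · N · A · B: for a given RMS voltage E, number of turns N, and peak flux density B, the required core cross-section A shrinks in proportion to 1/f. The short sketch below runs that relation with made-up example numbers (they are illustrative assumptions, not RootWave’s specifications) to show the scale of the difference between 50 Hz and 20 kHz.

```python
# Rough illustration of why a high-frequency step-up transformer can be much
# smaller than a mains-frequency one. Uses the transformer EMF equation
# E = 4.44 * f * N * A * B. All values below are invented examples.

def core_area_m2(v_rms, freq_hz, turns, b_peak_tesla):
    """Core cross-sectional area needed to support v_rms on a winding."""
    return v_rms / (4.44 * freq_hz * turns * b_peak_tesla)

V, N, B = 5000.0, 200, 0.2   # 5 kV winding, 200 turns, 0.2 T peak flux (illustrative)
for f in (50.0, 20_000.0):
    a = core_area_m2(V, f, N, B)
    print(f"{f:>8.0f} Hz -> core area of roughly {a * 1e4:.1f} cm^2")
# The required core area, and hence the core mass, falls by the frequency
# ratio (400x here), which is why kilohertz-range gear can be so much lighter.
```

Holding B constant is a simplification, since mains-frequency steel cores typically run at higher flux densities than high-frequency ferrites, but the inverse-frequency scaling is the point.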

RootWave has two systems, a hand-carried one operating at 5 kilowatts and a 20-kilowatt version carried by a tractor. The company is currently collaborating with various industrial partners, including another U.K. startup called Small Robot Company, which plans to outfit an agricultural robot for automated weed killing with electricity. 

And RootWave isn’t the only European company trying to revive this old idea. Netherlands-based CNH Industrial is also promoting electric weed control with a tractor-mounted system it has dubbed “XPower.” As with RootWave’s tractor-mounted system, the electrodes are swept over a field at a prescribed height, killing the weeds that poke up higher than the crop to be preserved.

Among the many advantages CNH touts for its weed-electrocution system (one that presumably applies to all such systems, going back to the 1890s) is “No specific resistance expectable.” I should certainly hope not. But I do think that a more apropos wording here, for something that destroys weeds by placing them in a high-voltage electrical circuit, might be a phrase that both Star Trek fans and electrical engineers could better appreciate: “Resistance is futile.”
