IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Drones in swarms (especially large swarms) generally rely on a centralized controller to keep them organized and from crashing into each other. But as swarms get larger and do more stuff, that's something that you can't always rely on, so folks at EPFL are working on a localized inter-drone communication system that can accomplish the same thing.
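
As a rough, generic illustration of the flavor of decentralized control (this is not the EPFL team’s predictive controller, just a toy potential-field sketch), each drone could compute its own velocity command using nothing more than its goal and the positions its neighbors broadcast over local communication:

```python
import numpy as np

def local_velocity_command(own_pos, own_goal, neighbor_positions,
                           max_speed=2.0, safe_dist=1.5, k_goal=1.0, k_avoid=2.0):
    """Toy decentralized controller: attract toward a goal, repel from
    neighbors closer than a safety distance. Positions are 3D numpy arrays
    in meters; the gains are illustrative, not tuned."""
    cmd = k_goal * (own_goal - own_pos)          # attraction toward the goal
    for p in neighbor_positions:                 # states shared over local comms
        offset = own_pos - p
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < safe_dist:
            # repulsion grows as the neighbor gets closer
            cmd += k_avoid * (safe_dist - dist) * offset / dist
    speed = np.linalg.norm(cmd)
    if speed > max_speed:                        # saturate to a feasible speed
        cmd *= max_speed / speed
    return cmd
```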

Predictive control of aerial swarms in cluttered environments, by Enrica Soria, Fabrizio Schiano and Dario Floreano from EPFL, is published this week in Nature.

[ EPFL ]

It takes a talented team of brilliant people to build Roxo, the first FedEx autonomous delivery robot. Watch this video to meet a few of the faces behind the bot–at FedEx Office and at DEKA Research.

Hey has anyone else noticed that the space between the E and the X in the FedEx logo looks kinda like an arrow?

[ FedEx ]

Thanks Fan!

Lingkang Zhang’s latest quadruped, ChiTu, runs ROS on a Raspberry Pi 4B. Despite its mostly-3D-printed construction and low-cost servos, it looks to be quite capable.

[ Lingkang Zhang ]

Thanks Lingkang!

Wolfgang-OP is an open-source humanoid platform designed for RoboCup, which means it's very good at falling over and not exploding.

[ Hamburg Bit-Bots ]

Thanks Fan!

NASA’s Perseverance rover has been on the surface of Mars since February of 2021, joining NASA’s Curiosity rover, which has been studying the Red Planet since 2012. Perseverance is now beginning to ramp up its science mission on Mars while preparing to collect samples that will be returned to Earth on a future mission. Curiosity is ready to explore some new Martian terrain. This video provides a mission update from Perseverance Surface Mission Manager Jessica Samuels and Curiosity Deputy Project Scientist Abigail Fraeman.

[ NASA ]

It seems kinda crazy to me that this is the best solution for this problem, but I’m glad it works.

[ JHU LCSR ]

At USC’s Center for Advanced Manufacturing, we have developed a spray-painting robot, which we used to paint a USC-themed Tommy Trojan mural.

[ USC ]

ABB Robotics is driving automation in the construction industry with new robotic automation solutions to address key challenges, including the need for more affordable and environmentally friendly housing and to reduce the environmental impact of construction, amidst a labor and skills shortage.

[ ABB ]

World’s first! Get to know our new avocado packing robot, the Speedpacker, which we have developed in conjunction with the machinery maker Selo. With this innovative robot, we pack avocados ergonomically and efficiently to be an even better partner for our customers and growers.

[ Nature's Pride ]

KUKA robots with high payload capacities were used for medical technology applications for the first time at the turn of the millennium. To this day, robots with payload capacities of up to 500 kilograms are a mainstay of medical robotics.

[ Kuka ]

We present a differential inverse kinematics control framework for task-space trajectory tracking, force regulation, obstacle and singularity avoidance, and pushing an object toward a goal location, with limited sensing and knowledge of the environment.
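
For readers unfamiliar with the term, “differential inverse kinematics” boils down to mapping a task-space error to joint velocities through the arm’s Jacobian. Here’s a minimal, generic sketch of that core step using damped least squares (the lab’s actual framework layers force regulation, obstacle avoidance, and pushing on top of this; `forward_kinematics` and `jacobian` are assumed to be supplied by your robot model):

```python
import numpy as np

def diff_ik_step(q, x_desired, forward_kinematics, jacobian,
                 k_p=1.0, dt=0.01, damping=1e-2):
    """One damped-least-squares differential IK step.
    q: joint angles (n,); x_desired: desired task-space position/pose (m,).
    forward_kinematics(q) -> (m,) and jacobian(q) -> (m, n) are assumed given."""
    error = x_desired - forward_kinematics(q)   # task-space tracking error
    J = jacobian(q)
    # Damping keeps joint velocities bounded near singularities
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), k_p * error)
    return q + dt * dq                          # integrate joint velocities
```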

[ Dynamic Systems Lab ]

Should robots in the real world trust models? I wouldn't!

[ Science Robotics ]

Mark Muhn has worked with the US FES CYBATHLON team Cleveland since 2012. For FES cycling he uses surgically implanted, intramuscular electrodes. In CYBATHLON 2016 and 2020, Mark finished in first and third place, respectively. At the recent International IEEE EMBS Conference on Neural Engineering (NER21), he described the importance of user-centered design.

[ Cybathlon ]

This just-posted TEDx talk entitled “Towards the robots of science fiction” from Caltech's Aaron Ames was recorded back in 2019, which I mention only to alleviate any anxiety you might feel seeing so many people maskless indoors.

I don’t know exactly what Aaron was doing at 3:00, but I feel like we’ve all been there with one robot or another.

[ AMBER Lab ]

Are you ready for your close-up? Our newest space-exploring cameras are bringing the universe into an even sharper focus. Imaging experts on our Mars rovers teams will discuss how we get images from millions of miles away to your screens.

[ JPL ]

Some of the world's top universities have entered the DARPA Subterranean Challenge, developing technologies to map, navigate, and search underground environments. Led by CMU's Robotics Institute faculty members Sebastian Scherer and Matt Travers, as well as OSU's Geoff Hollinger, Team Explorer has earned first and second place positions in the first two rounds of competition. They look forward to this third and final year of the challenge, with the competition featuring all the subdomains of tunnel systems, urban underground, and cave networks. Sebastian, Matt, and Geoff discuss and demo some of the exciting technologies under development.

[ Explorer ]

An IFRR Global Robotics Colloquium on “The Future of Robotic Manipulation.”

Research in robotic manipulation has made tremendous progress in recent years. This progress has been brought about by researchers pursuing different, and possibly synergistic approaches. Prominent among them, of course, is deep reinforcement learning. It stands in opposition to more traditional, model-based approaches, which depend on models of geometry, dynamics, and contact. The advent of soft grippers and soft hands has led to substantial success, enabling many new applications of robotic manipulation. Which of these approaches represents the most promising route towards progress? Or should we combine them to push our field forward? How can we close the substantial gap between robotic and human manipulation capabilities? Can we identify and transfer principles of human manipulation to robots? These are some of the questions we will attempt to answer in this exciting panel discussion.

[ IFRR ]

Bipedal robots are a huge hassle. They’re expensive, complicated, fragile, and they spend most of their time almost but not quite falling over. That said, bipeds are worth it because if you want a robot to go everywhere humans go, the conventional wisdom is that the best way to do so is to make robots that can walk on two legs like most humans do. And the most frequent, most annoying two-legged thing that humans do to get places? Going up and down stairs.

Stairs have been a challenge for robots of all kinds (bipeds, quadrupeds, tracked robots, you name it) since, well, forever. And usually, when we see bipeds going up or down stairs nowadays, it involves a lot of sensing, a lot of computation, and then a fairly brittle attempt that all too often ends in tears for whoever has to put that poor biped back together again.

You’d think that the solution to bipedal stair traversal would just involve better sensing and more computation to model the stairs and carefully plan footsteps. But an approach featured in an upcoming Robotics: Science and Systems conference paper from Oregon State University and Agility Robotics does away with all of that and instead just throws a Cassie biped at random outdoor stairs with absolutely no sensing at all. And it works spectacularly well.

A couple of things to bear in mind: Cassie is “blind” in the sense that it has no information about the stairs that it’s going up or down. The robot does get proprioceptive feedback, meaning that it knows what kind of contact its limbs are making with the stairs. Also, the researchers do an admirable job of keeping that safety tether slack, and Cassie isn’t being helped by it in the least—it’s just there to prevent a catastrophic fall.

What really bakes my noodle about this video is how amazing Cassie is at being kind of terrible at stair traversal. The robot is a total klutz: it runs into railings, stubs its toes, slips off of steps, misses steps completely, and occasionally goes backwards. Amazingly, Cassie still manages not only to not fall, but also to keep going until it gets where it needs to be.

And this is why this research is so exciting—rather than try to develop some kind of perfect stair traversal system that relies on high quality sensing and a lot of computation to optimally handle stairs, this approach instead embraces real-world constraints while managing to achieve efficient performance that’s real-world robust, if perhaps not the most elegant.

The secret to Cassie’s stair mastery isn’t much of a secret at all, since there’s a paper about it on arXiv. The researchers used reinforcement learning to train a simulated Cassie on permutations of stairs based on typical city building codes, with sets of stairs up to eight individual steps. To transfer the learned stair-climbing strategies (referred to as policies) effectively from simulation to the real world, the simulation included a variety of disturbances designed to represent the kinds of things that are hard to simulate accurately. For example, Cassie had its simulated joints messed with, its simulated processing speed tweaked, and even the simulated ground friction was jittered around. So, even though the simulation couldn’t perfectly mimic real ground friction, randomly mixing things up ensures that the controller (the software telling the robot how to move) gains robustness to a much wider range of situations.
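
This general recipe is usually called domain randomization. A minimal sketch of the idea looks something like the following; the parameter ranges are made up for illustration (not the paper’s values), and `sim` is a stand-in for whatever simulator API you’re using:

```python
import random

def randomize_simulation(sim):
    """Toy domain randomization: jitter hard-to-model physical parameters
    before each training episode. Ranges are illustrative only, and the
    `sim` attributes are assumed placeholders."""
    sim.ground_friction = random.uniform(0.4, 1.2)       # slippery to grippy
    sim.joint_damping_scale = random.uniform(0.8, 1.2)   # actuator model error
    sim.link_mass_scale = random.uniform(0.9, 1.1)       # payload/CoM uncertainty
    sim.control_latency_s = random.uniform(0.0, 0.03)    # processing-speed jitter
    return sim

# During training, every episode sees a slightly different "world":
# for episode in range(num_episodes):
#     sim = randomize_simulation(sim)
#     rollout_and_update_policy(sim, policy)
```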

One peculiarity of using reinforcement learning to train a robot is that even if you come up with something that works really well, it’s sometimes unclear exactly why. You may have noticed in the first video that the researchers are only able to hypothesize about the reasons for the controller’s success, and we asked one of the authors, Kevin Green, to try and explain what’s going on:

“Deep reinforcement learning has similar issues that we are seeing in a lot of machine learning applications. It is hard to understand the reasoning for why a learned controller performs certain actions. Is it exploiting a quirk of your simulation or your reward function? Is it perhaps stuck in a local minima? Sometimes the reward function is not specific enough and the policy can exhibit strange, vestigial behaviors simply because they are not rewarded or penalized. On the other hand, a reward function can be too constraining and can lead to a policy which doesn’t fully explore the space of possible actions, limiting performance. We do our best to ensure our simulation is accurate and that our rewards are objective and descriptive. From there, we really act more like biomechanists, observing a functioning system for hints as to the strategies that it is using to be highly successful.”

One of the strategies that they observed, first author Jonah Siekmann told us, is that Cassie does better on stairs when it’s moving faster, which is a bit of a counterintuitive thing for robots generally:

“Because the robot is blind, it can choose very bad foot placements. If it tries to place its foot on the very corner of a stair and shift its weight to that foot, the resulting force pushes the robot back down the stairs. At walking speed, this isn’t much of an issue because the robot’s momentum can overcome brief moments where it is being pushed backwards. At low speeds, the momentum is not sufficient to overcome a bad foot placement, and it will keep getting knocked backwards down the stairs until it falls. At high speeds, the robot tends to skip steps which pushes the robot closer to (and sometimes over) its limits.”

These bad foot placements are what lead to some of Cassie’s more impressive feats, Siekmann says. “Some of the gnarlier descents, where Cassie skips a step or three and recovers, were especially surprising. The robot also tripped on ascent and recovered in one step a few times. The physics are complicated, so to see those accurate reactions embedded in the learned controller was exciting. We haven’t really seen that kind of robustness before.” In case you’re worried that all of that robustness is in video editing, here’s an uninterrupted video of ten stair ascents and ten stair descents, featuring plenty of gnarliness.

We asked the researchers whether Cassie is better at stairs than a blindfolded human would be. “It’s difficult to say,” Siekmann told us. “We’ve joked lots of times that Cassie is superhuman at stair climbing because in the process of filming these videos we have tripped going up the stairs ourselves while we’re focusing on the robot or on holding a camera.”

A robot being better than a human at a dynamic task like this is obviously a very high bar, but my guess is that most of us humans are actually less prepared for blind stair navigation than Cassie is, because Cassie was explicitly trained on stairs that were uneven: “a small amount of noise (± 1cm) is added to the rise and run of each step such that the stairs are never entirely uniform, to prevent the policy from deducing the precise dimensions of the stairs via proprioception and subsequently overfitting to perfectly uniform stairs.” Speaking as someone who just tried jogging up my stairs with my eyes closed in the name of science, I absolutely relied on the assumption that my stairs were uniform. And when humans can’t rely on assumptions like that, it screws us up, even if we have eyeballs equipped. 
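
For the curious, generating that kind of training terrain is straightforward. Here is a minimal sketch with building-code-ish nominal dimensions; the exact ranges used in the paper may differ:

```python
import random

def random_staircase(num_steps=8):
    """Generate (rise, run) pairs for a randomized staircase, in meters.
    Nominal dimensions are drawn from a typical building-code-style range,
    then each individual step gets +/- 1 cm of noise so that no staircase
    is perfectly uniform."""
    nominal_rise = random.uniform(0.10, 0.20)   # step height
    nominal_run = random.uniform(0.25, 0.35)    # step depth
    steps = []
    for _ in range(num_steps):
        rise = nominal_rise + random.uniform(-0.01, 0.01)
        run = nominal_run + random.uniform(-0.01, 0.01)
        steps.append((rise, run))
    return steps
```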

Like most robot-y things, Cassie is operating under some significant constraints here. If Cassie seems even stompier than it usually is, that’s because it’s using this specific stair controller which is optimized for stairs and stair-like things but not much else.

“When you train neural networks to act as controllers, over time the learning algorithm refines the network so that it maximizes the reward specific to the environment that it sees,” explains Green. “This means that by training on flights of stairs, we get a very different looking controller compared to training on flat ground.” Green says that the stair controller works fine on flat ground, it’s just less efficient (and noisier). They’re working on ways of integrating multiple gait controllers that the robot can call on depending on what it’s trying to do; conceivably this might involve some very simple perception system just to tell the robot “hey look, there are some stairs, better engage stair mode.”
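
Conceptually, that top-level switch could be as simple as the sketch below, which is entirely hypothetical; the team hasn’t said how they would actually implement it:

```python
def select_controller(perception, controllers):
    """Hypothetical top-level switch between specialized gait controllers.
    `perception.sees_stairs()` stands in for whatever simple classifier flags
    stair-like terrain ahead; `controllers` maps mode names to trained policies."""
    if perception.sees_stairs():
        return controllers["stairs"]     # stompier, stair-robust policy
    return controllers["flat_ground"]    # quieter, more efficient policy
```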

The paper ends with the statement that “this work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie.” I’m certainly surprised at Cassie’s stair capabilities, and it’ll be exciting to see what other environments this technique can be applied to. If there are limits, I’m sure that Cassie is going to try and find them.

Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning, by Jonah Siekmann, Kevin Green, John Warila, Alan Fern, and Jonathan Hurst from Oregon State University and Agility Robotics, will be presented at RSS 2021 in July.

Kinova robotic arms, from left to right: Gen2, Gen3 lite, Gen3

Multiple companies turned to Kinova® robotic arms to create mobile platforms with manipulation capabilities to tackle many aspects of the health crisis. The addition of a dexterous manipulator to mobile platforms opens the door to applications such as patient care, disinfection, and cleaning — critical to the fight against the virus.

When the pandemic hit at the beginning of 2020, it quickly became clear that the human resources available to address all the different fronts in the fight against the virus would be thinly stretched — especially considering that these people are themselves subject to falling ill. Mobile robots with manipulation capabilities were quickly identified as a solution to alleviate this problem by freeing skilled people from menial tasks and by allowing remote or automated work, which keeps exposure to the virus to a minimum.

Multiple companies turned to Kinova robotic arms for an off-the-shelf manipulation solution suitable for mobile platforms. The history Kinova has with the assistive market is now at the core of the technology — assistive products such as the motorized wheelchair-mounted Jaco® robot were designed from the beginning to be extremely safe, user-friendly, ultra-lightweight, and power-efficient. This experience has carried over into more recent products as well. These features do not come at the expense of performance; in fact, Kinova robots boast some of the highest payload-to-weight ratios in the industry. It makes sense, then, that robots like these are ideal for applications involving mobile platforms and for integration into products meant to be interacted with in non-industrial settings.

One of the companies that successfully made such an integration is Diligent, which developed a patient care robot called Moxi by integrating a Kinova Gen2 robot arm onto a mobile platform powered by cloud-based software and artificial intelligence. Moxi is designed to help clinical staff with menial tasks that do not involve the patients, like fetching supplies, delivering samples, and distributing equipment, thus freeing skilled staff like nurses to perform more value-added tasks. Its rounded design and friendly face make interactions with it feel more natural for both the public and the hospital staff who otherwise may not be used to interacting with robots. In the current pandemic, one can easily understand how a robot such as Moxi can alleviate the workload of healthcare workers and quickly provide a return on investment for healthcare institutions.

Another type of menial task that became surprisingly important in the context of the health crisis is cleaning. Prior to the crisis, Peanut Robotics, a California startup that raised $2 million in 2019, was already developing a mobile platform carrying a Kinova Gen3 for cleaning commercial spaces such as restaurants, offices, hotels, and even airports. By coupling the 7-degree-of-freedom robot to a vertical rail, their system can reach even the most inconvenient places. Rather than using specialized robot end-effectors, they take advantage of the flexibility of the robot gripper to grab tools similar to what a human would use, making it possible to clean an entire room with a single system, including spraying disinfectant, scrubbing, and wiping, all autonomously. With more surfaces needing more frequent cleaning and contact with objects carrying a higher risk of infection, we will surely see this kind of robot more and more often.

However, not all environments are suitable for such deep cleaning. Common areas in malls or airports, for example, are simply too large and possibly too crowded for such operations. These are the kinds of cases that A&K Robotics is tackling with its Autonomous Mobile Robotic UV Disinfector (Amrud) — a project selected for funding by Canada’s Advanced Manufacturing Supercluster. They combined their expertise in navigation and mobile platforms with the capabilities of a Kinova Gen3 lite robot. The compact and extremely light (less than 6 kg) robot is carried around wielding a UV light source to disinfect surfaces. Its 6 degrees of freedom allow more than enough flexibility to wave the light source over even the most complex surfaces. A&K already made the news a few times in 2020 by deploying their solution to assist in the disinfection of floors and high-touch surfaces. When they started the project back in 2017 they did not get much traction, but it is clear that recent needs have earned them much-deserved attention.

As the pandemic settles, an ever-increasing number of applications for robots is being found. Whether it is traditionally non-automated industries looking to be more resilient to staff shortages or the democratization of working from home, robots are becoming more commonplace than ever. Kinova, with its wide range of robots, is there to help developers and integrators accomplish their tasks and to contribute to the growing role of collaborative robots in our daily lives.

Today the folks at Hackaday announced the 2021 Hackaday Prize, a hardware design challenge for DIYers, which this year goes by the theme “Rethink, Refresh, Rebuild.” The capital-P-Prize is actually a group of awards that range from a USD $25,000 grand prize to a set of $500 prizes given to the 50 top finalists.

This year the competition includes five separate “challenges”:

  • Rethink Displays
  • Refresh Work-From-Home Life
  • Reimagine Supportive Tech
  • Redefine Robots
  • Reactivate Wildcard

These short descriptions don’t always tell the whole story. For example, while “Rethink Displays” is pretty much what it says, “Reimagine Supportive Tech” isn’t just for projects like the one that won last year’s Hackaday Grand Prize—a mouth-operated interface to assist people with disabilities that prevent them from using other means to control a computer or wheelchair. This category also includes strategies for making hacking more accessible to, say, people who are too young to safely wield a soldering iron.

The “Refresh Work-From-Home Life” category anticipates that many of us will continue working from home, even after the pandemic becomes a distant memory. As somebody who has worked from home for more than a dozen years, I’m eager to see what the hackersphere comes up with in this realm.

“Redefine Robots” challenges contestants to come up with robotic companions or helpers that do something novel, including ones that are completely virtual. After the introduction of GPT-3, I’m a little worried about what hackers might be able to invent here.

“Reactivate Wildcard” is for projects that meet the general theme of reinvention, but don’t fit into the other categories.

Judging for the prize will take place in November, with the winners to be announced on November 19th. So you’ll have plenty of time to brainstorm and tinker.

If you want to compete, read over the official rules, fire up your design tools, and get hacking. Should you become stuck along the way, the good folks at Hackaday are even providing a mechanism to request some one-on-one mentoring. After a year in which many of us have been especially isolated, this seems a wonderful way to rethink, refresh, rebuild—and reconnect.

Early Saturday morning (Beijing time), China’s Tianwen-1 lander separated from its orbiter and successfully performed a fully autonomous powered descent to the Martian surface, carrying the Zhurong rover along with it. The landing site is Utopia Planitia, past site of the Viking 2 lander and future site of a major Starfleet shipyard. 

The China National Space Administration (CNSA) is not nearly as open about what it’s doing as NASA is, and the descent and landing were not shown live. The landing itself used a technique that we’re fairly familiar with, although just because similar systems have been used in the past doesn't make it any less impressive. The difference was that the Tianwen-1 spacecraft was orbiting Mars rather than skimming past it, making the entry velocity a mere 4 km/s rather than the 5.5 km/s experienced by Perseverance.

A bit of a bump at the end there, right? But there’s another video showing a much gentler landing, albeit with what looks like some kind of antigravity laser force field, although it could also be a representation of the lander’s obstacle-avoidance system.

We have yet to see any pictures from the surface, and the Zhurong rover is still perched on top of the descent stage—its wheels probably won’t touch the surface until next week, after the deployment of some little ramps, one for each set of wheels. Once safely down, the 240 kg solar-powered rover will begin its 90-day science mission, the most interesting aspect of which could be its search for subsurface water ice.

One of the instruments that Zhurong carries is RoPeR, a dual-frequency ground-penetrating radar system. It can look for ice layers up to 100 meters beneath the surface, which is a unique capability and could provide some additional information about where on Mars might be suitable for a base. Ice means water, and water means both something to breathe and something to fuel rockets with. This stuff is necessary for exploration, since trying to haul these resources to Mars from Earth is almost certainly not practical.

How much CNSA decides to share about Zhurong’s adventures going forward is anyone’s guess. NASA’s policy (with a few exceptions) is to post raw images from its spacecraft almost as soon as they’re received, but with Zhurong, it seems like we’ll have to just content ourselves with whatever CNSA wants us to see. Even so, it’s hard not to get excited for a little robot that’s about to start exploring a brand new piece of Mars for the very first time.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

The 2021 Computer-Human Interaction conference (CHI) took place this week; in amongst the stereo smelling and tooth control were some incredibly creative robotics projects, like this “HairTouch” system that uses robotic manipulation to achieve a variety of haptic sensations using hair.

We propose a pin-based handheld device, HairTouch, to provide stiffness differences, roughness differences, surface height differences and their combinations. HairTouch consists of two pins for the two finger segments close to the index fingertip, respectively. By controlling brush hairs’ length and bending direction to change the hairs’ elasticity and hair tip direction, each pin renders various stiffness and roughness, respectively.
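
The physics behind that is intuitive: an exposed hair behaves roughly like a cantilever, whose bending stiffness scales with about 1/L³, so retracting the hair makes it feel stiffer. A toy mapping from a target stiffness to an exposed hair length might look like the sketch below (my own illustration with placeholder constants, not the authors’ controller):

```python
def exposed_length_for_stiffness(k_target, c=675.0, len_min_mm=3.0, len_max_mm=15.0):
    """Invert the cantilever-style relation k ~ c / L^3 to choose how much hair
    to expose for a desired perceived stiffness k_target. The constant c and the
    length limits are illustrative placeholders, not measured values."""
    length = (c / max(k_target, 1e-6)) ** (1.0 / 3.0)   # softer target -> longer hair
    return max(len_min_mm, min(len_max_mm, length))
```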

[ NTU ]

Thanks Fan!

Here's another cool thing from CHI: a “Pneumatic Raspberry Pi for Soft Robotics.”

FlowIO is a miniature, modular, pneumatic development platform with a software toolkit for control, actuation, and sensing of soft robots and programmable materials. Five pneumatic ports and multiple fully-integrated modules to satisfy various pressure, flow, and size requirements make FlowIO suitable for most wearable and non-wearable pneumatic applications in HCI and soft robotics.

[ FlowIO ]

Thanks Fan!

NASA’s Ingenuity Mars Helicopter completed its fifth flight with a one-way journey from Wright Brothers Field to a new airfield 423 feet (129 meters) to the south on May 7, 2021.

NASA has 3D-ified Ingenuity's third flight, so dig up your 3D glasses and check it out:

Also, audio!

[ NASA ]

Until we can find a good way of training cats, we'll have to make do with robots if we want to study their neuromuscular dynamics.

Toyoaki Tanikawa and his supervisors, assistant professor Yoichi Masuda and professor Masato Ishikawa, developed a four-legged robot that enables the reproduction of animal motor control using computers. This quadruped robot, which comprises highly back-drivable legs to reproduce the flexibility of animals and torque-controllable motors, can reproduce the muscle characteristics of animals. Thus, it is possible to conduct various experiments using this robot instead of the animals themselves.

[ Osaka University ]

Thanks Yoichi!

Turner Topping is a PhD student and researcher with Kod*lab, a legged robotics group within the GRASP Lab at Penn Engineering. Through this video profile, one gains insight into Turner’s participation in the academic research environment, overcoming uncertainties and obstacles.

[ Kod*Lab ]

A team led by Assistant Professor Benjamin Tee from the National University of Singapore has developed a smart material known as AiFoam that could give machines a human-like sense of touch, to better judge human intentions and respond to changes in the environment.

[ NUS ]

Boston University mechanical engineers have developed a unique way to use an ancient Japanese art form for a very 21st-century purpose. In a paper published this week in Science Robotics, Douglas Holmes and BU PhD student Yi Yang demonstrate how they were inspired by kirigami, the traditional Japanese art of paper cutting (cousin of origami paper-folding art), to design soft robotic grippers.

[ BU ]

Turns out, if you give robots voices and names and googly eyes and blogs (?), people will try to anthropomorphize them. Go figure!

[ NTNU ]

Domestic garbage management is an important aspect of a sustainable environment. This paper presents a novel garbage classification and localization system for grasping and placement in the correct recycling bin, integrated on a mobile manipulator. In particular, we first introduce and train a deep neural network (namely, GarbageNet) to detect different recyclable types of garbage in the wild. Secondly, we use a grasp localization method to identify the grasp poses of garbage that need to be collected from the ground. Finally, we perform grasping and sorting of the objects by the mobile robot through a whole-body control framework.
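
In outline, one pick-and-place cycle of such a system reads like the sketch below; every name here is a placeholder standing in for the detection, grasp-localization, and whole-body-control stages described in the abstract, not the authors’ actual API:

```python
def collect_garbage_once(camera, detector, grasp_planner, robot, bins):
    """One pick-and-sort cycle of a garbage-collecting mobile manipulator.
    All callables and attributes here are placeholders for illustration."""
    image = camera.capture()
    detections = detector(image)                  # e.g. class + bounding box per item
    if not detections:
        return False                              # nothing recyclable in view
    target = detections[0]
    grasp_pose = grasp_planner(image, target)     # grasp pose for the item on the ground
    robot.move_and_grasp(grasp_pose)              # whole-body control: base + arm together
    robot.place(bins[target.recyclable_class])    # drop into the matching recycling bin
    return True
```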

[ UCL ]

Thanks Dimitrios!

I am 100% here for telepresence robots with emotive antennas.

[ Pollen Robotics ]

We propose a novel robotic system that can improve its semantic perception during deployment. Our system tightly couples multi-sensor perception and localisation to continuously learn from self-supervised pseudo labels.
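
One common way to realize that idea is to let localization against a prior labeled map generate the training targets for the onboard network. Here is a generic sketch of a single update (placeholder objects throughout, using PyTorch for the learning step; this is not the ASL codebase):

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(image, pose, labeled_map, seg_net, optimizer):
    """Toy self-supervision step: render per-pixel semantic labels from a prior
    labeled 3D map at the estimated camera pose, then treat them as ground truth
    for the onboard segmentation network. `labeled_map.render_labels(pose)` is a
    placeholder returning an (H, W) long tensor of class IDs; `image` is a
    (3, H, W) float tensor."""
    with torch.no_grad():
        pseudo_labels = labeled_map.render_labels(pose)
    logits = seg_net(image.unsqueeze(0))                 # (1, C, H, W)
    loss = F.cross_entropy(logits, pseudo_labels.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```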

[ ASL ]

Vandi Verma is one of the people driving the Mars Perseverance rover, and CMU would like to remind you that she graduated from CMU.

[ CMU ]

Pepper is here to offer a “phygital” experience to shoppers.

I had to look up “phygital,” and it's a combination of physical and digital that is used exclusively in marketing, as far as I can tell, so let us never speak of it again.

[ CMU ]

Researchers conduct early mobility testing on an engineering model of NASA’s Volatiles Investigating Polar Exploration Rover, or VIPER, and fine-tune a newly installed OptiTrack motion tracking camera system at NASA Glenn’s Simulated Lunar Operations Lab.

[ NASA ]

Mmm, sorting is satisfying to watch.

[ Dorabot ]

iRobot seems to be hiring, although you’ll have to brave a pupper infestation.

Clean floors, though!

[ iRobot ]

Shadow Robot's bimanual teleoperation system is now commercially available for a price you almost certainly cannot afford!

Converge Robotics Group offers a haptic option, too.

[ Shadow ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks, oftentimes for many patients and with little time to spare. Personal care robots that brush your hair could provide substantial help and relief.

While the hardware set-up looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team’s approach examined entangled soft fiber bundles as sets of entwined double helices - think classic DNA strands. This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems.

[ MIT CSAIL ]

Sometimes the CIA needs to get creative when collecting intelligence. Charlie, for instance, is a robotic catfish that collects water samples. While never used operationally, the unmanned underwater vehicle (UUV) fish was created to study aquatic robot technology.

[ CIA ]

It's really just a giant drone, even if it happens to be powered by explosions.

[ SpaceX ]

Somatic's robot will clean your bathrooms for 40 hours a week and will cost you just $1,000 a month. It looks like it works quite well, as long as your bathrooms are the normal level of gross as opposed to, you know, super gross.

[ Somatic ]

NASA’s Ingenuity Mars Helicopter successfully completed a fourth, more challenging flight on the Red Planet on April 30, 2021. Flight Test No. 4 aimed for a longer flight time, longer distance, and more image capturing to begin to demonstrate its ability to serve as a scout on Mars. Ingenuity climbed to an altitude of 16 feet (5 meters) before flying south and back for an 872-foot (266-meter) round trip. In total, Ingenuity was in the air for 117 seconds, another set of records for the helicopter.

[ Ingenuity ]

The Perseverance rover is all new and shiny, but let's not forget about Curiosity, still hard at work over in Gale crater.

NASA’s Curiosity Mars rover took this 360-degree panorama while atop “Mont Mercou,” a rock formation that offered a view into Gale Crater below. The panorama is stitched together from 132 individual images taken on April 15, 2021, the 3,090th Martian day, or sol, of the mission. The panorama has been white-balanced so that the colors of the rock materials resemble how they would appear under daytime lighting conditions on Earth. Images of the sky and rover hardware were not included in this terrain mosaic.

[ MSL ]

Happy Star Wars Day from Quanser!

[ Quanser ]

Thanks Arman!

Lingkang Zhang's 12 DOF Raspberry Pi-powered quadruped robot, Yuki Mini, is complete!

Adorable, right? It runs ROS and the hardware is open source as well.

[ Yuki Mini ]

Thanks Lingkang!

Honda and AutoX have been operating a fully autonomous, no safety driver taxi service in China for a couple of months now.

If you thought SF was hard, well, I feel like this is even harder.

[ AutoX ]

This is the kind of drone delivery that I can get behind.

[ WeRobotics ]

The Horizon 2020 EU-funded PRO-ACT project aims to develop and demonstrate cooperation and manipulation capabilities between three robots for assembling an in-situ resource utilisation (ISRU) plant. PRO-ACT will show how robot working agents, or RWAs, can work together collaboratively to achieve a common goal.

[ Pro-Act ]

Thanks Fan!

This brief quadruped simulation video, from Jerry Pratt at IHMC, dates back to 2003 (!).

[ IHMC ]

Extend Robotics' vision is to extend human capability beyond physical presence​. We build affordable robotic arms capable of remote operation from anywhere in the world, using cloud-based teleoperation software​.

[ Extend Robotics ]

Meet Maria Vittoria Minniti, robotics engineer and PhD student at NCCR Digital Fabrication and ETH Zurich. Maria Vittoria makes it possible for simple robots to do complicated things.

[ NCCR Women ]

Thanks Fan!

iCub has been around for 10 years now, and it's almost like it hasn't gotten any taller! This IFRR Robotics Global Colloquium celebrates the past decade of iCub.

[ iCub ]

This CMU RI Seminar is by Cynthia Sung from UPenn, on Dynamical Robots via Origami-Inspired Design.

Origami-inspired engineering produces structures with high strength-to-weight ratios and simultaneously lower manufacturing complexity. This reliable, customizable, cheap fabrication and component assembly technology is ideal for robotics applications in remote, rapid deployment scenarios that require platforms to be quickly produced, reconfigured, and deployed. Unfortunately, most examples of folded robots are appropriate only for small-scale, low-load applications. In this talk, I will discuss efforts in my group to expand origami-inspired engineering to robots with the ability to withstand and exert large loads and to execute dynamic behaviors.

[ CMU RI ]

How can feminist methodologies and approaches be applied and be transformative when developing AI and ADM systems? How can AI innovation and social systems innovation be catalyzed concomitantly to create a positive movement for social change larger than the sum of the data science or social science parts? How can we produce actionable research that will lead to the profound changes needed—from scratch—in the processes to produce AI? In this seminar, 2020 CCSRE Race and Technology Practitioner Fellow Renata Avila discusses ideas and experiences from different disciplines that could help draft a blueprint for a better modeled digital future.

[ CMU RI ]

From what I’ve seen of humanoid robotics, there’s a fairly substantial divide between what folks in the research space traditionally call robotics, and something like animatronics, which tends to be much more character-driven.

There’s plenty of technology embodied in animatronic robotics, but usually under some fairly significant constraints—like, they’re not autonomously interactive, or they’re stapled to the floor and tethered for power, things like that. And there are reasons for doing it this way: namely, dynamic untethered humanoid robots are already super hard, so why would anyone stress themselves out even more by trying to make them into an interactive character at the same time? That would be crazy!

At Walt Disney Imagineering, which is apparently full of crazy people, they’ve spent the last three years working on Project Kiwi: a dynamic untethered humanoid robot that’s an interactive character at the same time. We asked them (among other things) just how they managed to stuff all of the stuff they needed to stuff into that costume, and how they expect to enable children (of all ages) to interact with the robot safely.

Project Kiwi is an untethered bipedal humanoid robot that Disney Imagineering designed not just to walk without falling over, but to walk without falling over with some character. At about 0.75 meters tall, Kiwi is a bit bigger than a NAO and a bit smaller than an iCub, and it’s just about completely self-contained, with the tether you see in the video being used for control rather than for power. Kiwi can manage 45 minutes of operating time, which is pretty impressive considering its size and the fact that it incorporates a staggering 50 degrees of freedom, a requirement for lifelike motion.

This version of the robot is just a prototype, and it sounds like there’s plenty to do in terms of hardware optimization to improve efficiency and add sensing and interactivity. The most surprising thing to me is that this is not a stage robot: Disney does plan to have some future version of Kiwi wandering around and interacting directly with park guests, and I’m sure you can imagine how that’s likely to go. Interaction at this level, where there’s a substantial risk of small children tackling your robot with a vicious high-speed hug, could be a uniquely Disney problem for a robot with this level of sophistication. And it’s one of the reasons they needed to build their own robot—when Universal Studios decided to try out a Steampunk Spot, for example, they had to put a fence plus a row of potted plants between it and any potential hugs, because Spot is very much not a hug-safe robot.  

So how the heck do you design a humanoid robot from scratch with personality and safe human interaction in mind? We asked Scott LaValley, Project Kiwi lead, who came to Disney Imagineering by way of Boston Dynamics and some of our favorite robots ever (including RHex, PETMAN, and Atlas), to explain how they pulled it off.

IEEE Spectrum: What are some of the constraints of Disney’s use case that meant you had to develop your own platform from the ground up?

Scott LaValley: First and foremost, we had to consider the packaging constraints. Our robot was always intended to serve as a bipedal character platform capable of taking on the role of a variety of our small-size characters. While we can sometimes take artistic liberties, for the most part, the electromechanical design had to fit within a minimal character profile to allow the robot to be fully themed with shells, skin, and costuming. When determining the scope of the project, a high-performance biped that matched our size constraints just did not exist. 

Equally important was the ability to move with style and personality, or the "emotion of motion." To really capture a specific character performance, a robotic platform must be capable of motions that range from fast and expressive to extremely slow and nuanced. In our case, this required developing custom high-speed actuators with the necessary torque density to be packaged into the mechanical structure. Each actuator is also equipped with a mechanical clutch and inline torque sensor to support low-stiffness control for compliant interactions and reduced vibration. 

Designing custom hardware also allowed us to include additional joints that are uncommon in humanoid robots. For example, the clavicle and shoulder alone include five degrees of freedom to support a shrug function and an extended configuration space for more natural gestures. We were also able to integrate onboard computing to support interactive behaviors.

What compromises were required to make sure that your robot was not only functional, but also capable of becoming an expressive character?

As mentioned previously, we face serious challenges in terms of packaging and component selection due to the small size and character profile. This has led to a few compromises on the design side. For example, we currently rely on rigid-flex circuit boards to fit our electronics onto the available surface area of our parts without additional cables or connectors. Unfortunately, these boards are harder to design and manufacture than standard rigid boards, increasing complexity, cost, and build time. We might also consider increasing the size of the hip and knee actuators if they no longer needed to fit within a themed costume.

Designing a reliable walking robot is in itself a significant challenge, but adding style and personality to each motion is a new layer of complexity. From a software perspective, we spend a significant amount of time developing motion planning and animation tools that allow animators to author stylized gaits, gestures, and expressions for physical characters. Unfortunately, unlike on-screen characters, we do not have the option to bend the laws of physics and must validate each motion through simulation. As a result, we are currently limited to stylized walking and dancing on mostly flat ground, but we hope to be skipping up stairs in the future!

Of course, there is always more that can be done to better match the performance you would expect from a character. We are excited about some things we have in the pipeline, including a next generation lower body and an improved locomotion planner.

How are you going to make this robot safe for guests to be around?

First let us say, we take safety extremely seriously, and it is a top priority for any Disney experience. Ultimately, we do intend to allow interactions with guests of all ages, but it will take a measured process to get there. Proper safety evaluation is a big part of productizing any Research & Development project, and we plan to conduct playtests with our Imagineers, cast members and guests along the way. Their feedback will help determine exactly what an experience with a robotic character will look like once implemented.

From a design standpoint, we believe that small characters are the safest type of biped for human-robot interaction due to their reduced weight and low center of mass. We are also employing compliant control strategies to ensure that the robot’s actuators are torque-limited and backdrivable. Perception and behavior design may also play a key role, but in the end, we will rely on proper show design to permit a safe level of interaction as the technology evolves.

What do you think other roboticists working on legged systems could learn from Project Kiwi?

We are often inspired by other roboticists working on legged systems ourselves but would be happy to share some lessons learned. Remember that robotics is fundamentally interdisciplinary, and a good team typically consists of a mix of hardware and software engineers in close collaboration. In our experience, however, artists and animators play an equally valuable role in bringing a new vision to life. We often pull in ideas from the character animation and game development world, and while robotic characters are far more constrained than their virtual counterparts, we are solving many of the same problems. Another tip is to leverage motion studies (either through animation, motion capture, and/or simulation tools) early in the design process to generate performance-driven requirements for any new robot.

Now that Project Kiwi has de-stealthed, I hope the Disney Imagineering folks will be able to be a little more open with all of the sweet goo inside of the fuzzy skin of this metaphor that has stopped making sense. Meeting a new humanoid robot is always exciting, and the approach here (with its technical capability combined with an emphasis on character and interaction) is totally unique. And if they need anyone to test Kiwi’s huggability, I volunteer! You know, for science.

As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.

The economic stakes are high: the Commission predicts European public and private investment in AI reaching €20 billion a year this decade, and that was before the up to €134 billion earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission presumes will fund AI, too. Add to that investments in AI made outside the EU that target EU residents, since these rules will apply to any use of AI in the EU, not just to EU-based companies or governments.

Things aren’t going to change overnight: the EU’s AI rules proposal is the result of three years of work by bureaucrats, industry experts, and public consultations and must go through the European Parliament—which requested it—before it can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes. 

The proposal defines four tiers of AI-related activity and differing levels of oversight for each. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with specific exceptions granted by national laws and subject to stricter logging and human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e. facial recognition. The proposal also bans subliminal behavior modification and social scoring applications. The proposal suggests fines of up to 6 percent of commercial violators’ global annual revenue.

The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and account for human oversight. The EU would create a central registry of such systems under the proposed rules and require approval before deployment.

Limited-risk activities, such as the use of chatbots or deepfakes on a website, will have less oversight but will require a warning label, to allow users to opt in or out. Then finally there is a tier for applications judged to present minimal risk.

As often happens when governments propose dense new rulebooks (this one is 108 pages), the initial reactions from industry and civil society groups seem to be more about the existence and reach of industry oversight than the specific content of the rules. One tech-funded think tank told the Wall Street Journal that it could become “infeasible to build AI in Europe.” In turn, privacy-focused civil society groups such as European Digital Rights (EDRi) said in a statement that the “regulation allows too wide a scope for self-regulation by companies.”

“I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,” says Lisa-Maria Neudert, AI governance researcher at the University of Oxford, England, and the Weizenbaum Institut in Berlin, Germany. A 2019 Lloyds Register Foundation poll found that the global public is about evenly split between fear and excitement about AI. 

“I can imagine it might help if you have an experienced large legal team,” to help with compliance, Neudert says, and it may be “a difficult balance to strike” between rules that remain startup-friendly and succeed in reining in mega-corporations.

AI researchers Mona Sloane and Andrea Renda write in VentureBeat that the rules are weaker on monitoring of how AI plays out after approval and launch, neglecting “a crucial feature of AI-related risk: that it is pervasive, and it is emergent, often evolving in unpredictable ways after it has been developed and deployed.”

Europe has already been learning from the impact its sweeping 2018 General Data Protection Regulation (GDPR) had on global tech and privacy. Yes, some outside websites still serve Europeans a page telling them the website owners can’t be bothered to comply with GDPR, so Europeans can’t see any content. But most have found a way to adapt in order to reach this unified market of 448 million people.

“I don’t think we should generalize [from GDPR to the proposed AI rules], but it’s fair to assume that such a big piece of legislation will have effects beyond the EU,” Neudert says. It will be easier for legislators in other places to follow a template than to replicate the EU’s heavy investment in research, community engagement, and rule-writing.

While tech companies and their industry groups may grumble about the need to comply with the incipient AI rules, Register columnist Rupert Goodwin suggests they’d be better off focusing on forming the industry groups that will shape the implementation and enforcement of the rules in the future: “You may already be in one of the industry organizations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.”

The Ingenuity Mars Helicopter has been doing an amazing job flying on Mars. Over the last several weeks it has far surpassed its original goal of proving that flight on Mars was simply possible, and is now showing how such flights are not only practical but also useful.

To that end, NASA has decided that the little helicopter deserves to not freeze to death quite so soon, and the agency has extended its mission for at least another month, giving it the opportunity to scout a new landing site to keep up with Perseverance as the rover starts its own science mission.

Some quick context: the Mars Helicopter mission was originally scheduled to last 30 days, and we’re currently a few weeks into that. The helicopter has flown successfully four times; the most recent flight was on April 30, and was a 266 meter round-trip at 5 meters altitude that took 117 seconds. Everything has worked nearly flawlessly, with (as far as we know) the only hiccup being a minor software bug that has a small chance of preventing the helicopter from entering flight mode. This bug has kicked in once, but JPL just tried doing the flight again, and then everything was fine. 

In a press conference last week, NASA characterized Ingenuity’s technical performance as “exceeding all expectations,” and the helicopter met all of its technical goals (and then some) earlier than anyone expected. Originally, that wouldn’t have made a difference, and Perseverance would have driven off and left Ingenuity behind no matter how well it was performing. But some things have changed, allowing Ingenuity to transition from a tech demo into an extended operational demo, as Jennifer Trosper, Perseverance deputy project manager, explained:

“We had not originally planned to do this operational demo with the helicopter, but two things have happened that have enabled us to do it. The first thing is that originally, we thought that we’d be driving away from the location that we landed at, but the [Perseverance] science team is actually really interested in getting initial samples from this region that we’re in right now. Another thing that happened is that the helicopter is operating in a fantastic way. The communications link is overperforming, and even if we move farther away, we believe that the rover and the helicopter will still have strong communications, and we’ll be able to continue the operational demo.”

The communications link was one of the original reasons why Ingenuity’s mission was going to be capped at 30 days. It’s a little bit counter-intuitive, but it turns out that the helicopter simply cannot keep up with the rover, which Ingenuity relies on for communication with Earth. Ingenuity is obviously faster in flight, but once you factor in recharge time, if the rover is driving a substantial distance, the helicopter would not be able to stay within communications range.

And there’s another issue with the communications link: as a tech demo, Ingenuity’s communication system wasn’t tested to make sure that it can’t be disrupted by electronic interference generated by other bits and pieces of the Perseverance rover. Consequently, Ingenuity’s 30-day mission was planned such that when the helicopter was in the air, Perseverance was perfectly stationary. This is why we don’t have video where Perseverance pans its cameras to follow the helicopter—using those actuators might have disrupted the communications link.

Going forward, Perseverance will be the priority, not Ingenuity. The helicopter will have to do its best to stay in contact with the rover as it starts focusing on its own science mission. Ingenuity will have to stay in range (within a kilometer or so) and communicate when it can, even if the rover is busy doing other stuff. This extended mission will initially last 30 more days, and if it turns out that Ingenuity can’t do what it needs to do without needing more from Perseverance, well, that’ll be the end of the Mars helicopter mission. Even best case, it sounds like we won’t be getting any more pictures of Ingenuity in flight, since planning that kind of stuff took up a lot of the rover’s time. 

With all that in mind, here’s what NASA says we should be expecting:

“With short drives expected for Perseverance in the near term, Ingenuity may execute flights that land near the rover’s current location or its next anticipated parking spot. The helicopter can use these opportunities to perform aerial observations of rover science targets, potential rover routes, and inaccessible features while also capturing stereo images for digital elevation maps. The lessons learned from these efforts will provide significant benefit to future mission planners. These scouting flights are a bonus and not a requirement for Perseverance to complete its science mission.

The cadence of flights during Ingenuity’s operations demonstration phase will slow from once every few days to about once every two or three weeks, and the forays will be scheduled to avoid interfering with Perseverance’s science operations. The team will assess flight operations after 30 sols and will complete flight operations no later than the end of August.”

Specifically, Ingenuity spent its recent Flight 4 scouting for a new airfield, and Flight 5 will be the first flight of this new operations phase: the helicopter will attempt to land at that new airfield, a spot about 60 meters south of its current position where it has never touched down before. NASA expects that there might be one or two flights after this, but nobody’s quite sure how it’s going to go, and NASA wasn’t willing to speculate about what will happen longer term.

It’s important to remember that all of this is happening in the context of Ingenuity being a 30-day tech demo. The hardware on the helicopter was designed with that length of time in mind, not a multi-month mission. NASA said during the press conference that the landing gear is probably good for at least 100 landings, and that the solar panel and sun angle will be able to meet energy requirements for at least a few months. The expectation is that with enough day/night thermal cycles, a solder joint will eventually snap, rendering Ingenuity inoperable in some way. Nobody knows when that will happen, but again, this is a piece of hardware designed to function for 30 days, and despite JPL’s legacy of ridiculously long-lived robotic explorers, we should adjust our expectations accordingly. MiMi Aung, Mars Helicopter project manager, has it exactly right when she says that “we will be celebrating each day that Ingenuity survives and operates beyond that original window.” We’re just glad that there will be more to celebrate going forward.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-5, 2021 – [Online Event] RoboCup 2021 – June 22-28, 2021 – [Online Event] DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA IROS 2021 – September 27-1, 2021 – [Online Event] ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.

Each one of these is custom-built, and you can pre-order one now.

[ Roam Robotics ]

Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.

[ NASA ]

This right here, the future of remote work.

The robot will run you about $3,000 USD.

[ VStone ] via [ Robotstart ]

Texas-based aerospace robotics company Wilder Systems enhanced their existing automation capabilities to aid in the fight against COVID-19. Their recent development of a robotic testing system is both increasing capacity for COVID-19 testing and delivering faster results to individuals. The system conducts saliva-based PCR tests, considered the gold standard for COVID testing. Based on a protocol developed by Yale and authorized by the FDA, the system does not need additional approvals. This flexible, modular system can run up to 2,000 test samples per day and can be deployed anywhere standard electric power is available.

[ ARM Institute ]

Tests show that people do not like being nearly hit by drones.

But seriously, this research has resulted in some useful potential lessons for deploying drones in areas where they have a chance of interacting with humans.

[ Paper ]

The Ingenuity helicopter made history on April 19, 2021, with the first powered, controlled flight of an aircraft on another planet. How do engineers talk to a helicopter all the way out on Mars? We’ll hear about it from Nacer Chahat of NASA’s Jet Propulsion Laboratory, who worked on the helicopter’s antenna and telecommunication system.

[ NASA ]

A team of scientists from the Max Planck Institute for Intelligent Systems has developed a system for fabricating miniature robots building block by building block, so that each robot functions exactly as required.

[ Max Planck Institute ]

Well this was inevitable, wasn't it?

The pilot regained control and the drone was fine, though.

[ PetaPixel ]

NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on April 25, 2021, by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. As expected, the helicopter flew out of the camera’s field of view while completing a flight plan that took it 164 feet (50 meters) downrange of the landing spot. Keep watching, though: the helicopter returns to stick the landing. Top speed for the flight was about 2 meters per second, or about 4.5 miles per hour.

[ NASA ]

U.S. Naval Research Laboratory engineers recently demonstrated Hybrid Tiger, an electric unmanned aerial vehicle (UAV) with multi-day flight endurance, at Aberdeen Proving Ground, Maryland.

[ NRL ]

This week's CMU RI Seminar is by Avik De from Ghost Robotics, on “Design and control of insect-scale bees and dog-scale quadrupeds.”

Did you watch the Q&A? If not, you should watch the Q&A.

[ CMU ]

Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. What does it take to make drones navigate as well as or even better than human pilots?

[ GRASP Lab ]

With the current pandemic accelerating the revolution of AI in healthcare, where is the industry heading in the next 5-10 years? What are the key challenges and most exciting opportunities? These questions will be answered by HAI’s Co-Director, Fei-Fei Li and the Founder of DeepLearning.AI, Andrew Ng in this fireside chat virtual event.

[ Stanford HAI ]

Autonomous robots have the potential to serve as versatile caregivers that improve quality of life for millions of people with disabilities worldwide. Yet, physical robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and a lack of tools for benchmarking and training physically assistive robots. In this talk, I will present techniques towards addressing each of these core challenges in robotic caregiving.

[ GRASP Lab ]

What does it take to empower persons with disabilities, and why is educating ourselves on this topic the first step towards better inclusion? Why is developing assistive technologies for people with disabilities important for their integration in society? How do we implement the policies and actions required to enable everyone to live their lives fully? ETH Zurich and the Global Shapers Zurich Hub hosted an online dialogue on the topic “For a World without Barriers: Removing Obstacles in Daily Life for People with Disabilities.”

[ Cybathlon ]

Drone autonomy is getting more and more impressive, but we’re starting to get to the point where it’s significantly more difficult to improve on existing capabilities. Companies like Skydio are selling (for cheap!) commercial drones that have no problem dynamically planning paths around obstacles at high speed while tracking you, which is pretty amazing, and they can also autonomously create 3D maps of structures. In both of these cases, there’s a human indirectly in the loop, either saying “follow me” or “map this specific thing.” In other words, the level of autonomous flight is very high, but there’s still some reliance on a human for high-level planning. Which, for what Skydio is doing, is totally fine and the right way to do it.

Exyn, a drone company with roots in the GRASP Lab at the University of Pennsylvania, has been developing drones for inspections of large unstructured spaces like mines. This is an incredibly challenging environment, being GPS-denied, dark, dusty, and dangerous, to name just a few of the challenges. While Exyn’s lidar-equipped drones have been autonomous for a while now, they’re now able to operate without any high-level planning from a human at all. At this level of autonomy, which Exyn calls Level 4A, the operator simply defines a volume for the drone to map, and then from takeoff to landing, the drone will methodically explore the entire space and generate a high resolution map all by itself, even if it goes far beyond communications range to do so.

Let’s be specific about what “Level 4A” autonomy means, because until now, there haven’t really been established autonomy levels for drones. And the reason that there are autonomy levels for drones all of a sudden is because Exyn just went ahead and invented some. To be fair, Exyn took inspiration from the SAE autonomy levels, so there is certainly some precedent here, but it’s still worth keeping in mind that this whole system is for the moment just something that Exyn came up with by themselves and applied to their own system. They did put a bunch of thought into it, at least, and you can read a whitepaper on the whole thing here.

Graphic: Exyn

A couple things about exactly what Exyn is doing: Their drone, which carries lights, a GoPro, some huge computing power, an even huger battery, and a rotating Velodyne lidar, is able to operate completely independently of a human operator or really any kind of external inputs at all. No GPS, no base station, no communications, no prior understanding of the space, nothing. You tell the drone where you want it to map, and it’ll take off and then decide on its own where and how to explore the space that it’s in, building up an obscenely high resolution lidar map as it goes and continuously expanding that map until it runs out of unexplored areas, at which point it’ll follow the map back home and land itself. “When we’re executing the exploration,” Exyn CTO Jason Derenick tells us, “what we’re doing is finding the boundary between the visible and explored space, and the unknown space. We then compute viewpoint candidates, which are locations along that boundary where we can infer how much potential information our sensors can gain, and then the system selects the one with the most opportunity for seeing as much of the environment as possible.”
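Exyn hasn’t published its planner, but the process Derenick describes (find the frontier between mapped and unknown space, score candidate viewpoints by how much new information the sensors could gain there, then fly to the best one) lines up with classic frontier-based exploration. Below is a minimal, hypothetical sketch of that selection step on a voxel occupancy grid; the grid encoding, sensor radius, and gain heuristic are assumptions for illustration, not Exyn’s code.

```python
# Hypothetical sketch of frontier-based viewpoint selection on a 3D occupancy
# grid (0 = explored free space, 1 = occupied, -1 = unknown). Not Exyn's code.
import numpy as np

def find_frontiers(grid):
    """Return indices of free voxels that border at least one unknown voxel."""
    frontiers = []
    for idx in map(tuple, np.argwhere(grid == 0)):
        for axis in range(3):
            for step in (-1, 1):
                nb = list(idx)
                nb[axis] += step
                nb = tuple(nb)
                if all(0 <= nb[d] < grid.shape[d] for d in range(3)) and grid[nb] == -1:
                    frontiers.append(idx)
                    break
            else:
                continue
            break
    return frontiers

def expected_gain(grid, viewpoint, sensor_range=5):
    """Crude information gain: count unknown voxels within sensor range."""
    lo = np.maximum(np.array(viewpoint) - sensor_range, 0)
    hi = np.minimum(np.array(viewpoint) + sensor_range + 1, grid.shape)
    window = grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return int(np.sum(window == -1))

def next_viewpoint(grid):
    """Pick the frontier voxel whose neighborhood promises the most new map."""
    frontiers = find_frontiers(grid)
    if not frontiers:
        return None  # nothing left to explore: time to fly home and land
    return max(frontiers, key=lambda v: expected_gain(grid, v))

# Toy usage: a mostly unknown 20 m cube at 1 m resolution with one known corner.
grid = -np.ones((20, 20, 20), dtype=int)
grid[:5, :5, :5] = 0
print(next_viewpoint(grid))
```

In a real system the gain estimate would account for occlusions and the cost of actually flying to each candidate, but the select-the-most-informative-frontier loop is the core idea.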

Flying at up to 2 m/s, Exyn’s drone can explore 16 million cubic meters in a single flight (about nine football stadiums worth of volume), and if the area you want it to explore is larger than that, it can go back out for more rounds after a battery swap.

It’s important to understand, though, what the limitations of this drone’s autonomy are. We’re told that it can sense things like power lines, although probably not something narrow like fishing wire. Which so far hasn’t been a problem, because it’s an example of a “pathological” obstacle—something that is not normal, and would typically only be encountered if it was placed there specifically to screw you up. Dynamic obstacles (like humans or vehicles) moving at walking speed are also fine. Dust can be tricky at times, although the drone can identify excessive amounts of dust in the air, and it’ll wait a bit for the dust to settle before updating its map.

Photo: Exyn

The commercial applications of a totally hands-off system that’s able to autonomously generate detailed lidar maps of unconstrained spaces in near real-time are pretty clear. But what we’re most excited about are the potential search and rescue use cases, especially when Exyn starts to get multiple drones working together collaboratively. You can imagine a situation in which you need to find a lost person in a cave or a mine, and you unload a handful of drones at the entrance, tell them “go explore until you find a human,” and then just let them do their thing.

To make this happen, though, Exyn will need to add an additional level of understanding to its system, which Derenick says the company is working on now. That means both recognizing what objects are and reasoning about them: what an object represents in a more abstract sense, and how things like dynamic obstacles are likely to move. Autonomous cars do this routinely, but for a drone with severe size and power constraints it’s a much bigger challenge, although one that I’m pretty sure Exyn will figure out.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-5, 2021 – [Online Event] RoboCup 2021 – June 22-28, 2021 – [Online Event] DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Within the last four days, Ingenuity has flown twice (!) on Mars.

This is an enhanced video showing some of the dust that the helicopter kicked up as it took off:

Data is still incoming for the second flight, but we know that it went well, at least:

[ NASA ]

Can someone who knows a lot about HRI please explain to me why I'm absolutely fascinated by Flatcat?

You can now back Flatcat on Kickstarter for a vaguely distressing $1,200.

[ Flatcat ]

Digit navigates a novel indoor environment without pre-mapping or markers, with dynamic obstacle avoidance. Waypoints are defined relative to the global reference frame determined at power-on. No bins were harmed in filming.

[ Agility Robotics ]

The Yellow Drum Machine popped up on YouTube again this week for some reason. And it's still one of my favorite robots of all time.

[ Robotshop ]

This video shows results of high-speed autonomous flight through the trees of a forest. Path planning uses a trajectory library with pre-established correspondences for collision checking. Decisions are made in 0.2-0.3 ms, enabling flight at speeds of 10 m/s. No prior map is used.

[ Near Earth ]
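The “pre-established correspondences” here are, as far as we can tell, the standard trick for making planning this fast: because the candidate trajectories in the library are fixed ahead of time, each one can be mapped offline to the set of grid cells it sweeps through, so the online decision reduces to an occupancy lookup and a scoring pass. Here’s a hypothetical sketch of that idea; the grid resolution, candidate set, and cost function are invented for illustration and aren’t Near Earth Autonomy’s implementation.

```python
# Hypothetical sketch of trajectory-library planning: each candidate trajectory
# is precomputed offline along with the grid cells it sweeps, so the online
# decision is only a cheap occupancy lookup per candidate. Illustration only.
import numpy as np

GRID_RES = 0.5  # meters per occupancy-grid cell (assumed)

def precompute_swept_cells(trajectory, radius=0.4):
    """Offline: map a trajectory (N x 3 waypoints) to the grid cells it occupies."""
    cells = set()
    pad = int(np.ceil(radius / GRID_RES))
    for point in trajectory:
        base = np.floor(point / GRID_RES).astype(int)
        for dx in range(-pad, pad + 1):
            for dy in range(-pad, pad + 1):
                for dz in range(-pad, pad + 1):
                    cells.add(tuple(base + np.array([dx, dy, dz])))
    return cells

def pick_trajectory(library, occupied_cells, goal):
    """Online: keep collision-free candidates, pick the one ending nearest the goal."""
    best, best_cost = None, float("inf")
    for trajectory, swept in library:
        if swept & occupied_cells:   # pre-established correspondence lookup
            continue                 # this candidate sweeps an occupied cell
        cost = float(np.linalg.norm(trajectory[-1] - goal))
        if cost < best_cost:
            best, best_cost = trajectory, cost
    return best

# Toy usage: three straight-line candidates fanning out from the origin.
candidates = [np.linspace([0, 0, 0], end, 20) for end in
              ([10, 0, 2], [8, 5, 2], [8, -5, 2])]
library = [(t, precompute_swept_cells(t)) for t in candidates]  # done offline
occupied = {(10, 0, 2)}  # live sensor data blocks the straight-ahead candidate
best = pick_trajectory(library, occupied, goal=np.array([10, 1, 2]))
print("chosen trajectory endpoint:", best[-1])
```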

We present ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm. Our framework is built upon a physics engine and enables realistic interactions with objects while navigating through scenes and performing tasks.

[ Allen Institute ]

Well this is certainly one of the more unusual multirotor configurations I've ever seen.

[ KAIST ]

Thailand’s Mahidol University and the Institute of Molecular Biosciences chose ABB's YuMi cobot & IRB 1100 robot to work together to fast-track Covid-19 vaccine development. The robots quickly perform repetitive tasks such as unscrewing vials and transporting them to test stations, protecting human workers from injury or harm.

[ ABB ]

Skydio's 3D scan functionality is getting more and more impressive.

[ Skydio ]

With more than 50 service locations across Europe, Stadler Service is focused on increasing train availability, reliability, and safety. ANYbotics is partnering with Stadler Service to explore the potential of mobile robots to increase the efficiency and quality of routine inspection and maintenance of rolling stock.

[ ANYbotics ]

Inspection engineers at Kiwa Inspecta used the Elios 2 to inspect a huge decommissioned oil cavern. The inspection would have required six months and a million euros if conducted manually, but with the Elios 2 it was completed in just a few days at a significantly lower cost.

[ Flyability ]

RightHand Robotics builds a data-driven intelligent piece-picking platform, providing flexible and scalable automation for predictable order fulfillment. RightPick™ 3 is the newest generation of our award-winning autonomous, industrial robot system.

[ RightHand Robotics ]

NASA's Unmanned Aircraft Systems Traffic Management project, or UTM, is working to safely integrate drones into low-altitude airspace. In 2019, the project completed its final phase of flight tests. The research results are being transferred to the Federal Aviation Administration, who will continue development of the UTM system and implement it over time.

[ NASA ]

At the Multi-Robot Planning and Control lab, our research vision is to build multi-robot systems that are capable of acting competently in the real world. We study, develop and combine automated planning, coordination, and control methods to achieve this capability. We find that some of the most interesting basic research questions derive from the problem features and constraints imposed by real-world applications. This video illustrates some of these research questions.

[ Örebro ]

Thanks Fan!

The University of Texas at Austin’s Cockrell School of Engineering and College of Natural Sciences are partnering on life-changing research in artificial intelligence and robotics—ensuring that UT continues to lead the way in launching tomorrow’s technologies.

[ UT Robotics ]

Thanks Fan!

Over the past ten years, various robotics and remote technologies have been introduced at the Fukushima sites for tasks such as inspection, rubble removal, and sampling, showing successes and revealing challenges. Successful decommissioning will rely on the development of highly reliable robotic technologies that can be deployed rapidly and efficiently into the sites. The discussion will focus on the decommissioning challenges and the robotic technologies that have been used at Fukushima. The panel will conclude with the lessons learned from Fukushima’s past 10 years of experience and how robotics must prepare to respond to future disasters.

[ IFRR ]

Neural control might one day help patients operate robotic prosthetics by thought. Now researchers find that with the help of physical therapy, patients could accomplish more with such neural control than scientists previously knew was possible.

Around the world, research teams are developing lower-body exoskeletons to help people walk. These devices are essentially walking robots users can strap to their legs to help them move.

These exoskeletons can often automatically perform preprogrammed cyclic motions such as walking. However, when it comes to helping patients with more complex activities, patients should ideally be able to control these robotic devices by thought—for example, using sensors attached to the legs that detect the bioelectric signals the brain sends to the muscles to tell them to move.

“Autonomous control works really well for walking, but when it comes to more than just walking, such as playing tennis or freestyle dancing, it'd be good to have neural control,” says study senior author Helen Huang, a biomedical engineer at North Carolina State University.

One question when it comes to neural control over robotic prosthetics is how well the nervous systems of patients can still activate the residual muscles left in a limb after amputation.

"During surgery, the original structures of muscles are changed," Huang says. "We've found that people can activate these residual muscles, but the way they contract them is different from that of an able-bodied person, so they need training on how to use these muscles."

In the new study, Huang and her colleagues had an amputee with a neurally controlled powered prosthetic ankle train with a physical therapist to practice tasks that are challenging with typical prostheses. The prosthetic received bioelectric signals from two residual calf muscles responsible for controlling ankle motion.
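The paper describes direct neural control rather than the fully autonomous controllers most powered prostheses use, and the standard way to implement that idea is proportional myoelectric control: rectify and smooth the EMG from each residual muscle, then map the resulting activation levels to a joint torque command. The sketch below shows only that general scheme; the sample rate, filter, gains, and muscle names are assumptions for illustration, not the controller Huang and Fleming actually used.

```python
# Hypothetical sketch of proportional myoelectric ankle control: rectify and
# low-pass filter the EMG from two residual calf muscles, then map the
# difference of their activations to an ankle torque command. Illustration
# only; gains and filtering are assumed, not the study's actual controller.
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000  # EMG sample rate in Hz (assumed)

def emg_envelope(raw_emg, cutoff_hz=6.0):
    """Rectify the raw EMG and low-pass filter it to get an activation envelope."""
    b, a = butter(2, cutoff_hz / (FS / 2), btype="low")
    return lfilter(b, a, np.abs(raw_emg))

def ankle_torque(gastroc_emg, tibialis_emg, k_plantar=40.0, k_dorsi=25.0):
    """Map antagonist muscle activations to a net ankle torque command (N·m)."""
    plantarflexion = k_plantar * emg_envelope(gastroc_emg)
    dorsiflexion = k_dorsi * emg_envelope(tibialis_emg)
    return plantarflexion[-1] - dorsiflexion[-1]  # latest sample drives the motor

# Toy usage with synthetic signals standing in for recorded EMG.
t = np.arange(0, 1.0, 1 / FS)
gastroc = 0.4 * np.random.randn(t.size) * (t > 0.5)   # "muscle fires" after 0.5 s
tibialis = 0.05 * np.random.randn(t.size)
print(f"commanded ankle torque: {ankle_torque(gastroc, tibialis):.1f} N·m")
```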

The 57-year-old volunteer lost his left leg about halfway between the knee and the ankle. He had five training sessions with a physical therapist, each lasting about two hours, over the course of two-and-a-half weeks. The physical therapist helped provide the volunteer feedback on what the joint was doing, and trained him “first on joint-level movements, and then full-body movements and full-body coordination,” says study lead author Aaron Fleming.

After training, the volunteer could perform a variety of tasks he found difficult before. These included going from sitting to standing without any external assistance, or squatting to pick up something off the ground without compensating for the motion with other body parts.

In addition, improvements in the volunteer's stability exceeded expectations, whether he was standing or moving. Amputees wearing lower-limb robotic prostheses often experience less stability while standing, as it is difficult for the machines to predict any disturbances or the ways in which a person might anticipate and compensate for such disruptions.

“That stability and subtle control while standing was pretty surprising,” says Fleming, a biomedical engineer at North Carolina State University.

The researchers now aim to examine more patients with robotic prosthetics and test them with more tasks, such as avoiding obstacles. They also want to investigate what the nervous systems of these volunteers might be doing during such training. “Are they restoring their original neural pathways?” Huang asks.

The scientists detailed their findings in a paper published this month in the journal Wearable Technologies.

Over the last few weeks, we’ve posted several articles about the next generation of warehouse manipulation robots designed to handle the non-stop stream of boxes that provide the foundation for modern ecommerce. But once these robots take boxes out of the back of a trailer or off of a pallet, there are yet more robots ready to autonomously continue the flow through a warehouse or distribution center. One of the beefiest of these autonomous mobile robots is the OTTO 1500, which is called the OTTO 1500 because (you guessed it) it can handle 1500 kg of cargo. Plus another 400kg of cargo, for a total of 1900 kg of cargo. Yeah, I don’t get it either. Anyway, it’s undergone a major update, which is a good excuse for us to ask OTTO CTO Ryan Gariepy some questions about it.

The earlier version, also named OTTO 1500, has over a million hours of real-world operation, which is impressive. Even more impressive is being able to move that much stuff that quickly without being a huge safety hazard in warehouse environments full of unpredictable humans. Although, that might become less of a problem over time, as other robots take over some of the tasks that humans have been doing. OTTO Motors and Clearpath Robotics have an ongoing partnership with Boston Dynamics, and we fully expect to see these AMRs hauling boxes for Stretch in the near future.

For a bit more, we spoke with OTTO CTO Ryan Gariepy via email.

IEEE Spectrum: What are the major differences between today’s OTTO 1500 and the one introduced six years ago, and why did you decide to make those changes?

Ryan Gariepy: Six years isn’t a long shelf life for an industrial product, but it’s a lifetime in the software world. We took the original OTTO 1500 and stripped it down to the chassis and drivetrain, and re-built it with more modern components (embedded controller, state-of-the-art sensors, next-generation lithium batteries, and more). But the biggest difference is in how we’ve integrated our autonomous software and our industrial safety systems. Our systems are safe throughout the entirety of the vehicle dynamics envelope from straight line motion to aggressive turning at speed in tight spaces. It corners at 2m/s and has 60% more throughput. No “simple rectangular” footprints here! On top of this, the entire customization, development, and validation process is done in a way which respects that our integration partners need to be able to take advantage of these capabilities themselves without needing to become experts in vehicle dynamics. 

As for “why now,” we’ve always known that an ecosystem of new sensors and controllers was going to emerge as the world caught on to the potential of heavy-load AMRs. We wanted to give the industry some time to settle out—making sure we had reliable and low-cost 3D sensors, for example, or industrial-grade fanless computers that can still mount a reasonable GPU, or modular battery systems that are now built in view of new certification requirements. And, possibly most importantly, partners who see the promise of the market enough to accommodate our feedback in their product roadmaps.

How has the reception of the new version differed from that of the original OTTO 1500?
 
That’s like asking the difference between the public reception to the introduction of the first iPod in 2001 and the first iPhone in 2007. When we introduced our first AMR, very few people had even heard of them, let alone purchased one before. We spent a great deal of time educating the market on the basic functionality of an AMR: What it is and how it works kind of stuff. Today’s buyers are way more sophisticated, experienced, and approach automation from a more strategic perspective. What was once a tactical purchase to plug a hole is now part of a larger automation initiative. And while the next generation of AMRs closely resemble the original models from the outside, the software functionality and integration capabilities are night and day.

What’s the most valuable lesson you’ve learned?

We knew that our customers needed incredible uptime: 365 days, 24/7 for 10 years is the typical expectation. Some of our competitors have AMRs working in facilities where they can go offline for a few minutes or a few hours without any significant repercussions to the workflow. That’s not the case with our customers, where any stoppage at any point means everything shuts down. And, of course, Murphy’s law all but guarantees that it shuts down at 4:00 a.m. on Saturday, Japan Standard Time. So the humbling lesson wasn’t knowing that our customers wanted maintenance service levels with virtually no down time, the humbling part was the degree of difficulty in building out a service organization as rapidly as we rolled out customer deployments. Every customer in a new geography needed a local service infrastructure as well. Finally, service doesn’t mean anything without spare parts availability, which brings with it customs and shipping challenges. And, of course, as a Canadian company, we need to build all of that international service and logistics infrastructure right from the beginning. Fortunately, the groundwork we’d laid with Clearpath Robotics served as a good foundation for this.

How were you able to develop a new product with COVID restrictions in place?

We knew we couldn’t take an entire OTTO 1500 and ship it to every engineer’s home that needed to work on one, so we came up with the next best thing. We call it a ‘wall-bot’ and it’s basically a deconstructed 1500 that our engineers can roll into their garage. We were pleasantly surprised with how effective this was, though it might be the heaviest dev kit in the robot world. 

Also don’t forget that much of robotics is software driven. Our software development life cycle had already had a strong focus on Gazebo-based simulation for years, since it’s infeasible to give every in-office developer a multi-ton loaded robot to play with, and we’d already had a redundant VPN setup for the office. Finally, we’ve been a remote-work-friendly culture ever since we started adopting telepresence robots and default-on videoconferencing in the pre-OTTO days. In retrospect, it seems like the largest area of improvement for us for the future is how quickly we could get people good home office setups in the middle of a pandemic.


The machine learning industry’s effort to measure itself against a standard yardstick has reached a milestone. Forgive the mixed metaphor, but that’s actually what happened with the release of MLPerf Inference v1.0 today. Using a suite of benchmark neural networks measured under a standardized set of conditions, 1,994 AI systems battled it out to show how quickly they can process new data. Separately, MLPerf tested an energy-efficiency benchmark, with some 850 entrants for that.

This contest was the first following a set of trial runs where the AI consortium MLPerf and its parent organization MLCommons worked out the best measurement criteria. But the big winner in this first official version was the same as it had been in those warm-up rounds—Nvidia.

Entries were combinations of software and systems that ranged in scale from Raspberry Pis to supercomputers. They were powered by processors and accelerator chips from AMD, Arm, Centaur Technology, Edgecortix, Intel, Nvidia, Qualcomm, and Xilinx. And entries came from 17 organizations including Alibaba, Centaur, Dell, Fujitsu, Gigabyte, HPE, Inspur, Krai, Lenovo, Mobilint, Neuchips, and Supermicro.

Despite that diversity, most of the systems used Nvidia GPUs to accelerate their AI functions. There were some other AI accelerators on offer, notably Qualcomm’s AI 100 and Edgecortix’s DNA. But Edgecortix was the only one of the many, many AI accelerator startups to jump in. And Intel chose to show off how well its CPUs did instead of offering up something from its US $2 billion acquisition of AI hardware startup Habana.

Before we get into the details of whose what was how fast, you’re going to need some background on how these benchmarks work. MLPerf is nothing like the famously straightforward Top500 list of the supercomputing great and good, where a single value can tell you most of what you need to know. The consortium decided that the demands of machine learning are just too diverse to be boiled down to something like tera-operations per watt, a metric often cited in AI accelerator research.

First, systems were judged on six neural networks. Entrants did not have to compete on all six, however.

  • BERT, for Bi-directional Encoder Representation from Transformers, is a natural language processing AI contributed by Google. Given a question input, BERT predicts a suitable answer.
  • DLRM, for Deep Learning Recommendation Model, is a recommender system trained to optimize click-through rates. It’s used to recommend items for online shopping and to rank search results and social media content. Facebook was the major contributor of the DLRM code.
  • 3D U-Net is used in medical imaging systems to tell which 3D voxels in an MRI scan are part of a tumor and which are healthy tissue. It’s trained on a dataset of brain tumors.
  • RNN-T, for Recurrent Neural Network Transducer, is a speech recognition model. Given a sequence of speech input, it predicts the corresponding text.
  • ResNet is the granddaddy of image classification algorithms. This round used ResNet-50 version 1.5.
  • SSD, for Single Shot Detector, spots multiple objects within an image. It’s the kind of thing a self-driving car would use to find important things like other cars. This was done using either MobileNet version 1 or ResNet-34 depending on the scale of the system.

Competitors were divided into systems meant to run in a datacenter and those designed for operation at the “edge”—in a store, embedded in a security camera, etc.

Datacenter entrants were tested under two conditions. The first was a situation, called “offline”, where all the data was available in a single database, so the system could just hoover it up as fast as it could handle. The second more closely simulated the real life of a datacenter server, where data arrives in bursts and the system has to be able to complete its work quickly and accurately enough to handle the next burst.

Edge entrants tackled the offline scenario as well. But they also had to handle a test in which they were fed a single stream of data, say a single conversation for language processing, and a multistream situation like one a self-driving car might have to deal with from its multiple cameras.
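If the scenarios are hard to keep straight, it helps to think of them as different load generators driving the same model. The official benchmark uses MLCommons’ LoadGen harness; the toy sketch below is only meant to show the distinction, with an invented per-query latency. Offline throughput is just samples divided by total processing time, while the server scenario draws Poisson-distributed arrivals and asks what fraction of queries finish under a latency bound.

```python
# Toy model of MLPerf's "offline" vs. "server" scenarios. The real benchmark
# uses MLCommons' LoadGen harness; the latency numbers here are invented.
import random

INFERENCE_LATENCY_S = 0.002   # assumed fixed per-query model latency
LATENCY_BOUND_S = 0.015       # assumed server-scenario latency target

def offline_throughput(num_samples):
    """Offline: all data is available at once, so throughput is samples / time."""
    total_time = num_samples * INFERENCE_LATENCY_S
    return num_samples / total_time

def server_fraction_on_time(queries_per_s, num_queries=100_000):
    """Server: Poisson arrivals; report the share of queries under the bound."""
    clock, busy_until, on_time = 0.0, 0.0, 0
    for _ in range(num_queries):
        clock += random.expovariate(queries_per_s)   # next query arrives
        start = max(clock, busy_until)               # wait if the system is busy
        finish = start + INFERENCE_LATENCY_S
        busy_until = finish
        if finish - clock <= LATENCY_BOUND_S:
            on_time += 1
    return on_time / num_queries

print(f"offline: {offline_throughput(10_000):.0f} samples/s")
for rate in (300, 450, 490):
    print(f"server at {rate} queries/s: {server_fraction_on_time(rate):.1%} on time")
```

Run it and the point becomes obvious: the same model that sustains 500 samples per second offline starts blowing through the latency bound as the arrival rate approaches its raw throughput, which is why server numbers are lower than offline numbers in the results below.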

Got all that? No? Well, Nvidia summed it up in this handy slide:

Image: NVIDIA

And finally, the efficiency benchmarks were done by measuring the power draw at the wall plug, averaged over 10 minutes to smooth out the highs and lows caused by processors scaling their voltages and frequencies.
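The efficiency metric in the tables below is just arithmetic on two reported numbers: a watt is a joule per second, so dividing server queries per second by average wall power in watts gives queries per joule. A quick sanity check against the Qualcomm image-classification entry below (numbers taken from the table; the small discrepancy with the published 147.06 presumably comes from rounding in the reported figures):

```python
# Queries per joule from the reported server rate and average wall power.
def queries_per_joule(server_queries_per_s, avg_power_watts):
    # (queries per second) / (joules per second) = queries per joule
    return server_queries_per_s / avg_power_watts

# Qualcomm's image-classification datacenter entry from the table below:
print(round(queries_per_joule(78_502, 534), 2))  # ~147.01; the table lists 147.06
```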

Here, then, are the top performers in each category:

FASTEST

Datacenter (commercially available systems, ranked by server condition)

  • Image Classification: Inspur, NF5488A5; 2x AMD EPYC 7742; 8x NVIDIA A100-SXM-80GB; 271,246 server queries/s; 307,252 offline samples/s
  • Object Detection: Dell EMC, DSS 8440 (10x A100-PCIe-40GB); 2x Intel Xeon Gold 6248 @ 2.50GHz; 10x NVIDIA A100-PCIe-40GB; 8,265 server queries/s; 7,612 offline samples/s
  • Medical Imaging: NVIDIA, DGX-A100 (8x A100-SXM-80GB, TensorRT); 2x AMD EPYC 7742; 8x NVIDIA A100-SXM-80GB; 479.65 server queries/s; 479.65 offline samples/s
  • Speech-to-Text: Dell EMC, DSS 8440 (10x A100-PCIe-40GB); 2x Intel Xeon Gold 6248 @ 2.50GHz; 10x NVIDIA A100-PCIe-40GB; 107,987 server queries/s; 107,269 offline samples/s
  • Natural Language Processing: Dell EMC, DSS 8440 (10x A100-PCIe-40GB); 2x Intel Xeon Gold 6248 @ 2.50GHz; 10x NVIDIA A100-PCIe-40GB; 26,749 server queries/s; 29,265 offline samples/s
  • Recommendation: Inspur, NF5488A5; 2x AMD EPYC 7742; 8x NVIDIA A100-SXM-80GB; 2,432,860 server queries/s; 2,455,010 offline samples/s

Edge (commercially available, ranked by single-stream latency)

  • Image Classification: NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT, Triton); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 0.431369 ms single-stream latency; 1344 multiple streams; 38011.6 offline samples/s
  • Object Detection (small): NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT, Triton); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 0.25581 ms single-stream latency; 1920 multiple streams; 50926.6 offline samples/s
  • Object Detection (large): NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT, Triton); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 1.686353 ms single-stream latency; 56 multiple streams; 985.518 offline samples/s
  • Medical Imaging: NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 19.919082 ms single-stream latency; 60.6073 offline samples/s
  • Speech-to-Text: NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 22.585203 ms single-stream latency; 14007.6 offline samples/s
  • Natural Language Processing: NVIDIA, DGX-A100 (1x A100-SXM-80GB, TensorRT); 2x AMD EPYC 7742; 1x NVIDIA A100-SXM-80GB; 1.708807 ms single-stream latency; 3601.96 offline samples/s

The Most Efficient

Datacenter

  • Image Classification: Qualcomm, Gigabyte R282-Z93 5x QAIC100; 2x AMD EPYC 7282 16-Core; 5x Qualcomm Cloud AI 100 PCIe HHHL; 78,502 server queries/s; 534 W; 147.06 queries/joule
  • Object Detection: Qualcomm, Gigabyte R282-Z93 5x QAIC100; 2x AMD EPYC 7282 16-Core; 5x Qualcomm Cloud AI 100 PCIe HHHL; 1,557 server queries/s; 548 W; 2.83 queries/joule
  • Medical Imaging: NVIDIA, Gigabyte G482-Z54 (8x A100-PCIe, MaxQ, TensorRT); 2x AMD EPYC 7742; 8x NVIDIA A100-PCIe-40GB; 372 server queries/s; 2,261 W; 0.16 queries/joule
  • Speech-to-Text: NVIDIA, DGX Station A100 (4x A100-SXM-80GB, MaxQ, TensorRT); 1x AMD EPYC 7742; 4x NVIDIA A100-SXM-80GB; 43,389 server queries/s; 1,314 W; 33.03 queries/joule
  • Natural Language Processing: NVIDIA, DGX Station A100 (4x A100-SXM-80GB, MaxQ, TensorRT); 1x AMD EPYC 7742; 4x NVIDIA A100-SXM-80GB; 10,203 server queries/s; 1,302 W; 7.83 queries/joule
  • Recommendation: NVIDIA, DGX Station A100 (4x A100-SXM-80GB, MaxQ, TensorRT); 1x AMD EPYC 7742; 4x NVIDIA A100-SXM-80GB; 890,334 server queries/s; 1,342 W; 663.61 queries/joule

Edge (commercially available, ranked by single-stream latency)

  • Image Classification: Qualcomm, AI Development Kit; 1x Qualcomm Snapdragon 865; 1x Qualcomm Cloud AI 100 DM.2e; 0.85 ms single-stream latency; 0.02 joules/stream
  • Object Detection (large): NVIDIA, Jetson Xavier NX (MaxQ, TensorRT); 1x NVIDIA Carmel (ARMv8.2); 1x NVIDIA Xavier NX; 1.67 ms single-stream latency; 0.02 joules/stream
  • Object Detection (small): Qualcomm, AI Development Kit; 1x Qualcomm Snapdragon 865; 1x Qualcomm Cloud AI 100 DM.2; 30.44 ms single-stream latency; 0.60 joules/stream
  • Medical Imaging: NVIDIA, Jetson Xavier NX (MaxQ, TensorRT); 1x NVIDIA Carmel (ARMv8.2); 1x NVIDIA Xavier NX; 819.08 ms single-stream latency; 12.14 joules/stream
  • Speech-to-Text: NVIDIA, Jetson Xavier NX (MaxQ, TensorRT); 1x NVIDIA Carmel (ARMv8.2); 1x NVIDIA Xavier NX; 372.37 ms single-stream latency; 3.45 joules/stream
  • Natural Language Processing: NVIDIA, Jetson Xavier NX (MaxQ, TensorRT); 1x NVIDIA Carmel (ARMv8.2); 1x NVIDIA Xavier NX; 57.54 ms single-stream latency; 0.59 joules/stream

The continuing lack of entrants from AI hardware startups is glaring at this point, especially considering that many of them are members of MLCommons. When I’ve asked certain startups about it, they usually answer that the best measure of their hardware is how it runs their potential customers’ specific neural networks rather than how well they do on benchmarks.

That seems fair, of course, assuming these startups can get the attention of potential customers in the first place. It also assumes that customers actually know what they need.

“If you’ve never done AI, you don’t know what to expect; you don’t know what performance you want to hit; you don’t know what combinations you want with CPUs, GPUs, and accelerators,” says Armando Acosta, product manager for AI, high-performance computing, and data analytics at Dell Technologies. MLPerf, he says, “really gives customers a good baseline.”

Due to author error, a mixed metaphor was labeled as a pun in an earlier version of this post.

Kate Darling is an expert on human robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives. 

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant. 

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.

iRobot, the company that makes the most popular version (the Roomba) soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie!

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-5, 2021 – [Online Event] RoboCup 2021 – June 22-28, 2021 – [Online Event] DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

Researchers from the Biorobotics Lab in the School of Computer Science’s Robotics Institute at Carnegie Mellon University tested the hardened underwater modular robot snake (HUMRS) last month in the pool, diving the robot through underwater hoops, showing off its precise and smooth swimming, and demonstrating its ease of control.

The robot's modular design allows it to adapt to different tasks, whether squeezing through tight spaces under rubble, climbing up a tree or slithering around a corner underwater. For the underwater robot snake, the team used existing watertight modules that allow the robot to operate in bad conditions. They then added new modules containing the turbines and thrusters needed to maneuver the robot underwater.

[ CMU ]

Robots are learning how not to fall over after stepping on your foot and kicking you in the shin.

[ B-Human ]

Like boot prints on the Moon, NASA's OSIRIS-REx spacecraft left its mark on asteroid Bennu. Now, new images—taken during the spacecraft's final fly-over on April 7, 2021—reveal the aftermath of the historic Touch-and-Go (TAG) sample acquisition event from Oct. 20, 2020.

[ NASA ]

In recognition of National Robotics Week, Conan O'Brien thanks one of the robots that works for him.

[ YouTube ]

The latest from Wandercraft's self-balancing Atalante exo.

[ Wandercraft ]

Stocking supermarket shelves is one of those things that's much more difficult than it looks for robots, involving in-hand manipulation, motion planning, vision, and tactile sensing. Easy for humans, but robots are getting better.

[ Article ]

Thanks Marco!

Draganfly drone spraying Varigard disinfectant at the Smoothie King stadium. Our drone sanitization spraying technology is up to 100% more efficient and effective than conventional manual spray sterilization processes.

[ Draganfly ]

Baubot is a mobile construction robot that can do pretty much everything, apparently.

I’m pretty skeptical of robots like these, especially ones that bill themselves as platforms that can be monetized by third-party developers. From what we’ve seen, the most successful robots instead focus on doing one thing very well.

[ Baubot ]

In this demo, a remote operator sends an unmanned ground vehicle on an autonomous inspection mission via Clearpath’s web-based Outdoor Navigation Software.

[ Clearpath ]

Aurora’s Odysseus aircraft is a high-altitude pseudo-satellite that can change how we use the sky. At a fraction of the cost of a satellite and powered by the sun, Odysseus offers vast new possibilities for those who need to stay connected and informed.

[ Aurora ]

This video from 1999 discusses the soccer robot research activities at Carnegie Mellon University. CMUnited, the team of robots developed by Manuela Veloso and her students, won the small-size competition in both 1997 and 1998.

[ CMU ]

Thanks Fan!

This video gives an overview of our participation in the DARPA Subterranean Challenge, with a focus on the urban circuit that took place Feb. 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ Norlab ]

In today’s most advanced warehouses, Magazino’s autonomous robot TORU works side by side with human colleagues. The robot is specialized in picking, transporting, and stowing objects like shoe boxes in e-commerce warehouses.

[ Magazino ]

A look at the Control Systems Lab at the National Technical University of Athens.

[ CSL ]

Thanks Fan!

Doug Weber of MechE and the Neuroscience Institute discusses his group’s research on harnessing the nervous system's ability to control not only our bodies, but the machines and prostheses that can enhance our bodies, especially for those with disabilities.

[ CMU ]

Mark Yim, Director of the GRASP Lab at UPenn, gives a talk on “Is Cost Effective Robotics Interesting?” Yes, yes it is.

Robotic technologies have shown the capability to do amazing things. But many of those things are too expensive to be useful in any real sense. Cost reduction has often been shunned by research engineers and scientists in academia as “just engineering.” For robotics to make a larger impact on society the cost problem must be addressed.

[ CMU ]

There are all kinds of “killer robots” debates going on, but if you want an informed, grounded, nuanced take on AI and the future of war-fighting, you want to be watching debates like these instead. Professor Rebecca Crootof speaks with Brigadier General Patrick Huston, Assistant Judge Advocate General for Military Law and Operations, at Duke Law School's 26th Annual National Security Law conference.

[ Lawfire ]

This week’s Lockheed Martin Robotics Seminar is by Julie Adams from Oregon State, on “Human-Collective Teams: Algorithms, Transparency, and Resilience.”

Biological inspiration for artificial systems abounds. The science to support robotic collectives continues to emerge based on their biological inspirations, spatial swarms (e.g., fish and starlings) and colonies (e.g., honeybees and ants). Developing effective human-collective teams requires focusing on all aspects of the integrated system development. Many of these fundamental aspects have been developed independently, but our focus is an integrated development process for these complex research questions. This presentation will focus on three aspects: algorithms, transparency, and resilience for collectives.

[ UMD ]

Human-robot interaction goes both ways. You’ve got robots understanding (or attempting to understand) humans, as well as humans understanding (or attempting to understand) robots. Humans, in my experience, are virtually impossible to understand even under the best of circumstances. But going the other way, robots have all kinds of communication tools at their disposal. Lights, sounds, screens, haptics—there are lots of options. That doesn’t mean that robot to human (RtH) communication is easy, though, because the ideal communication modality is something that is low cost and low complexity while also being understandable to almost anyone.

One good option for something like a collaborative robot arm can be to use human-inspired gestures (since it doesn’t require any additional hardware), although it’s important to be careful when you start having robots doing human stuff, because it can set unreasonable expectations if people think of the robot in human terms. In order to get around this, roboticists from Aachen University are experimenting with animal-like gestures for cobots instead, modeled after the behavior of puppies. Puppies!

For robots that are low-cost and appearance-constrained, animal-inspired (zoomorphic) gestures can be highly effective at state communication. We know this because of tails on Roombas:

While this is an adorable experiment, adding tails to industrial cobots is probably not going to happen. That’s too bad, because humans have an intuitive understanding of dog gestures, and this extends even to people who aren’t dog owners. But tails aren’t necessary for something to display dog gestures; it turns out that you can do it with a standard robot arm:

In a paper to appear in IEEE Robotics and Automation Letters (RA-L), first author Vanessa Sauer used puppies to inspire a series of communicative gestures for a Franka Emika Panda arm. Specifically, the arm was to be used in a collaborative assembly task and needed to communicate five states to the human user: greeting the user, prompting the user to take a part, waiting for a new command, signaling an error when the parts container was empty, and shutting down. From the paper:

For each use case, we mirrored the intention of the robot (e.g., prompting the user to take a part) to an intention a dog may have (e.g., encouraging the owner to play). In a second step, we collected gestures that dogs use to express the respective intention by leveraging real-life interaction with dogs, online videos, and literature. We then translated the dog gestures into three distinct zoomorphic gestures by jointly applying the following guidelines:

  • Mimicry. We mimic specific dog behavior and body language to communicate robot states.
  • Exploiting structural similarities. Although the cobot is functionally designed, we exploit certain components to make the gestures more “dog-like,” e.g., the camera corresponds to the dog’s eyes, or the end-effector corresponds to the dog’s snout.
  • Natural flow. We use kinesthetic teaching and record a full trajectory to allow natural and flowing movements with increased animacy.
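To make that pipeline a bit more concrete, here is a minimal sketch of how kinesthetically taught gestures might be mapped to robot states and replayed. This is not the authors’ code: the state names, gesture file paths, and the send_joint_positions callback are assumptions made for illustration, standing in for whatever motion interface the arm actually exposes.

# A minimal sketch, not the authors' implementation: replaying dog-inspired
# gestures that were recorded via kinesthetic teaching. The state names,
# file paths, and the `send_joint_positions` callback are all assumptions
# made for illustration.

import json
import time
from enum import Enum, auto


class CobotState(Enum):
    GREETING = auto()          # greet the user at the start of the task
    PROMPT_TAKE_PART = auto()  # ask the user to take a part
    WAITING = auto()           # idle, waiting for a new command
    CONTAINER_EMPTY = auto()   # error: the parts container is empty
    SHUTTING_DOWN = auto()     # end of the session


# Hypothetical files holding kinesthetically taught trajectories:
# each is a list of {"t": seconds_from_start, "q": [joint angles]}.
GESTURE_FILES = {
    CobotState.GREETING: "gestures/greeting_wag.json",
    CobotState.PROMPT_TAKE_PART: "gestures/nudge_toward_part.json",
    CobotState.WAITING: "gestures/head_tilt_wait.json",
    CobotState.CONTAINER_EMPTY: "gestures/head_down_whine.json",
    CobotState.SHUTTING_DOWN: "gestures/lie_down.json",
}


def play_gesture(state, send_joint_positions):
    """Replay the recorded gesture for `state`, keeping the original
    timing so the motion retains its natural, flowing character."""
    with open(GESTURE_FILES[state]) as f:
        waypoints = json.load(f)

    start = time.monotonic()
    for wp in waypoints:
        # Sleep until this waypoint's recorded timestamp, then command it.
        delay = wp["t"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_joint_positions(wp["q"])

Replaying the stored timestamps, rather than re-timing the motion, is what would preserve the “natural flow” the authors describe; on a real Panda, send_joint_positions would be swapped for the arm’s actual control interface.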

A user study comparing the zoomorphic gestures to a more conventional light display for state communication during the assembly task showed that the zoomorphic gestures were easily recognized by participants as dog-like, even if the participants weren’t dog people. And the zoomorphic gestures were also more intuitively understood than the light displays, although the classification of each gesture wasn’t perfect. People also preferred the zoomorphic gestures over more abstract gestures designed to communicate the same concept. Or as the paper puts it, “Zoomorphic gestures are significantly more attractive and intuitive and provide more joy when using.” An online version of the study is here, so give it a try and provide yourself with some joy.

While zoomorphic gestures (at least in this very preliminary research) aren’t nearly as accurate at state communication as using something like a screen, they’re appealing because they’re compelling, easy to understand, inexpensive to implement, and less restrictive than sounds or screens. And there’s no reason why you can’t use both!

For a few more details, we spoke with the first author on this paper, Vanessa Sauer. 

IEEE Spectrum: Where did you get the idea for this research from, and why do you think it hasn't been more widely studied or applied in the context of practical cobots?

Vanessa Sauer: I'm a total dog person. During a conversation about dogs and how their ways of communicating with their owners have evolved over time (e.g., more expressive faces, easy to understand even without owning a dog), I got the rough idea for my research. I was curious to see if this intuitive understanding many people have of dog behavior could also be applied to cobots that communicate in a similar way. Especially in social robotics, approaches utilizing zoomorphic gestures have been explored. I guess that, due to their playful nature, less research and fewer applications have been done in the context of industrial robots, as they often have a stronger focus on efficiency.

How complex of a concept can be communicated in this way?

In our proof-of-concept-style approach, we communicated rather basic robot states. The challenge with more complex robot states would be to find intuitive parallels in dog behavior. Nonetheless, I believe that more complex states can also be communicated with dog-inspired gestures.

How would you like to see your research be put into practice?

I would enjoy seeing zoomorphic gestures offered as a modality option on cobots, especially cobots used in industry. I think that could have the potential to reduce inhibitions toward collaborating with robots and make the interaction more fun.

Photos: Franka Emika (robots); iStockphoto (dogs)

Zoomorphic Gestures for Communicating Cobot States, by Vanessa Sauer, Axel Sauer, and Alexander Mertens from Aachen University and TUM, will be published in RA-L.
