When we look at quadruped robots, it’s impossible not to compare them to quadruped animals like dogs and cats. Over the last several years, such robots have begun to approach the capabilities of their biological counterparts in just a few very specific situations, like walking without falling over. Biology provides a gold standard that robots are striving to reach, and it’s going to take us a very long time to make quadrupeds that can do everything that animals can.

The cool thing about robots, though, is that they don’t have to be constrained by biology, meaning that there’s always the potential for them to learn new behaviors that animals simply aren’t designed for. At IROS 2019 last week, we saw one such example, with a quadruped robot that’s able to climb vertical ladders.

Photo: Tokyo Metropolitan University To generate the robot’s autonomous climbing behavior, the researchers used a recurrent neural network that trained it to ascend the ladder. The behavior was created for this specific ladder, but the researchers plan to generalize the system so that the robot can climb new ladders without prior training.

A casual Google search makes it seem like vertical ladder climbing is quite challenging for biological quadrupeds. Dogs can do it, although usually you see them climbing ladders that are angled (leaning against something) rather than vertical. Cats are a bit better, but vertical ladders still look like a challenge for them, especially if they can’t use their claws to grip. The problem is that as the steepness of a ladder increases toward vertical, your center of mass moves farther and farther away from the rungs, and you have to support an increasing amount of your own weight by actively gripping rungs rather than just standing on them, which is a problem for animals that don’t have robust grasping systems.

Image: Tokyo Metropolitan University To climb the ladder, the quadruped robot is equipped with an inertial measurement unit (IMU), a time-of-flight 3D camera on its face, and touch and force sensors on each claw. An Intel NUC computer acts as the main control system, with an Arduino used as a secondary controller to manage the input-output signals of the internal sensors (force, touch, and IMU). The robot has 23 degrees of freedom (DoF): 5 DoF in each leg, 2 DoF for the dual laser rangefinder sensors, and 1 DoF for the head.

Most robotic quadrupeds don’t have robust grasping systems either, but adding such a system to a robot seems like a promising idea to explore. Roboticists at Tokyo Metropolitan University have built a cute little (7-kilogram) quadruped with 5-degree-of-freedom legs that include a sort of opposable thumb that turns its feet into grippers. It’s able to use those grippers to climb vertical, handrail-free ladders fully autonomously.

That transition from the ladder to the upper surface seems quite tricky to perform, and it’s particularly clever how the robot uses its hind legs to grasp the top rung and use it to propel itself onto the platform. It’s also worth noting that the autonomous system was trained on this specific ladder, and that it took five tries to get it right, although the researchers say that the failures were due to lack of actuator torque rather than their overall approach. They plan to fix this in future work, as well as to generalize the system so that it can climb new ladders without prior training.

“A Novel Capabilities of Quadruped Robot Moving Through Vertical Ladder Without Handrail Support,” by Azhar Aulia Saputra, Yuichiro Toda, Naoyuki Takesue, and Naoyuki Kubota from Tokyo Metropolitan University and Okayama University, was presented at IROS 2019 in Macau.

Learning from Demonstration (LfD) is a family of methods used to teach robots specific tasks. It helps address the increasing difficulty of performing manipulation tasks in a scalable manner. The state of the art in collaborative robots allows for simple LfD approaches that can handle limited parameter changes of a task. These methods, however, typically approach the problem from a control perspective and are therefore tied to specific robot platforms. In contrast, this paper proposes a novel motion planning approach that combines the benefits of LfD with generic motion planning, providing robustness to the planning process as well as scaling task learning both in the number of tasks and the number of robot platforms. Specifically, it introduces Dynamical Movement Primitives (DMP)-based LfD as initial trajectories for the Stochastic Optimization for Motion Planning (STOMP) framework. This allows for successful task execution even when the task parameters and the environment change. Moreover, the proposed approach allows for skill transfer between robots: a task is demonstrated to one robot via kinesthetic teaching and can be successfully executed by a different robot. The proposed approach, coined Guided Stochastic Optimization for Motion Planning (GSTOMP), is evaluated extensively using two different manipulator systems in simulation and in real conditions. Results show that GSTOMP improves task success compared to the simple LfD approaches employed by state-of-the-art collaborative robots. Moreover, transferring skills is shown to be feasible and to perform well. Finally, the proposed approach is compared against a plethora of state-of-the-art motion planners; the results show that its motion planning performance is comparable to or better than the state of the art.
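As a rough illustration of the core idea only, not the authors' implementation, the sketch below rolls a 1-D demonstration out through a simple DMP-like system and then refines it with a STOMP-style stochastic update. The function names, gains, and toy obstacle cost are all assumptions made for this example.

```python
# Illustrative sketch (not the GSTOMP code): seed a STOMP-style stochastic
# trajectory optimizer with a trajectory reproduced from a demonstration.
import numpy as np

def dmp_rollout(demo, n_steps=100, alpha=25.0, beta=6.25):
    """Reproduce a 1-D demonstration with a critically damped DMP-like system."""
    g, y, dy = demo[-1], demo[0], 0.0          # goal and start taken from the demo
    dt = 1.0 / n_steps
    # crude forcing term taken from the demo's acceleration profile
    f = np.gradient(np.gradient(np.interp(np.linspace(0, 1, n_steps),
                                          np.linspace(0, 1, len(demo)), demo)))
    traj = []
    for t in range(n_steps):
        ddy = alpha * (beta * (g - y) - dy) + f[t] / dt**2
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

def stomp_refine(init_traj, cost_fn, n_iters=50, n_samples=20, noise=0.05):
    """STOMP-style update: sample noisy trajectories, reweight them by cost."""
    theta = init_traj.copy()
    for _ in range(n_iters):
        eps = noise * np.random.randn(n_samples, len(theta))
        costs = np.array([cost_fn(theta + e) for e in eps])
        w = np.exp(-(costs - costs.min()) / (np.ptp(costs) + 1e-9))
        w /= w.sum()
        theta = theta + w @ eps                # expected noise-weighted update
    return theta

# Toy usage: follow the demo while avoiding an "obstacle" near y = 0.5.
demo = np.linspace(0.0, 1.0, 50)
cost = lambda traj: np.sum(np.exp(-((traj - 0.5) ** 2) / 0.01)) + \
       10.0 * np.sum(np.diff(traj) ** 2)       # obstacle cost + smoothness cost
refined = stomp_refine(dmp_rollout(demo), cost)
print(refined[:5])
```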

Soft robots have recently received much attention for their infinite degrees of freedom and continuously deformable structures, which allow them to adapt well to unstructured environments. A new type of soft actuator, namely the dielectric elastomer actuator (DEA), which has several excellent properties such as large deformation and high energy density, is investigated in this study. Furthermore, a DEA-based soft robot is designed and developed. Because accurate modeling is difficult due to nonlinear electromechanical coupling and viscoelasticity, the iterative learning control (ILC) method is employed for motion trajectory tracking with an uncertain model of the DEA. A D2-type ILC algorithm is proposed for the task. Furthermore, a knowledge-based model framework with kinematic analysis is explored to prove the convergence of the proposed ILC. Finally, both simulations and experiments are conducted to demonstrate the effectiveness of the ILC; the results show that excellent tracking performance can be achieved by the soft crawling robot.
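To make the control idea concrete, here is a minimal sketch of a D2-type ILC loop, in which the feedforward input is updated after each trial using the second derivative of the tracking error. The plant below is a toy linear mass-spring-damper standing in for the DEA, and the gain and reference are illustrative assumptions rather than the paper's values.

```python
# Minimal D2-type iterative learning control (ILC) sketch on a toy plant.
import numpy as np

dt, T = 0.01, 2.0
t = np.arange(0.0, T, dt)
y_ref = 0.5 * (1 - np.cos(np.pi * t / T))        # desired stroke (assumed)

def plant(u):
    """Toy mass-spring-damper response to input u (stand-in for the DEA)."""
    y = np.zeros_like(u)
    v = 0.0
    for k in range(1, len(u)):
        a = u[k - 1] - 2.0 * v - 5.0 * y[k - 1]  # m=1, damping=2, stiffness=5
        v += a * dt
        y[k] = y[k - 1] + v * dt
    return y

u = np.zeros_like(t)                             # start with no feedforward
gamma = 0.8                                      # D2 learning gain (assumed)
for trial in range(20):
    e = y_ref - plant(u)                         # tracking error on this trial
    e_dd = np.gradient(np.gradient(e, dt), dt)   # second derivative of error
    u = u + gamma * e_dd                         # D2-type ILC update law
    print(f"trial {trial:2d}  max |e| = {np.max(np.abs(e)):.4f}")
```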

This is part three of a six-part series on the history of natural language processing.

In 1913, the Russian mathematician Andrey Andreyevich Markov sat down in his study in St. Petersburg with a copy of Alexander Pushkin’s 19th century verse novel, Eugene Onegin, a literary classic at the time. Markov, however, did not start reading Pushkin’s famous text. Rather, he took a pen and piece of drafting paper, and wrote out the first 20,000 letters of the book in one long string of letters, eliminating all punctuation and spaces. Then he arranged these letters in 200 grids (10-by-10 characters each) and began counting the vowels in every row and column, tallying the results.

To an onlooker, Markov’s behavior would have appeared bizarre. Why would someone deconstruct a work of literary genius in this way, rendering it incomprehensible? But Markov was not reading the book to learn lessons about life and human nature; he was searching for the text's more fundamental mathematical structure.

In separating the vowels from the consonants, Markov was testing a theory of probability that he had been developing since 1909. Up until that point, the field of probability had been mostly limited to analyzing phenomena like roulette or coin flipping, where the outcome of previous events does not change the probability of current events. But Markov felt that most things happen in chains of causality and are dependent on prior outcomes. He wanted a way of modeling these occurrences through probabilistic analysis.

Language, Markov believed, was an example of a system where past occurrences partly determine present outcomes. To demonstrate this, he wanted to show that in a text like Pushkin’s novel, the chance of a certain letter appearing at some point in the text is dependent, to some extent, on the letter that came before it.

To do so, Markov began counting vowels in Eugene Onegin, and found that 43 percent of the letters were vowels and 57 percent were consonants. Then Markov sorted the 20,000 letters into pairs and tallied the vowel and consonant combinations: He found 1,104 vowel-vowel pairs, 3,827 consonant-consonant pairs, and 15,069 mixed vowel-consonant and consonant-vowel pairs. What this demonstrated, statistically speaking, was that for any given letter in Pushkin’s text, if it was a vowel, odds were that the next letter would be a consonant, and vice versa.
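Markov's bookkeeping is easy to reproduce on any text. The snippet below is an English-language stand-in for his Russian exercise: it strips spaces and punctuation, labels each letter as a vowel or consonant, and tallies adjacent pairs to estimate the kind of conditional probability he was after.

```python
# Reproduce Markov's vowel/consonant pair counting on an arbitrary string.
# (Markov worked with Russian; the vowel set here is an English stand-in.)
from collections import Counter

def markov_pair_counts(text):
    letters = [c.lower() for c in text if c.isalpha()]   # drop spaces/punctuation
    vowels = set("aeiou")
    labels = ["V" if c in vowels else "C" for c in letters]
    pairs = Counter(zip(labels, labels[1:]))             # adjacent-letter pairs
    # conditional probability that a consonant follows a vowel
    p_c_after_v = pairs[("V", "C")] / (pairs[("V", "C")] + pairs[("V", "V")])
    return pairs, p_c_after_v

sample = "The chance of a letter appearing depends on the letter before it."
pairs, p = markov_pair_counts(sample)
print(pairs, f"P(consonant | vowel) = {p:.2f}")
```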

Markov used this analysis to demonstrate that Pushkin’s Eugene Onegin wasn’t just a random distribution of letters but had some underlying statistical qualities that could be modeled. The enigmatic research paper that came out of this study, entitled “An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains,” was not widely cited in Markov’s lifetime, and not translated to English until 2006. But some of its central concepts around probability and language spread across the globe, eventually finding re-articulation in Claude Shannon’s hugely influential paper, “A Mathematical Theory of Communication,” which came out in 1948. 

Shannon’s paper outlined a way to precisely measure the quantity of information in a message, and in doing so, set the foundations for a theory of information that would come to define the digital age. Shannon was fascinated by Markov’s idea that in a given text, the likelihood of some letter or word appearing could be approximated. Like Markov, Shannon demonstrated this by performing some textual experiments that involved making a statistical model of language, then took a step further by trying to use the model to generate text according to those statistical rules.

In an initial control experiment, he generated a sentence by picking letters at random from a 27-symbol alphabet (26 letters, plus a space), and got the following output:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD

The sentence was meaningless noise, Shannon said, because when we communicate we don’t choose letters with equal probability. As Markov had shown, consonants are more likely than vowels. But at a greater level of granularity, E’s are more common than S’s, which are more common than Q’s. To account for this, Shannon amended his original alphabet so that it modeled the probability of English more closely—he was 11 percent more likely to draw an E from the alphabet than a Q. When he again drew letters at random from this recalibrated corpus, he got a sentence that came a bit closer to English.

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

In a series of subsequent experiments, Shannon demonstrated that as you make the statistical model even more complex, you get increasingly more comprehensible results. Shannon, via Markov, revealed a statistical framework for the English language, and showed that by modeling this framework—by analyzing the dependent probabilities of letters and words appearing in combination with each other—he could actually generate language.
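A toy version of Shannon's procedure takes only a few lines: estimate which letters tend to follow which from a corpus, then sample new "text" from those statistics. The corpus string below is a placeholder; Shannon tabulated English letter frequencies by hand.

```python
# First-order letter-level Markov text generation, Shannon-style.
import random
from collections import defaultdict

corpus = ("the time of who ever told the problem for an unexpected "
          "the head and in frontal attack on an english writer")
alphabet = "abcdefghijklmnopqrstuvwxyz "

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)                 # empirical next-letter samples

def generate(n_chars=80, seed="t"):
    out = [seed]
    for _ in range(n_chars):
        nxt = transitions.get(out[-1])
        out.append(random.choice(nxt) if nxt else random.choice(alphabet))
    return "".join(out)

print(generate())
```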

“THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED” —Claude Shannon’s language generating model

The more complex the statistical model of a given text, the more accurate the language generation becomes—or as Shannon put it, the greater “resemblance to ordinary English text.” In the final experiment, Shannon drew from a corpus of words instead of letters and achieved the following:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

For both Shannon and Markov, the insight that language’s statistical properties could be modeled offered a way to re-think broader problems that they were working on.

For Markov, it extended the study of stochasticity beyond mutually independent events, paving the way for a new era in probability theory. For Shannon, it helped him formulate a precise way of measuring and encoding units of information in a message, which revolutionized telecommunications and, eventually, digital communication. But their statistical approach to language modeling and generation also ushered in a new era for natural language processing, which has ramified through the digital age to this day.

This is the third installment of a six-part series on the history of natural language processing. Last week’s post described Leibniz’s proposal for a machine that combined concepts to form reasoned arguments. Come back next Monday for part four, “Why People Demanded Privacy to Confide in the World’s First Chatbot.”

You can also check out our prior series on the untold history of AI.

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume? 

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.” 

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.” 

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101. 

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume. 

Janelle Shane on . . .

  1. The un-delicious origin of her blog
  2. “The narrower the problem, the smarter the AI will seem”
  3. Why overestimating AI is dangerous
  4. Giraffing!
  5. Machine and human creativity

  1. The un-delicious origin of her blog

    IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI? 

    Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes–some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.  

    I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.

    Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about? 

    Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all. 

    Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?

    Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set. 

  2. “The narrower the problem, the smarter the AI will seem”

    Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game. 

    Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem. 

    The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. If this same algorithm is asked to generate images of any animal, if its task is that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.

    Spectrum: That sounds… disturbing. 

    Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”

  3. Why overestimating AI is dangerous

    Spectrum: Do you see it as your role to puncture the AI hype? 

    Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn't it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding it has. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is. 

    Spectrum: If people overestimate the abilities of AI, what risk does that pose? 

    Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.” 

    “If the algorithm’s task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias.” —Janelle Shane, AI Weirdness blogger

    That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand. 

    If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias. 

    Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks? 

    Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is. 

  4. Giraffing

    Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?

    Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns. 

    Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?

    Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks. 

    There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two. 

  5. Machine and human creativity

    Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?

    Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people. 

    The Halloween costume algorithm “was able to draw on its knowledge of which words are related to suggest things like sexy barnacle.”  —Janelle Shane, AI Weirdness blogger

    Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd? 

    Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman. 

    Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested? 

    Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts. 

China says it’s ready to attempt something only NASA has so far achieved—successfully landing a rover on Mars.

#China unveils first picture of its Mars explorer https://t.co/FnSu04Uv0h pic.twitter.com/0coWStBZxV

— CGTN (@CGTNOfficial) October 12, 2019

It will be China’s first independent attempt at an interplanetary mission, and comes with two ambitious goals. Launching in 2020, China’s Mars mission will attempt to put a probe in orbit around Mars and, separately, land a rover on the red planet. 

The mission was approved in early 2016, but updates have been few and far between. Last week, a terse update (available here in Chinese) from the Xi'an Aerospace Propulsion Institute, a subsidiary of CASC, China's main space contractor, revealed that the spacecraft’s propulsion system had passed all necessary tests. 

According to the report, the Shanghai Institute of Space Propulsion has completed tests of the spacecraft's propulsion system for the hovering, hazard avoidance, slow-down, and landing stages of a Mars landing attempt. The successful tests verified the performance and control of the propulsion system, in which one engine producing 7,500 Newtons of thrust will provide the majority of force required to decelerate the spacecraft for landing.

China has also previously completed tests of the supersonic parachutes needed to slow the craft’s entry into the Martian atmosphere, which means China’s Mars spacecraft is close to ready for its mission.

China was initially considering several sites within two broad landing areas near Chryse Planitia, close to the landing sites of Viking 1 and Pathfinder, and another covering Isidis Planitia and stretching to the western edge of the Elysium Mons region.

According to a presentation at the European Planetary Science Congress-Division for Planetary Sciences Joint Meeting in Geneva in September, China has now chosen two preliminary sites near Utopia Planitia. The mission will have landing ellipses—the areas in which the spacecraft is statistically likely to land—of around 100 x 40 kilometers. 

Image: JPL/Texas A&M/Cornell/NASA NASA's Spirit rover captured this stunning view as the Sun sank below the rim of Gusev crater on Mars on 19 May, 2005.

China’s solar-powered Mars rover will, at 240 kilograms, be twice the mass of China’s two lunar rovers. It will carry navigation, topography, and multispectral cameras, a subsurface detection radar, a laser-induced breakdown spectroscopy instrument similar to Curiosity’s LIBS instrument, a Martian surface magnetic field detector, and a climate detector.

The orbiter will be equipped with a suite of science instruments including moderate- and high-resolution imagers. The pair of cameras will be used once in Mars orbit to image the preselected landing sites ahead of separation of the orbiter and rover.

The main barrier to China launching its mission is the status of the Long March 5 rocket required to get the 5-metric-ton spacecraft on its way to Mars. 

The Long March 5 is China’s largest launch vehicle; it had its first flight in 2016, but the second launch, in July 2017, failed to achieve orbit. Following at least two redesigns of the engines that power the rocket’s first stage, the Long March 5 is now ready to return to flight. 

The rocket is currently being assembled at the Wenchang Satellite Launch Center on Hainan island in southern China, with launch expected in late December. The mission will aim to send a large satellite into geostationary orbit, and in doing so prove the rocket is ready for the later Mars mission launch.

If all goes well, China will join NASA’s Mars 2020 mission, the United Arab Emirates’ Hope Mars Mission and, if parachute issues can be overcome, the ExoMars 2020 mission, in launching during a roughly three-week window from late July to early August 2020. With the advantage of the favorable relative positions of Earth and Mars at that time—allowing an efficient path known as a Hohmann transfer—the spacecraft would arrive at the red planet around February 2021. 

If the Long March 5 does not come through its big test in late December, China will need to wait 26 months before the next Hohmann transfer window opens for Mars, in late 2022.

Getting to Mars is only part of the job. China has already landed spacecraft on the near and far sides of the moon, and members of the successful 2013 Chang’e-3 lunar mission team were assigned to the Mars project. However, landing on Mars presents extra challenges.

The surface gravity of Mars is just 38 percent that of Earth. Simulating the Martian gravitational field adds complexity to terrestrial testing of entry, descent, and landing (EDL) sequences.

Mars has an atmosphere which is too thin to properly aid descent, but thick enough to threaten fast-moving spacecraft with extreme heat from atmospheric friction and compression. This requires a spacecraft to have a heat shield and complex parachute systems which need to be deployed and jettisoned at precisely the right moments.

When the spacecraft arrives at Mars, it will be around 150 million kilometers from Earth, meaning commands traveling at the speed of light will take around 8 minutes to reach their target. This means the entire landing process must be automated. For NASA’s 2012 landing of the Curiosity rover, the team called this period the “7 minutes of terror.” 
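That round figure is simply the one-way light travel time, the Earth-Mars distance divided by the speed of light:

$$ t = \frac{d}{c} \approx \frac{1.5\times10^{11}\ \text{m}}{3.0\times10^{8}\ \text{m/s}} = 500\ \text{s} \approx 8.3\ \text{minutes} $$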

Several Mars missions have failed during that critical stage, including a 2016 effort by the European Space Agency and Roscosmos of Russia to plant the ExoMars Schiaparelli EDM lander, as well as numerous Soviet missions and NASA’s attempt with its 1999 Mars Polar Lander.

During robot-aided rehabilitation exercises, monotonous and repetitive actions can feel tedious and tiring to the subject, so improving the subject's motivation and active participation in the training is very important. A novel robot-aided upper limb rehabilitation training system, based on multimodal feedback, is proposed in this investigation. To increase the subject's interest and participation, a friendly graphical user interface and diversiform game-based rehabilitation training tasks incorporating multimodal feedback are designed to provide the subject with colorful and engaging motor training. During this training, appropriate visual, auditory, and tactile feedback is employed to improve the subject's motivation via multi-sensory incentives relevant to the training performance. This approach is similar to methods applied by physiotherapists to keep the subject focused on motor training tasks. The experimental results verify the effectiveness of the designed multimodal feedback strategy in promoting the subject's participation and motivation.

During both positive and negative dyadic exchanges, individuals will often unconsciously imitate their partner. A substantial amount of research has been conducted on this phenomenon, and such studies have shown that synchronization between communication partners can improve interpersonal relationships. Automatic computational approaches for recognizing synchrony are still in their infancy. In this study, we extend previous work in which we applied a novel method utilizing hand-crafted low-level acoustic descriptors and autoencoders (AEs) to analyse synchrony in the speech domain. For this purpose, a database consisting of 394 in-the-wild speakers from six different cultures is used. For each speaker in the dyadic exchange, two AEs are implemented. After the training phase, the acoustic features of one speaker are tested using the AE trained on their dyadic partner. In the same way, we also explore the benefits that deep representations from audio may have, implementing the state-of-the-art Deep Spectrum toolkit. For all speakers, at varied time points during their interaction, we calculate the reconstruction error from the AE trained on their respective dyadic partner. The results obtained from this acoustic analysis are then compared with linguistic experiments based on word counts and word embeddings generated by our word2vec approach. The results demonstrate that there is a degree of synchrony during all interactions. We also find that this degree varies across the six cultures found in the investigated database. These findings are further substantiated through the use of 4,096-dimensional Deep Spectrum features.
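As a rough sketch of the reconstruction-error idea: train a small autoencoder on one speaker's frame-level acoustic features, then score the partner's frames, reading a lower error as higher acoustic synchrony. Random arrays stand in for real acoustic features, and a small scikit-learn MLP stands in for the paper's autoencoders; none of this is the authors' pipeline.

```python
# Reconstruction-error synchrony sketch with placeholder "acoustic" features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
speaker_a = rng.normal(size=(500, 20))                  # placeholder feature frames
speaker_b = 0.7 * speaker_a[:400] + 0.3 * rng.normal(size=(400, 20))

# An MLP trained to reproduce its own input acts as a simple autoencoder.
ae_a = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae_a.fit(speaker_a, speaker_a)

def reconstruction_error(ae, frames):
    """Mean squared error between the frames and their reconstruction."""
    return float(np.mean((ae.predict(frames) - frames) ** 2))

# Lower error on the partner's frames is read as higher acoustic synchrony.
print("error on own frames:    ", reconstruction_error(ae_a, speaker_a))
print("error on partner frames:", reconstruction_error(ae_a, speaker_b))
```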

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This void of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History

The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples

To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

​3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?

Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley wasn’t actually proposed as a scientific concept, yet has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley

Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”
