Feed aggregator



A few years ago, Martin Ford published a book called Architects of Intelligence, in which he interviewed 23 of the most experienced AI and robotics researchers in the world. Those interviews are just as fascinating to read now as they were in 2018, but Ford's since had some extra time to chew on them, in the context of several years of somewhat disconcertingly rapid AI progress (and hype), coupled with the economic upheaval caused by the pandemic.

In his new book, Rule of the Robots: How Artificial Intelligence Will Transform Everything, Ford takes a markedly well-informed but still generally optimistic look at where AI is taking us as a society. It's not all good, and there are still a lot of unknowns, but Ford has a perspective that's both balanced and nuanced, and I can promise you that the book is well worth a read.

The following excerpt is a section entitled "Warning Signs," from the chapter "Deep Learning and the Future of Artificial Intelligence."

—Evan Ackerman

The 2010s were arguably the most exciting and consequential decade in the history of artificial intelligence. Though there have certainly been conceptual improvements in the algorithms used in AI, the primary driver of all this progress has simply been deploying more expansive deep neural networks on ever faster computer hardware where they can hoover up greater and greater quantities of training data. This "scaling" strategy has been explicit since the 2012 ImageNet competition that set off the deep learning revolution. In November of that year, a front-page New York Times article was instrumental in bringing awareness of deep learning technology to the broader public sphere. The article, written by reporter John Markoff, ends with a quote from Geoff Hinton: "The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There's no looking back now."

There is increasing evidence, however, that this primary engine of progress is beginning to sputter out. According to one analysis by the research organization OpenAI, the computational resources required for cutting-edge AI projects are "increasing exponentially" and doubling about every 3.4 months.
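For a sense of how fast a 3.4-month doubling time compounds, here is a minimal arithmetic sketch in Python; the dollar figure is purely illustrative and not from the OpenAI analysis.

```python
# Growth implied by a 3.4-month doubling time (figure from the OpenAI analysis
# cited above); the cost numbers below are purely illustrative.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Factor by which compute grows over `months` at the stated doubling rate."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"1 year : x{growth_factor(12):,.1f}")   # ~11.5x per year
print(f"2 years: x{growth_factor(24):,.1f}")   # ~133x over two years
print(f"5 years: x{growth_factor(60):,.1f}")   # ~205,000x over five years

# If a cutting-edge training run costs $1M today (hypothetical), the same
# growth curve implies:
cost_today = 1e6
for years in (1, 2, 5):
    print(f"after {years} year(s): ${cost_today * growth_factor(12 * years):,.0f}")
```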

In a December 2019 Wired magazine interview, Jerome Pesenti, Facebook's Vice President of AI, suggested that even for a company with pockets as deep as Facebook's, this would be financially unsustainable:

When you scale deep learning, it tends to behave better and to be able to solve a broader task in a better way. So, there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost [is] going up 10-fold. Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures, it's not possible, nobody can afford that.

Pesenti goes on to offer a stark warning about the potential for scaling to continue to be the primary driver of progress: "At some point we're going to hit the wall. In many ways we already have." Beyond the financial limits of scaling to ever larger neural networks, there are also important environmental considerations. A 2019 analysis by researchers at the University of Massachusetts, Amherst, found that training a very large deep learning system could potentially emit as much carbon dioxide as five cars over their full operational lifetimes.

Even if the financial and environmental impact challenges can be overcome—perhaps through the development of vastly more efficient hardware or software—scaling as a strategy simply may not be sufficient to produce sustained progress. Ever-increasing investments in computation have produced systems with extraordinary proficiency in narrow domains, but it is becoming increasingly clear that deep neural networks are subject to reliability limitations that may make the technology unsuitable for many mission critical applications unless important conceptual breakthroughs are made. One of the most notable demonstrations of the technology's weaknesses came when a group of researchers at Vicarious, a small company focused on building dexterous robots, performed an analysis of the neural network used in DeepMind's DQN, the system that had learned to dominate Atari video games. One test was performed on Breakout, a game in which the player has to manipulate a paddle to intercept a fast-moving ball. When the paddle was shifted just a few pixels higher on the screen—a change that might not even be noticed by a human player—the system's previously superhuman performance immediately took a nosedive. DeepMind's software had no ability to adapt to even this small alteration. The only way to get back to top-level performance would have been to start from scratch and completely retrain the system with data based on the new screen configuration.
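The paddle-shift experiment is straightforward to sketch. The following is a generic illustration of that kind of brittleness probe, not Vicarious's actual test harness: it translates each observation a few pixels before handing it to the policy. A classic Gym-style environment interface is assumed, and `policy` is a placeholder for whatever trained agent you already have.

```python
import numpy as np

def shift_up(frame: np.ndarray, pixels: int = 3) -> np.ndarray:
    """Shift a (H, W, C) screen image up by a few pixels, padding with zeros.

    This mimics the 'paddle moved a few pixels' perturbation described above.
    """
    shifted = np.zeros_like(frame)
    shifted[:-pixels] = frame[pixels:]
    return shifted

def evaluate(policy, env, episodes: int = 10, perturb: bool = False) -> float:
    """Average episode return, optionally with perturbed observations.

    Assumes the older Gym-style API where reset() returns an observation and
    step() returns (obs, reward, done, info).
    """
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            if perturb:
                obs = shift_up(obs)
            action = policy(obs)            # placeholder: your trained DQN
            obs, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# A brittle agent will show a large gap between these two numbers:
# baseline = evaluate(policy, env, perturb=False)
# shifted  = evaluate(policy, env, perturb=True)
```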

What this tells us is that while DeepMind's powerful neural networks do instantiate a representation of the Breakout screen, this representation remains firmly anchored to raw pixels even at the higher levels of abstraction deep in the network. There is clearly no emergent understanding of the paddle as an actual object that can be moved. In other words, there is nothing close to a human-like comprehension of the material objects that the pixels on the screen represent or the physics that govern their movement. It's just pixels all the way down. While some AI researchers may continue to believe that a more comprehensive understanding might eventually emerge if only there were more layers of artificial neurons, running on faster hardware and consuming still more data, I think this is very unlikely. More fundamental innovations will be required before we begin to see machines with a more human-like conception of the world.

This general type of problem, in which an AI system is inflexible and unable to adapt to even small unexpected changes in its input data, is referred to, among researchers, as "brittleness." A brittle AI application may not be a huge problem if it results in a warehouse robot occasionally packing the wrong item into a box. In other applications, however, the same technical shortfall can be catastrophic. This explains, for example, why progress toward fully autonomous self-driving cars has not lived up to some of the more exuberant early predictions.

As these limitations came into focus toward the end of the decade, there was a gnawing fear that the field had once again gotten over its skis and that the hype cycle had driven expectations to unrealistic levels. In the tech media and on social media, one of the most terrifying phrases in the field of artificial intelligence—"AI winter"—was making a reappearance. In a January 2020 interview with the BBC, Yoshua Bengio said that "AI's abilities were somewhat overhyped . . . by certain companies with an interest in doing so."

My own view is that if another AI winter indeed looms, it's likely to be a mild one. Though the concerns about slowing progress are well founded, it remains true that over the past few years AI has been deeply integrated into the infrastructure and business models of the largest technology companies. These companies have seen significant returns on their massive investments in computing resources and AI talent, and they now view artificial intelligence as absolutely critical to their ability to compete in the marketplace. Likewise, nearly every technology startup is now, to some degree, investing in AI, and companies large and small in other industries are beginning to deploy the technology. This successful integration into the commercial sphere is vastly more significant than anything that existed in prior AI winters, and as a result the field benefits from an army of advocates throughout the corporate world and has a general momentum that will act to moderate any downturn.

There's also a sense in which the fall of scalability as the primary driver of progress may have a bright side. When there is a widespread belief that simply throwing more computing resources at a problem will produce important advances, there is significantly less incentive to invest in the much more difficult work of true innovation. This was arguably the case, for example, with Moore's Law. When there was near absolute confidence that computer speeds would double roughly every two years, the semiconductor industry tended to focus on cranking out ever faster versions of the same microprocessor designs from companies like Intel and Motorola. In recent years, the acceleration in raw computer speeds has become less reliable, and our traditional definition of Moore's Law is approaching its end game as the dimensions of the circuits imprinted on chips shrink to nearly atomic size. This has forced engineers to engage in more "out of the box" thinking, resulting in innovations such as software designed for massively parallel computing and entirely new chip architectures—many of which are optimized for the complex calculations required by deep neural networks. I think we can expect the same sort of idea explosion to happen in deep learning, and artificial intelligence more broadly, as the crutch of simply scaling to larger neural networks becomes a less viable path to progress.

Excerpted from "Rule of the Robots: How Artificial Intelligence Will Transform Everything." Copyright 2021 Basic Books. Available from Basic Books, an imprint of Hachette Book Group, Inc.



Robotics, prosthetics, and other engineering applications routinely use actuators that imitate the contraction of animal muscles. However, the speed and efficiency of natural muscle fibers are a demanding benchmark. Despite new developments in actuation technologies, for the most part artificial muscles are either too large, too slow, or too weak.

Recently, a team of engineers from the University of California San Diego (UCSD) has described a new artificial microfiber made from liquid crystal elastomer (LCE) that replicates the tensile strength, quick responsiveness, and high power density of human muscles. "[The LCE] polymer is a soft material and very stretchable," says Qiguang He, the first author of their research paper. "If we apply external stimuli such as light or heat, this material will contract along one direction."

Though LCE-based soft actuators are common and can generate excellent actuation strain—between 50 and 80 percent—their response time, says He, is typically "very, very slow." The simplest way to make the fibers both responsive and fast was to reduce their diameter. To do so, the UCSD researchers used a technique called electrospinning, which involves the ejection of a polymer solution through a syringe or spinneret under high voltage to produce ultra-fine fibers. Electrospinning is used for the fabrication of small-scale materials, to produce microfibers with diameters between 10 and 100 micrometers. It is favored for its ability to create fibers with different morphological structures, and is routinely used in various research and commercial contexts.

The microfibers fabricated by the UCSD researchers were between 40 and 50 micrometers in diameter, about the width of a human hair, and much thinner than existing LCE fibers, some of which can be more than 0.3 millimeters thick. "We are not the first to use this technique to fabricate LCE fibers, but we are the first…to push this fiber further," He says. "We demonstrate how to control the actuation of the [fibers and measure their] actuation performance."

University of California, San Diego/Science Robotics

As proof-of-concept, the researchers constructed three different microrobotic devices using their electrospun LCE fibers. Their LCE actuators can be controlled thermo-electrically or using a near-infrared laser. When the LCE material is at room temperature, it is in a nematic phase: He explains that in this state, "the liquid crystals are randomly [located] with all their long axes pointing in essentially the same direction." When the temperature is increased, the material transitions into what is called an isotropic phase, in which its properties are uniform in all directions, resulting in a contraction of the fiber.

The results showed an actuation strain of up to 60 percent—which means a 10-centimeter-long fiber will contract to 4 centimeters—with a response speed of less than 0.2 seconds, and a power density of 400 watts per kilogram. This is comparable to human muscle fibers.
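Those headline figures are easy to sanity-check. A short arithmetic sketch using the numbers quoted above (the human-muscle benchmark range in the comment is approximate and not from this study):

```python
# Figures quoted above for the electrospun LCE microfibers.
strain          = 0.60   # actuation strain (60%)
response_time_s = 0.2    # seconds
power_density   = 400.0  # W/kg

length_cm = 10.0
contracted_cm = length_cm * (1 - strain)
print(f"A {length_cm:.0f} cm fiber contracts to {contracted_cm:.0f} cm")  # 4 cm

# Human skeletal muscle power density is commonly cited on the order of
# 50-300 W/kg (approximate; it varies by source and by sustained vs. peak
# output), which is why a 400 W/kg fiber is described as comparable to
# muscle on this metric. Treat that range as a rough benchmark only.
```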

An electrically controlled soft actuator, the researchers note, allows easy integration with low-cost electronic devices, which is a plus for microrobotic systems and devices. Electrospinning is a very efficient fabrication technique as well: "You can get 10,000 fibers in 15 minutes," He says.

That said, there are still a number of challenges that need to be addressed. "The one limitation of this work is…[when we] apply heat or light to the LCE microfiber, the energy efficiency is very small—it's less than 1 percent," says He. "So, in future work, we may think about how to trigger the actuation in a more energy-efficient way."

Another constraint is that the nematic–isotropic phase transition in the electrospun LCE material takes place at a very high temperature, over 90 °C. "So, we cannot directly put the fiber into the human body [which] is at 35 degrees." One way to address this issue might be to use a different kind of liquid crystal: "Right now we use RM 257 as a liquid crystal [but] we can change [it] to another type [to reduce] the phase transition temperature."

He, though, is optimistic about the possibilities for expanding this research on electrospun LCE microfiber actuators. "We have also demonstrated [that] we can arrange multiple LCE fibers in parallel…and trigger them simultaneously [to increase force output]… This is a future work [in which] we will try to see if it's possible for us to integrate these muscle fiber bundles into biomedical tissue."



In medical tasks such as human motion analysis, computer-aided auxiliary systems have become the preferred choice of human experts because of their high efficiency. However, conventional approaches are typically based on user-defined features such as movement onset times, peak velocities, motion vectors, or frequency domain analyses. Such approaches require careful data post-processing or specific domain knowledge to achieve meaningful feature extraction. They are also prone to noise, and the manually defined features can hardly be reused for other analyses. In this paper, we propose using probabilistic movement primitives (ProMPs), a widely used approach in robot skill learning, to model human motions. The benefit of ProMPs is that the features are learned directly from the data, and ProMPs can capture important features describing the trajectory shape, which can easily be extended to other tasks. Distinct from previous research, where classification tasks are mostly investigated, we apply ProMPs together with a variant of the Kullback-Leibler (KL) divergence to quantify the effect of different transcranial current stimulation methods on human motions. We present initial results with 10 participants. The results validate ProMPs as a robust and effective feature extractor for human motions.
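To make the ProMP-plus-KL idea concrete, here is a minimal, self-contained sketch (not the authors' code): each trajectory is projected onto radial-basis-function weights, a Gaussian is fitted over the weights for each experimental condition, and the closed-form KL divergence between the two Gaussians quantifies how much the movement distribution changed. Variable names and data shapes are assumptions for illustration.

```python
import numpy as np

def rbf_features(T: int, n_basis: int = 10, width: float = 0.05) -> np.ndarray:
    """(T, n_basis) matrix of normalized Gaussian basis functions over phase [0, 1]."""
    phase = np.linspace(0, 1, T)[:, None]
    centers = np.linspace(0, 1, n_basis)[None, :]
    phi = np.exp(-0.5 * (phase - centers) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)

def promp_weights(trajectories: np.ndarray, n_basis: int = 10) -> np.ndarray:
    """Least-squares ProMP weights for (N, T) 1-D trajectories -> (N, n_basis)."""
    phi = rbf_features(trajectories.shape[1], n_basis)           # (T, K)
    return np.linalg.lstsq(phi, trajectories.T, rcond=None)[0].T  # (N, K)

def gaussian_kl(mu0, cov0, mu1, cov1) -> float:
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def condition_stats(trajectories: np.ndarray):
    """Mean and covariance of the ProMP weight distribution for one condition."""
    w = promp_weights(trajectories)
    # A small ridge term keeps the covariance invertible with few trials.
    return w.mean(axis=0), np.cov(w.T) + 1e-6 * np.eye(w.shape[1])

# Usage sketch (hypothetical data): trajectories recorded under sham vs. active
# stimulation, each an (N_trials, T_samples) array of one movement coordinate.
# mu_a, cov_a = condition_stats(sham_trajectories)
# mu_b, cov_b = condition_stats(stim_trajectories)
# print("KL divergence:", gaussian_kl(mu_a, cov_a, mu_b, cov_b))
```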

When will it make sense to consider robots candidates for moral standing? Major disagreements exist between those who find that question important and those who do not, and also between those united in their willingness to pursue the question. I narrow in on the approach to robot rights called relationalism, and ask: if we provide robots moral standing based on how humans relate to them, are we moving past human chauvinism, or are we merely putting a new dress on it? The background for the article is the clash between those who argue that robot rights are possible and those who see a fight for robot rights as ludicrous, unthinkable, or just outright harmful and disruptive for humans. The latter group are by some branded human chauvinists and anthropocentric, and they are criticized and portrayed as backward, unjust, and ignorant of history. Relationalism, in contrast, purportedly opens the door for considering robot rights and moving past anthropocentrism. However, I argue that relationalism is, quite to the contrary, a form of neo-anthropocentrism that recenters human beings and their unique ontological properties, perceptions, and values. I do so by raising three objections: 1) relationalism centers human values and perspectives, 2) it is indirectly a type of properties-based approach, and 3) edge cases reveal potentially absurd implications in practice.

Can robots help children be more creative? In this work, we posit social robots as creativity support tools for children in collaborative interactions. Children learn creative expressions and behaviors through social interactions with others during playful and collaborative tasks, and socially emulate their peers’ and teachers’ creativity. Social robots have a unique ability to engage in social and emotional interactions with children that can be leveraged to foster creative expression. We focus on two types of social interactions: creativity demonstration, where the robot exhibits creative behaviors, and creativity scaffolding, where the robot poses challenges, suggests ideas, provides positive reinforcement, and asks questions to scaffold children’s creativity. We situate our research in three playful and collaborative tasks - the Droodle Creativity game (that affords verbal creativity), the MagicDraw game (that affords figural creativity), and the WeDo construction task (that affords constructional creativity), that children play with Jibo, a social robot. To evaluate the efficacy of the robot’s social behaviors in enhancing creative behavior and expression in children, we ran three randomized controlled trials with 169 children in the 5–10 yr old age group. In the first two tasks, the robot exhibited creativity demonstration behaviors. We found that children who interacted with the robot exhibiting high verbal creativity in the Droodle game and high figural creativity in the MagicDraw game also exhibited significantly higher creativity than a control group of participants who interacted with a robot that did not express creativity (p < 0.05*). In the WeDo construction task, children who interacted with the robot that expressed creative scaffolding behaviors (asking reflective questions, generating ideas and challenges, and providing positive reinforcement) demonstrated higher creativity than participants in the control group by expressing a greater number of ideas, more original ideas, and more varied use of available materials (p < 0.05*). We found that both creativity demonstration and creativity scaffolding can be leveraged as social mechanisms for eliciting creativity in children using a social robot. From our findings, we suggest design guidelines for pedagogical tools and social agent interactions to better support children’s creativity.

Reinforcement Learning (RL) controllers have proved effective at tackling the dual objectives of path following and collision avoidance. However, finding which RL algorithm setup optimally trades off these two tasks is not necessarily easy. This work proposes a methodology for exploring that trade-off by analyzing the performance and task-specific behavioral characteristics of a range of RL algorithms applied to path following and collision avoidance for underactuated surface vehicles in environments of increasing complexity. Compared to the other RL algorithms considered, the results show that the Proximal Policy Optimization (PPO) algorithm exhibits superior robustness to changes in the environment complexity and the reward function, and when generalized to environments with a considerable domain gap from the training environment. Whereas the proposed reward function significantly improves the competing algorithms' ability to solve the training environment, an unexpected consequence of the dimensionality reduction in the sensor suite, combined with the domain gap, is identified as the source of their impaired generalization performance.
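The "dual objectives" in the abstract are typically folded into a single scalar reward. As an illustration only (not the paper's actual reward function), a common way to combine path following and collision avoidance for a surface vehicle looks something like the sketch below; all gains and distances are made-up tuning parameters.

```python
import numpy as np

def reward(cross_track_error_m: float,
           heading_error_rad: float,
           min_obstacle_dist_m: float,
           collided: bool,
           k_cte: float = 0.1,
           k_avoid: float = 1.0,
           safe_dist_m: float = 20.0,
           collision_penalty: float = 100.0) -> float:
    """Illustrative reward trading off path following against collision avoidance.

    All gains and distances here are hypothetical tuning parameters, not
    values from the paper.
    """
    if collided:
        return -collision_penalty

    # Path-following term: largest when on the path and heading along it.
    path_term = np.exp(-k_cte * abs(cross_track_error_m)) * np.cos(heading_error_rad)

    # Collision-avoidance term: penalize proximity inside a safety radius.
    proximity = max(0.0, 1.0 - min_obstacle_dist_m / safe_dist_m)
    avoidance_term = -(proximity ** 2)

    return float(path_term + k_avoid * avoidance_term)
```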

Soft pneumatic actuators have become indispensable for many robotic applications due to their reliability, safety, and design flexibility. However, the currently available actuator designs can be challenging to fabricate, requiring labor-intensive and time-consuming processes like reinforcing fiber wrapping and elastomer curing. To address this issue, we propose to use simple-to-fabricate kirigami skins—plastic sleeves with carefully arranged slit cuts—to construct pneumatic actuators with pre-programmable motion capabilities. Such a kirigami skin, wrapped outside a cylindrical balloon, can transform the volumetric expansion from pneumatic pressure into anisotropic stretching and shearing, creating a combination of axial extension and twisting in the actuator. Moreover, the kirigami skin exhibits out-of-plane buckling near the slit cuts, which enables high stretchability. To capture such complex deformations, we formulate and experimentally validate a new kinematics model to uncover the linkage between the kirigami cutting pattern design and the actuator's motion characteristics. This model uses a virtual-fold and rigid-facet assumption to simplify the motion analysis without sacrificing accuracy. Moreover, we test the pressure-stroke performance and elastoplastic behaviors of the kirigami-skinned actuator to establish an operation protocol for repeatable performance. Analytical and experimental parametric analysis shows that one can effectively pre-program the actuator's motion performance, with considerable freedom, simply by adjusting the angle and length of the slit cuts. The results of this study can establish the design and analysis framework for a new family of kirigami-skinned pneumatic actuators for many robotic applications.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
ROSCon 2021 – October 20-21, 2021 – [Online Event]

Let us know if you have suggestions for next week, and enjoy today's videos.

Gaze is an extremely powerful and important signal during human-human communication and interaction, conveying intentions and informing about others' decisions. What happens when a robot and a human interact looking at each other? Researchers at the Italian Institute of Technology (IIT) investigated whether a humanoid robot's gaze influences the way people reason in a social decision-making context.

[ Science Robotics ]

Reachy is here to help you make pancakes, for some value of "help."

Mmm, extra crunchy!

[ Pollen Robotics ]

It's surprising that a physical prototype of this unicorn (?) robot for kids even exists, but there's no way they're going to get it to run.

And it's supposed to be rideable, which seems like a fun, terrible idea.

[ Xpeng ] via [ Engadget ]

Segway's got a new robot mower now, which appears to use GPS (maybe enhanced with a stationary beacon?) to accurately navigate your lawn.

[ Segway ]

AVITA is a new robotic avatar company founded by Hiroshi Ishiguro. They've raised about $5 million USD in funding to start making Ishiguro's dreams come true, which is money well spent, I'd say.

[ Impress ]

It's interesting how sophisticated legged robots from Japan often start out with a very obvious "we're only working on the legs" design, where the non-legged part of the robot is an unapologetic box. Asimo and Schaft both had robots like this, and here's another one, a single-leg hopping robot from Toyota Technological Institute.

[ TTI ] via [ New Scientist ]

Thanks, Fan!

How to make a robot walking over an obstacle course more fun: costumes and sound effects!

These same researchers have an IROS paper with an untethered version of their robot; you can see it walking at about 10:30 in this presentation video.

[ Tsinghua ]

Thanks, Fan!

Bilateral teleoperation provides humanoid robots with human planning intelligence while enabling the human to feel what the robot feels. It has the potential to transform physically capable humanoid robots into dynamically intelligent ones. However, dynamic bilateral locomotion teleoperation remains a challenge due to the complex dynamics it involves. This work presents our initial step to tackle this challenge via the concept of wheeled humanoid robot locomotion teleoperation by body tilt.

[ RoboDesign Lab ]

This is an innovative design for a powered exoskeleton of sorts that can move on wheels but transform into legged mode to be able to climb stairs.

[ Atoun ]

Thanks, Fan!

I still have no idea why the Telexistence robot looks the way it does, but I love it.

[ Telexistence ]

In this video, we go over how SLAMcore's standard SDK can be integrated with the ROS1 Navigation Stack, enabling autonomous navigation of a Kobuki robot with an Intel RealSense D435i depth camera.

[ SLAMcore ]

Thanks, Fan!

Normally, I wouldn't recommend a two-hour-long video with just talking heads. But when one of those talking heads is Rod Brooks, you know that the entire two hours will be worth it.

[ Lex Fridman ]



New advances in robotics can help push the limits of the human body to make us faster or stronger. But now researchers from the Biorobotics Laboratory at Seoul National University (SNU) have designed an exosuit that corrects body posture. Their recent paper describes the Movement Reshaping (MR) Exosuit, which, rather than augmenting any part of the human body, couples the motion of one joint to lock or unlock the motion of another joint. It works passively, without any motors or batteries.

For instance, when attempting to lift a heavy object off the floor, most of us stoop from the waist, which is an injury-inviting posture. The SNU device hinders the stooping posture and helps correct it to a (safer) squatting one. "We call our methodology 'body-powered variable impedance'," says Kyu-Jin Cho, a biorobotics engineer and one of the authors, "[as] we can change the impedance of a joint by moving another."

Most lift-assist devices—such as Karl Zelik's HeroWear—are designed to reduce the wearer's fatigue by providing extra power and minimizing interference in their volitional movements, says co-author Jooeun Ahn. "On the other hand, our MR Exosuit is focusing on reshaping the wearer's lifting motion into a safe squatting form, as well as providing extra assistive force."

Movement reshaping exo-suit for safe lifting

The MR suit has been designed to mitigate injuries for workers in factories and warehouses who undertake repetitive lifting work. "Many lift-related injuries are caused not only by muscle fatigue but also by improper lifting posture," adds Keewon Kim, a rehabilitation medicine specialist at SNU College of Medicine, who also contributed to the study. Stooping is easier than squatting, and humans tend to choose the more comfortable strategy. "Because the deleterious effects of such comfortable but unsafe motion develop slowly, people do not perceive the risk in time, as in the case of disk degeneration."

The researchers designed a mechanism to lock the hip flexion when a person tries to stoop and unlock it when they try to squat. "We connected the top of the back to the foot with a unique tendon structure consisting of vertical brake cables and a horizontal rubber band," graduate researcher and first author of the study, Sung-Sik Yoon, explains. "When the hip is flexed while the knee is not flexed, the hip flexion torque is delivered to the foot through the brake cable, causing strong resistance to the movement. However, if the knees are widened laterally for squatting, the angle of the tendons changes, and the hip flexion torque is switched to be supported by the rubber band."
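The behavior Yoon describes amounts to a body-powered impedance switch, which can be captured in a toy model (this is an illustration, not the authors' analysis): when the hip flexes while the knees stay together, the stiff brake-cable path resists the motion; when the knees are spread for a squat, the load is rerouted to the compliant rubber band. All numbers below are made up.

```python
def hip_resistance_torque(hip_flexion_deg: float,
                          knee_abduction_deg: float,
                          squat_threshold_deg: float = 20.0,
                          k_cable: float = 5.0,
                          k_band: float = 0.3) -> float:
    """Toy model of the MR Exosuit's body-powered impedance switch.

    All stiffness values and the threshold are hypothetical illustrative
    numbers. Returns the resistive torque (arbitrary units) opposing hip flexion.
    """
    if knee_abduction_deg < squat_threshold_deg:
        # Knees not spread: stooping posture, load goes through the stiff cable.
        stiffness = k_cable
    else:
        # Knees spread laterally: squatting posture, the compliant band takes over.
        stiffness = k_band
    return stiffness * hip_flexion_deg

# In this toy model, a stoop (hip at 60 deg, knees straight) meets roughly
# 17x more resistance than a squat with the same hip angle:
print(hip_resistance_torque(60, 5))    # 300.0
print(hip_resistance_torque(60, 30))   # 18.0
```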

The device was tested on ten human participants, who were first-time users of the suit. Nine out of ten participants changed their motion pattern closer to the squatting form while wearing the exosuit. This, says Ahn, is a 35% improvement in the average postural index of 10 participants. They also noticed a 5.3% reduction in the average metabolic energy consumption of the participants. "We are now working on improving the MR Exosuit in order to test it in a real manual working place," Ahn adds. "We are going to start a field test soon."

"Wearable devices do not have to mimic the original anatomical structure of humans."

The researchers plan to commercialize the device next year, but there are still some kinks to work out. While the effectiveness of the suit has been verified in their paper, the long-term effects of wearing it have not. "In the future, we plan to conduct a longitudinal experiment in various fields that require lift posture training such as industrial settings, gyms, and rehabilitation centers," says Cho.

They are also planning a follow-up study to expand the principle of body-powered variable impedance to sports applications. "Many sports that utilize the whole body, such as golf, swimming, and running, require proper movement training to improve safety and performance," Cho continues. "As in this study, we will develop sportswear for motion training suitable for various sports activities using soft materials such as cables and rubber bands."

This study shows that artificial tendons whose structure is different from that of humans can effectively assist humans by reshaping the motor pattern, says Ahn. The current version of the exosuit can also be used to prevent inappropriate lifting motions of patients with poor spinal conditions. He and his colleagues expect that their design will lead to changes in future research on wearable robotics: "We demonstrated that wearable devices do not have to mimic the original anatomical structure of humans."



The power that computer vision has gained over the last decade or so has been astonishing. Thanks to machine learning techniques applied to large datasets of images and video, it's now much easier for robots to recognize (if not exactly understand) the world around them, and take intelligent (or at least significantly less unintelligent) actions based on what they see. This has empowered sophisticated autonomy in cars, but we haven't yet seen it applied to home robots, mostly because there aren't a lot of home robots around. Except, of course, robot vacuums.

Today, iRobot is announcing the j7, which the company calls its "most thoughtful robot vacuum." They call it that because, in a first for Roombas, the j7 has a front-facing visible light camera along with the hardware and software necessary to identify common floor-level obstacles and react to them in an intelligent way. This enables some useful new capabilities for the j7 in the short term, but it's the long-term potential for a camera-equipped in-home machine-learning platform that we find really intriguing. If, that is, iRobot can manage to make their robots smarter while keeping our data private at the same time.

Here's the new iRobot j7. Note that the j7+ is the version with the automatic dirt dock, but that when we're talking about the robot itself, it's just j7.

Roomba® j7 Robot Vacuum Product Overview

Obviously, the big news here on the hardware side is the camera, and we're definitely going to talk about that, especially since it enables software features that are unique to the j7. But iRobot is also releasing a major (and free) software update for all Roombas, called Genius 3.0. A year ago, we spoke with iRobot about their shift from autonomy to human-robot collaboration when it comes to home robot interaction, and Genius 3.0 adds some useful features based on this philosophy, including:

  • Clean While I'm Away: with your permission, the iRobot app will use your phone's location services to start cleaning when you leave the house, and pause cleaning when you return.
  • Cleaning Time Estimates: Roombas with mapping capability will now estimate how long a job will take them.
  • Quiet Drive: If you ask a Roomba to clean a specific area not adjacent to its dock, it will turn off its vacuum motor on the way there and the way back so as not to bother you more than it has to. For what it's worth, this has been the default behavior for Neato robots for years.

Broadly, this is part of iRobot's push to get people away from using the physical "Clean" button to just tackle every room at once, and to instead have a robot clean more frequently and in more targeted ways, like by vacuuming specific rooms at specific times that make sense within your schedule. This is a complicated thing to try to do, because every human is different, and that means that every home operates differently, leading to the kind of uncertainty that robots tend not to be great at.

"The operating system for the home already exists," iRobot CEO Colin Angle tells us. "It's completely organic, and humans live it every day." Angle is talking about the spoken and unspoken rules that you have in your home. Some of them might be obvious, like whether you wear shoes indoors. Some might be a little less obvious, like which doors tend to stay open and which ones are usually closed, or which lights are on or off and when. Some rules we're acutely aware of, and some are more like established habits that we don't want to change. "Robots, and technology in general, didn't have enough context to follow rules in the home," Angle says. "But that's no longer true, because we know where rooms are, we know what kind of day it is, and we know a lot about what's going on in the home. So, we should take this on, and start building technology that follows house rules."

The reason it's important for home robots to learn and follow these rules is that robots that don't are annoying, and iRobot has data to back this up: "The most lethal thing to a Roomba is a human being annoyed by its noise," Angle tells us. In other words, the most common reason Roombas don't complete jobs is that a human cancels the job partway through. iRobot, obviously, would prefer that its robots did not annoy you, and Genius 3.0 is trying to make that happen by finding ways for cleaning to happen in a rule-respecting manner.

"Alignment of expectation is incredibly important—if the robot doesn't do what you expect, you're going to be upset, the robot's going to take the abuse, and we really want to protect the mental well-being of our robots." -Colin Angle

Of course, very few people want to actually program all of these fiddly little human-centric schedules into their Roombas, which is too bad, because that would be the easiest way to solve a very challenging problem: understanding what a human would like a robot to do at any given time. Thanks to mapping and app connectivity, Roombas may have a much better idea of what's going on in the home than they used to, but humans are complicated and our homes and lives are complicated, too. iRobot is expanding ways in which it uses smart home data to influence the operation of its robots. Geofencing to know when you're home or not is one example of this, but it's easy to imagine other ways in which this could work. For instance, if your Roomba is vacuuming, and you get a phone call, it would be nice if the robot was clever enough to pause what it was doing until your call was done, right?
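iRobot hasn't published how any of this logic actually works, but the kind of "house rules" context gating Angle describes can be sketched as a simple policy layer. Everything below, from the rule names to the signals, is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HomeContext:
    # Hypothetical signals a scheduler might have access to, with permission.
    anyone_home: bool
    phone_call_active: bool
    kids_playing_on_floor: bool
    quiet_hours: bool

def should_vacuum_now(ctx: HomeContext) -> bool:
    """Toy 'house rules' gate: clean only when it won't annoy anyone."""
    if ctx.phone_call_active or ctx.kids_playing_on_floor or ctx.quiet_hours:
        return False
    # Geofence-style rule: prefer cleaning when nobody is home.
    return not ctx.anyone_home

# Example: everyone just left the house, no calls in progress -> start cleaning.
print(should_vacuum_now(HomeContext(anyone_home=False,
                                    phone_call_active=False,
                                    kids_playing_on_floor=False,
                                    quiet_hours=False)))   # True
```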

"It's absolutely all about these possibilities," Angle says. "It's about understanding more and more elements. How does your robot know if you're on the phone? What about if someone else is on the phone? Or if the kids are playing on the floor, maybe you don't want your robot to vacuum, but if they're playing but not on the floor, it's okay. Understanding the context of all of that and how it goes together is really where I think the differentiating features will be. But we're starting with what's most important and what will make the biggest change for users, and then we can customize from there."

"Having this idea of house rules, and starting to capture high level preferences as to what your smart home is, how it's supposed to behave, and enabling that with a continuously updating and transferable set of knowledge—we think this is a big, big deal." -Colin Angle

Unfortunately, the possibilities for customization rapidly start to get tricky from a privacy perspective. We'll get to the potential privacy issues with j7's front-facing camera in a little bit, but as we think about ways in which robots could better understand us, it's all about data. The more data that you give a home robot, the better it'll be able to fit into your life, but that might involve some privacy compromises, like sharing your location data, or giving a company access to information about your home, including, with the j7, floor level imagery of wherever you want vacuumed.

The j7 is not iRobot's first Roomba with a camera. It's also not iRobot's first Roomba with a front-facing sensor. It is iRobot's first Roomba with a front-facing visible light camera, though, which means a lot of things, most of them good.

The flagship feature with the j7 is that it can use its front-facing camera to recognize and react to specific objects in the home. This includes basic stuff like making maps and understanding what kind of room it's in based on what furniture it sees. It can also do more complicated things. For one, the j7 can identify and avoid headphones and power cords. It can also recognize shoes and socks (things that are most commonly found on floors), plus its own dock. And it can spot pet waste, because there's nothing more unpleasant than a Roomba shoving poo all over a floor that you were hoping to have cleaned.

Getting these object detection algorithms working involved a huge amount of training, and iRobot has internally collected and labeled more than a million images from more than a thousand homes around the world. Including, of course, images of poo.

"This is one of those stupid, glorious things. I don't know how many hundreds of models of poo we created out of Play-Doh and paint, and everyone [at iRobot] with a dog was instructed to take pictures whenever their dog pooed. And we actually made synthetic models of poo to try to further grow our database." -Colin Angle

Angle says that iRobot plans to keep adding more and more things that the j7 can recognize; they actually have more than 170 objects that they're working on right now, but just these four (shoes, socks, cords, and poo) are at a point where iRobot is confident enough to deploy the detectors on consumer robots. Cords in particular are impressive, especially when you start to consider how difficult it is to detect a pair of white Apple headphones on a white carpet, or a black power cord running across a carpet with a pattern of black squiggly lines all across it. This, incidentally, is why the j7 has a front LED on it: improving cord detection.

So far, all of this stuff is done entirely on-robot—the robot is doing object detection internally, as opposed to sending images to the cloud to be identified. But for more advanced behaviors, images do have to leave the robot, which is going to be a (hopefully small) privacy compromise. One advanced behavior is for the robot to send you a picture of an obstacle on the ground and ask you if you'd like to create a keep-out zone around that obstacle. If it's something temporary, like a piece of clothing that you're going to pick up, you'd tell the robot to avoid it this time but vacuum there next time. If it's a power strip, you'd tell the robot to avoid it permanently. iRobot doesn't get to see the pictures that the robot sends you as part of this process, but it does have to travel from the robot through a server and onto your phone, and while it's end-to-end encrypted, that does add a bit of potential risk that Roombas didn't have before.

One way that iRobot is trying to mitigate this privacy risk is to run a separate on-robot human detector. The job of the human detector is to identify images with humans in them, and make sure they get immediately deleted without going anywhere. I asked whether this is simply a face detector, or whether it could also detect (say) someone's butt after they'd just stepped out of the shower, and I was assured that it could recognize and delete human forms as well.
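The company hasn't detailed how the on-robot filter works, but the general pattern is simple: run a person detector on every candidate image before anything else touches it, and discard positives immediately. Here is a hedged sketch, where `person_score` is a stand-in for whatever on-device detector is actually used.

```python
from typing import Callable, Optional
import numpy as np

def filter_frame(frame: np.ndarray,
                 person_score: Callable[[np.ndarray], float],
                 threshold: float = 0.2) -> Optional[np.ndarray]:
    """Return the frame only if no person is detected; otherwise drop it.

    `person_score` is a placeholder for an on-device human detector returning
    a confidence in [0, 1]; the threshold here is deliberately conservative so
    that borderline detections are also discarded rather than uploaded.
    """
    if person_score(frame) >= threshold:
        return None          # deleted on-robot, never queued for the cloud
    return frame

# Only frames that pass this filter would ever be considered for the
# opt-in obstacle-query flow described above.
```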

If you're less concerned about privacy and want to help iRobot make your Roomba (and every other Roomba) smarter, these obstacle queries that the robot sends you will also include the option to anonymously share the image with iRobot. This is explicitly opt-in, but iRobot is hoping that people will be willing to participate.

The camera and software are obviously what's most interesting here, but I suppose we can spare a few sentences for the j7's design. A beveled edge around the robot makes it a little better at not getting stuck under things, and the auto-emptying clean base has been rearranged (rotated 90 degrees, in fact) to make it easier to fit under things.

Interestingly, the j7 is not a new flagship Roomba for iRobot—that honor still belongs to the s9, which has a bigger motor and a structured light 3D sensor at the front rather than a visible light camera. Apparently when the s9 was designed, iRobot didn't feel like cameras were quite good enough for what they wanted to do, especially with the s9's D-shape making precision navigation more difficult. But at this point, Angle says that the j7 is smarter, and will do better than the s9 in more complex home environments. I asked him to elaborate a bit:

I believe that the primary sensor for a robot should be a vision system. That doesn't mean that stereo vision isn't cool too, and there might be some things where some 3D range sensing can be helpful as a crutch. But I would tell you that in the autonomous car industry, turn the dial forward enough and you won't have scanning lasers. You'll just have vision. I think [lidar] is going to be necessary for a while, just because the stakes of screwing up with an autonomous driving car are just so high. But I'm saying that the end state of an autonomous driving car is going to be all about vision. And based on the world that Roombas live in, I think the end state of a Roomba is going to be a hundred percent vision sooner than autonomous cars. There's a question of can you extract depth from monocular vision well enough, or do we need to use stereo or something else while we're figuring that out, because ultimately, we want to pick stuff up. We want to manipulate the environment. And having rich 3D models of the world is going to be really important.

IEEE Spectrum: Can you tell me more about picking stuff up and manipulating the environment?

Colin Angle: Nope! I would just say, it's really exciting to watch us get closer to the day where manipulation will make sense in the home, because we're starting to know where stuff is, which is kind of the precursor for manipulation to start making any sense. In my prognostication or prediction mode, I would say that we're certainly within 10 years of seeing the first consumer robots with some kind of manipulation.

IEEE Spectrum: Do you feel like home cleaning robots have already transitioned from being primarily differentiated by better hardware to being primarily differentiated by better software?

Colin Angle: I'm gonna say that we're there, but I don't know whether the consumer realizes that we're there. And so we're in this moment where it's becoming true, and yet it's not generally understood to be true. Software is rapidly growing in its importance and ultimately will become the primary decision point in what kind of robots consumers want.

Finally, I asked Angle about what the capability for collecting camera data in users' homes means long-term. The context here has parallels with autonomous cars: One of the things that enabled the success of autonomous cars was the collection and analysis of massive amounts of data, but we simply don't have ways of collecting in-home data at that scale. Arguably, the j7 is the first camera-equipped mobile robot that's likely to see distribution into homes on any kind of appreciable scale, which could potentially provide an enormous amount of value to a company like iRobot. But can we trust iRobot to handle that data responsibly? Here is what Angle has to say:

The word 'responsibly' is a super important word. A big difference between outside and inside is that the inside of a home is a very private place, it's your sanctuary. A good way for us to really screw this up is to overreach, so we're airing on the side of full disclosure and caution. We've pledged that we'll never sell your data, and we try to retain only the data that are useful and valuable to doing the job that we're doing.

We believe that as we unlock new things that we could do, if we only had the data, we can then generate that data with user permission fairly quickly, because we have one of the largest installed fleets of machine learning capable devices in the world—we're getting close to double digit millions a year of Roombas sold, and that's pretty cool. So I think that there's a way to do this where we are trust-first, and if we can get permission to use data by offering a benefit, we could pretty rapidly grow our data set.

The iRobot j7 will be available in Europe and the United States within the next week or so for $650. The j7+, which includes the newly redesigned automatic dirt dock, will run you $850. And the Genius 3.0 software should now be available to all Roombas via an app update today.



The power that computer vision has gained over the last decade or so has been astonishing. Thanks to machine learning techniques applied to large datasets of images and video, it's now much easier for robots to recognize (if not exactly understand) the world around them, and take intelligent (or at least significantly less unintelligent) actions based on what they see. This has empowered sophisticated autonomy in cars, but we haven't yet seen it applied to home robots, mostly because there aren't a lot of home robots around. Except, of course, robot vacuums.

Today, iRobot is announcing the j7, which the company calls its "most thoughtful robot vacuum." The reason they call it that is because in a first for Roombas, the j7 has a front-facing visible light camera along with the hardware and software necessary to identify common floor-level obstacles and react to them in an intelligent way. This enables some useful new capabilities for the j7 in the short term, but it's the long-term potential for a camera-equipped in-home machine-learning platform that we find really intriguing. If, that is, iRobot can manage to make their robots smarter while keeping our data private at the same time.

Here's the new iRobot j7. Note that the j7+ is the version with the automatic dirt dock, but that when we're talking about the robot itself, it's just j7.

[Video: "Roomba® j7 Robot Vacuum Product Overview," via www.youtube.com]

Obviously, the big news here on the hardware side is the camera, and we're definitely going to talk about that, especially since it enables software features that are unique to the j7. But iRobot is also releasing a major (and free) software update for all Roombas, called Genius 3.0. A year ago, we spoke with iRobot about their shift from autonomy to human-robot collaboration when it comes to home robot interaction, and Genius 3.0 adds some useful features based on this philosophy, including:

  • Clean While I'm Away: With your permission, the iRobot app will use your phone's location services to start cleaning when you leave the house, and pause cleaning when you return (see the sketch after this list).
  • Cleaning Time Estimates: Roombas with mapping capability will now estimate how long a job will take them.
  • Quiet Drive: If you ask a Roomba to clean a specific area not adjacent to its dock, it will turn off its vacuum motor on the way there and the way back so as not to bother you more than it has to. For what it's worth, this has been the default behavior for Neato robots for years.
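
To make the "Clean While I'm Away" idea concrete, here is a minimal sketch of how a phone geofence event could be mapped onto start and pause commands. Everything in it is hypothetical: the class names, the start_job()/pause_job() methods, and the single-phone assumption are invented for illustration and are not part of iRobot's app or SDK.

```python
# Hypothetical sketch only: these names are invented for illustration and are
# not iRobot's software. Assumes a single phone reports geofence transitions
# for the whole household.

from dataclasses import dataclass


@dataclass
class GeofenceEvent:
    at_home: bool  # True when the phone re-enters the home geofence


class AwayCleaner:
    def __init__(self, robot):
        self.robot = robot       # assumed to expose start_job() / pause_job()
        self.job_active = False

    def on_geofence_event(self, event: GeofenceEvent) -> None:
        if not event.at_home and not self.job_active:
            self.robot.start_job()   # house is empty: start the clean
            self.job_active = True
        elif event.at_home and self.job_active:
            self.robot.pause_job()   # someone is back: stop being noisy
            self.job_active = False
```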

Broadly, this is part of iRobot's push to get people away from using the physical "Clean" button to just tackle every room at once, and to instead have a robot clean more frequently and in more targeted ways, like by vacuuming specific rooms at specific times that make sense within your schedule. This is a complicated thing to try to do, because every human is different, and that means that every home operates differently, leading to the kind of uncertainty that robots tend not to be great at.

"The operating system for the home already exists," iRobot CEO Colin Angle tells us. "It's completely organic, and humans live it every day." Angle is talking about the spoken and unspoken rules that you have in your home. Some of them might be obvious, like whether you wear shoes indoors. Some might be a little less obvious, like which doors tend to stay open and which ones are usually closed, or which lights are on or off and when. Some rules we're acutely aware of, and some are more like established habits that we don't want to change. "Robots, and technology in general, didn't have enough context to follow rules in the home," Angle says. "But that's no longer true, because we know where rooms are, we know what kind of day it is, and we know a lot about what's going on in the home. So, we should take this on, and start building technology that follows house rules."

The reason it's important for home robots to learn and follow house rules is that otherwise they're annoying, and iRobot has data to back this up: "The most lethal thing to a Roomba is a human being annoyed by its noise," Angle tells us. In other words, the most common reason a Roomba doesn't complete a job is that a human cancels it partway through. iRobot, obviously, would prefer that its robots did not annoy you, and Genius 3.0 is trying to make that happen by finding ways for cleaning to happen in a rule-respecting manner.

"Alignment of expectation is incredibly important—if the robot doesn't do what you expect, you're going to be upset, the robot's going to take the abuse, and we really want to protect the mental well-being of our robots." -Colin Angle

Of course, very few people want to actually program all of these fiddly little human-centric schedules into their Roombas, which is too bad, because that would be the easiest way to solve a very challenging problem: understanding what a human would like a robot to do at any given time. Thanks to mapping and app connectivity, Roombas may have a much better idea of what's going on in the home than they used to, but humans are complicated and our homes and lives are complicated, too. iRobot is expanding ways in which it uses smart home data to influence the operation of its robots. Geofencing to know when you're home or not is one example of this, but it's easy to imagine other ways in which this could work. For instance, if your Roomba is vacuuming, and you get a phone call, it would be nice if the robot was clever enough to pause what it was doing until your call was done, right?

"It's absolutely all about these possibilities," Angle says. "It's about understanding more and more elements. How does your robot know if you're on the phone? What about if someone else is on the phone? Or if the kids are playing on the floor, maybe you don't want your robot to vacuum, but if they're playing but not on the floor, it's okay. Understanding the context of all of that and how it goes together is really where I think the differentiating features will be. But we're starting with what's most important and what will make the biggest change for users, and then we can customize from there."

"Having this idea of house rules, and starting to capture high level preferences as to what your smart home is, how it's supposed to behave, and enabling that with a continuously updating and transferable set of knowledge—we think this is a big, big deal." -Colin Angle

Unfortunately, the possibilities for customization rapidly start to get tricky from a privacy perspective. We'll get to the potential privacy issues with the j7's front-facing camera in a little bit, but as we think about ways in which robots could better understand us, it's all about data. The more data that you give a home robot, the better it'll be able to fit into your life, but that might involve some privacy compromises, like sharing your location data, or giving a company access to information about your home, including, with the j7, floor-level imagery of wherever you want vacuumed.

The j7 is not iRobot's first Roomba with a camera. It's also not iRobot's first Roomba with a front-facing sensor. It is iRobot's first Roomba with a front-facing visible light camera, though, which means a lot of things, most of them good.

The flagship feature with the j7 is that it can use its front-facing camera to recognize and react to specific objects in the home. This includes basic stuff like making maps and understanding what kind of room it's in based on what furniture it sees. It also does more complicated things. For one, the j7 can identify and avoid headphones and power cords. It can also recognize shoes and socks (things that are most commonly found on floors), plus its own dock. And it can spot pet waste, because there's nothing more unpleasant than a Roomba shoving poo all over a floor that you were hoping to have cleaned.

Getting these object detection algorithms to work involved a huge amount of training, and iRobot has internally collected and labeled more than a million images from more than a thousand homes around the world. Including, of course, images of poo.

"This is one of those stupid, glorious things. I don't know how many hundreds of models of poo we created out of Play-Doh and paint, and everyone [at iRobot] with a dog was instructed to take pictures whenever their dog pooed. And we actually made synthetic models of poo to try to further grow our database." -Colin Angle

Angle says that iRobot plans to keep adding more and more things that the j7 can recognize; they actually have more than 170 objects that they're working on right now, but just these four (shoes, socks, cords, and poo) are at a point where iRobot is confident enough to deploy the detectors on consumer robots. Cord detection in particular is impressive, especially when you consider how difficult it is to detect a pair of white Apple headphones on a white carpet, or a black power cord running across a carpet with a pattern of black squiggly lines all across it. This, incidentally, is why the j7 has a front LED on it: improving cord detection.
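
For a rough sense of what the on-robot side of this could look like, here is a hypothetical sketch of reacting to those four detector classes. The class labels, confidence threshold, and planner.avoid() call are all assumptions for illustration; they are not iRobot's firmware interface.

```python
# Hypothetical illustration: labels, threshold, and planner.avoid() are
# assumptions for this sketch, not iRobot's actual software.

AVOID_CLASSES = {"power_cord", "shoe", "sock", "pet_waste"}
CONFIDENCE_THRESHOLD = 0.8  # assumed deployment threshold


def react_to_detections(detections, planner):
    """detections: iterable of (label, confidence, bounding_box) tuples."""
    for label, confidence, box in detections:
        if label in AVOID_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            # Route around the obstacle rather than driving over it;
            # pet waste in particular must never be touched.
            planner.avoid(box)
```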

So far, all of this stuff is done entirely on-robot—the robot is doing object detection internally, as opposed to sending images to the cloud to be identified. But for more advanced behaviors, images do have to leave the robot, which is going to be a (hopefully small) privacy compromise. One advanced behavior is for the robot to send you a picture of an obstacle on the ground and ask you if you'd like to create a keep-out zone around that obstacle. If it's something temporary, like a piece of clothing that you're going to pick up, you'd tell the robot to avoid it this time but vacuum there next time. If it's a power strip, you'd tell the robot to avoid it permanently. iRobot doesn't get to see the pictures that the robot sends you as part of this process, but they do have to travel from the robot through a server and onto your phone, and while they're end-to-end encrypted, that does add a bit of potential risk that Roombas didn't have before.

One way that iRobot is trying to mitigate this privacy risk is to run a separate on-robot human detector. The job of the human detector is to identify images with humans in them, and make sure they get immediately deleted without going anywhere. I asked whether this is simply a face detector, or whether it could also detect (say) someone's butt after they'd just stepped out of the shower, and I was assured that it could recognize and delete human forms as well.

If you're less concerned about privacy and want to help iRobot make your Roomba (and every other Roomba) smarter, these obstacle queries that the robot sends you will also include the option to anonymously share the image with iRobot. This is explicitly opt-in, but iRobot is hoping that people will be willing to participate.
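
Putting the pieces from the last few paragraphs together, here is a hypothetical sketch of how such an image-handling flow could be structured: images containing people are discarded on the robot, everything else may go to the owner as an end-to-end encrypted obstacle query, and sharing with iRobot only happens on an explicit opt-in. Every function and attribute name here is invented for this sketch and does not describe iRobot's actual software.

```python
# Hypothetical sketch of the flow described above; all names are invented for
# illustration and are not iRobot's software.

def handle_obstacle_image(image, human_detector, channel, planner, user_prefs):
    # 1. On-robot privacy filter: images containing people never leave the robot.
    if human_detector.contains_person(image):
        return  # discard immediately; nothing is stored or transmitted

    # 2. Ask the owner what to do; the query travels end-to-end encrypted.
    reply = channel.send_encrypted_query(image)

    if reply.action == "avoid_once":
        pass  # temporary obstacle, e.g. clothing the owner will pick up
    elif reply.action == "keep_out":
        planner.add_keep_out_zone(reply.obstacle_box)  # permanent, e.g. a power strip

    # 3. Sharing the image with iRobot is strictly opt-in.
    if user_prefs.share_anonymously and reply.share_this_image:
        channel.upload_anonymized(image)
```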

The camera and software are obviously what's most interesting here, but I suppose we can spare a few sentences for the j7's design. A beveled edge around the robot makes it a little better at not getting stuck under things, and the auto-emptying clean base has been rearranged (rotated 90 degrees, in fact) to make it easier to fit under things.

Interestingly, the j7 is not a new flagship Roomba for iRobot—that honor still belongs to the s9, which has a bigger motor and a structured light 3D sensor at the front rather than a visible light camera. Apparently when the s9 was designed, iRobot didn't feel like cameras were quite good enough for what they wanted to do, especially with the s9's D-shape making precision navigation more difficult. But at this point, Angle says that the j7 is smarter, and will do better than the s9 in more complex home environments. I asked him to elaborate a bit:

I believe that the primary sensor for a robot should be a vision system. That doesn't mean that stereo vision isn't cool too, and there might be some things where some 3D range sensing can be helpful as a crutch. But I would tell you that in the autonomous car industry, turn the dial forward enough and you won't have scanning lasers. You'll just have vision. I think [lidar] is going to be necessary for a while, just because the stakes of screwing up with an autonomous driving car are just so high. But I'm saying that the end state of an autonomous driving car is going to be all about vision. And based on the world that Roombas live in, I think the end state of a Roomba is going to be a hundred percent vision sooner than autonomous cars. There's a question of can you extract depth from monocular vision well enough, or do we need to use stereo or something else while we're figuring that out, because ultimately, we want to pick stuff up. We want to manipulate the environment. And having rich 3D models of the world is going to be really important.
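
Angle's aside about extracting depth from monocular vision versus falling back on stereo is easy to make concrete. The snippet below is not iRobot's pipeline; it's a generic illustration of the stereo route, recovering coarse metric depth from a calibrated stereo pair with OpenCV block matching, where the focal length, baseline, and input image filenames are placeholder assumptions.

```python
# Generic illustration only (not iRobot's pipeline): coarse metric depth from
# a rectified stereo pair using OpenCV block matching. Focal length, baseline,
# and the input filenames are placeholder assumptions.

import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0  # assumed focal length, in pixels
BASELINE_M = 0.06        # assumed distance between the two cameras, in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # to pixels

depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]  # Z = f*B/d
```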

IEEE Spectrum: Can you tell me more about picking stuff up and manipulating the environment?

Colin Angle: Nope! I would just say, it's really exciting to watch us get closer to the day where manipulation will make sense in the home, because we're starting to know where stuff is, which is kind of the precursor for manipulation to start making any sense. In my prognostication or prediction mode, I would say that we're certainly within 10 years of seeing the first consumer robots with some kind of manipulation.

IEEE Spectrum: Do you feel like home cleaning robots have already transitioned from being primarily differentiated by better hardware to being primarily differentiated by better software?

Colin Angle: I'm gonna say that we're there, but I don't know whether the consumer realizes that we're there. And so we're in this moment where it's becoming true, and yet it's not generally understood to be true. Software is rapidly growing in its importance and ultimately will become the primary decision point in what kind of robots consumers want.

Finally, I asked Angle about what the capability for collecting camera data in users' homes means long-term. The context here has parallels with autonomous cars: their success was enabled in part by the collection and analysis of massive amounts of data, but we simply don't have ways of collecting in-home data at that scale. Arguably, the j7 is the first camera-equipped mobile robot that's likely to see distribution into homes on any kind of appreciable scale, which could potentially provide an enormous amount of value to a company like iRobot. But can we trust iRobot to handle that data responsibly? Here is what Angle had to say:

The word 'responsibly' is a super important word. A big difference between outside and inside is that the inside of a home is a very private place, it's your sanctuary. A good way for us to really screw this up is to overreach, so we're erring on the side of full disclosure and caution. We've pledged that we'll never sell your data, and we try to retain only the data that are useful and valuable to doing the job that we're doing.

We believe that as we unlock new things that we could do, if we only had the data, we can then generate that data with user permission fairly quickly, because we have one of the largest installed fleets of machine learning capable devices in the world—we're getting close to double digit millions a year of Roombas sold, and that's pretty cool. So I think that there's a way to do this where we are trust-first, and if we can get permission to use data by offering a benefit, we could pretty rapidly grow our data set.

The iRobot j7 will be available in Europe and the United States within the next week or so for $650. The j7+, which includes the newly redesigned automatic dirt dock, will run you $850. And the Genius 3.0 software should be available to all Roombas via an app update today.

The influence of human-care service robots in human–robot interaction is becoming increasingly important because of the roles that robots are taking in today's and future society. Thus, we need to identify how humans can interact with, collaborate with, and learn from social robots more efficiently. Additionally, it is important to determine which robot modalities can increase humans' perceived likeness and knowledge acquisition and enhance human–robot collaboration. The present study aims to identify the social service robot modalities that best enhance the human learning process and the level of enjoyment from the interaction, and that even attract humans' attention when choosing a robot to collaborate with. Our target group was college students, specifically pre-service teachers. For this purpose, we designed two experiments, each one split into two parts. Both experiments used a between-groups design, and human participants watched the Nao robot performing a storytelling exercise about the history of robots in a museum-educational activity via video annotations. The robot's modalities were manipulated in terms of its body movements (expressive arm and head gestures) while performing the storytelling, its friendly attitude expressions and storytelling, and its personality traits. After the robot's storytelling, participants filled out a knowledge acquisition questionnaire and a self-reported enjoyment level questionnaire. In the second part, participants witnessed a conversation between the robots with the different modalities and were asked to choose the robot with which they wanted to collaborate in a similar activity. Results indicated that participants prefer to collaborate with robots with a cheerful personality and expressive body movements. Especially when they were asked to choose between two robots that were both cheerful and had expressive body movements, they preferred the one which originally told them the story. Moreover, participants did not prefer to collaborate with a robot with an extremely friendly attitude and storytelling style.

In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. Though in practice, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent works on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end effector manipulability polytopes, which are geometrical constructs that embed joint limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator’s motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm’s end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
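
As a rough illustration of the kind of construct this abstract refers to (not the paper's own implementation, whose details may differ), an end-effector velocity polytope can be built by pushing every corner of the joint-velocity box through the manipulator Jacobian and taking the convex hull; the sketch below does exactly that with NumPy and SciPy.

```python
# Illustrative only; the paper's construction may differ. The image of a box
# under a linear map is a zonotope, so mapping the box corners through the
# Jacobian and taking their convex hull yields the velocity polytope.

import itertools

import numpy as np
from scipy.spatial import ConvexHull


def velocity_polytope_vertices(jacobian: np.ndarray,
                               qdot_limits: np.ndarray) -> np.ndarray:
    """Vertices of {J @ qdot : |qdot_i| <= qdot_limits[i]}.

    jacobian:    (m x n) end-effector Jacobian (e.g. m = 3 for translation).
    qdot_limits: (n,) symmetric joint-speed limits.
    """
    corners = np.array(list(itertools.product(*[(-l, l) for l in qdot_limits])))
    mapped = corners @ jacobian.T        # (2**n, m) candidate vertices
    hull = ConvexHull(mapped)            # keep only the extreme points
    return mapped[hull.vertices]


# Example with a random 3x6 Jacobian and uniform joint-speed limits of 1 rad/s.
J = np.random.default_rng(0).normal(size=(3, 6))
vertices = velocity_polytope_vertices(J, np.ones(6))
```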

Laser microsurgery is the current gold standard surgical technique for the treatment of selected diseases in delicate organs such as the larynx. However, the operations require considerable surgical expertise and dexterity, and face significant limitations imposed by available technology, such as the requirement for direct line of sight to the surgical field, restricted access, and direct manual control of the surgical instruments. To change this status quo, the European project μRALP pioneered research towards a complete redesign of current laser microsurgery systems, focusing on the development of robotic micro-technologies to enable endoscopic operations. This has fostered awareness and interest in this field, which presents a unique set of needs, requirements and constraints, leading to research and technological developments beyond μRALP and its research consortium. This paper reviews the achievements and key contributions of such research, providing an overview of the current state of the art in robot-assisted endoscopic laser microsurgery. The primary target application considered is phonomicrosurgery, which is a representative use case involving highly challenging microsurgical techniques for the treatment of glottic diseases. The paper starts by presenting the motivations and rationale for endoscopic laser microsurgery, which leads to the introduction of robotics as an enabling technology for improved surgical field accessibility, visualization and management. Then, research goals, achievements, and the current state of different technologies that can build up to an effective robotic system for endoscopic laser microsurgery are presented. This includes research in micro-robotic laser steering, flexible robotic endoscopes, augmented imaging, assistive surgeon-robot interfaces, and cognitive surgical systems. Innovations in each of these areas are shown to provide sizable progress towards more precise, safer and higher quality endoscopic laser microsurgeries. Yet, the major impact is expected from the full integration of such individual contributions into a complete clinical surgical robotic system, as illustrated at the end of this paper with a description of preliminary cadaver trials conducted with the integrated μRALP system. Overall, the contribution of this paper lies in outlining the current state of the art and open challenges in the area of robot-assisted endoscopic laser microsurgery, which has important clinical applications even beyond laryngology.

Robots can play a significant role as assistive devices for people with movement impairment and mild cognitive deficit. In this paper we present an overview of the lightweight i-Walk intelligent robotic rollator, which offers cognitive and mobility assistance to the elderly and to people with light to moderate mobility impairment. The utility, usability, safety and technical performance of the device are investigated through a clinical study, which took place at a rehabilitation center in Greece involving real patients with mild to moderate cognitive and mobility impairment. This first evaluation study comprised a set of scenarios in a number of pre-defined use cases, including physical rehabilitation exercises, as well as mobility and ambulation involved in typical daily living activities of the patients. The design and implementation of this study are discussed in detail, along with the obtained results, which include both an objective and a subjective evaluation of the system operation, based on a set of technical performance measures and a validated questionnaire for the analysis of qualitative data, respectively. The study shows that the technical modules performed satisfactorily under real conditions, and that the users generally hold very positive views of the platform, considering it safe and reliable.

One of the key distinguishing aspects of underwater manipulation tasks is the perception challenges of the ocean environment, including turbidity, backscatter, and lighting effects. Consequently, underwater perception often relies on sonar-based measurements to estimate the vehicle’s state and surroundings, either standalone or in concert with other sensing modalities, to support the perception necessary to plan and control manipulation tasks. Simulation of the multibeam echosounder, while not a substitute for in-water testing, is a critical capability for developing manipulation strategies in the complex and variable ocean environment. Although several approaches exist in the literature to simulate synthetic sonar images, the methods in the robotics community typically use image processing and video rendering software to comply with real-time execution requirements. In addition to a lack of physics-based interaction model between sound and the scene of interest, several basic properties are absent in these rendered sonar images–notably the coherent imaging system and coherent speckle that cause distortion of the object geometry in the sonar image. To address this deficiency, we present a physics-based multibeam echosounder simulation method to capture these fundamental aspects of sonar perception. A point-based scattering model is implemented to calculate the acoustic interaction between the target and the environment. This is a simplified representation of target scattering but can produce realistic coherent image speckle and the correct point spread function. The results demonstrate that this multibeam echosounder simulator generates qualitatively realistic images with high efficiency to provide the sonar image and the physical time series signal data. This synthetic sonar data is a key enabler for developing, testing, and evaluating autonomous underwater manipulation strategies that use sonar as a component of perception.
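
To give a flavour of the point-based scattering idea (this is a toy sketch, not the simulator described in the abstract, and all parameter values are arbitrary assumptions), the snippet below coherently sums random-phase contributions from many point scatterers within each range cell of a single beam; the magnitude of the summed complex signal exhibits the characteristic speckle.

```python
# Toy sketch of the point-based scattering principle, not the paper's
# simulator: coherent summation of random-phase scatterer contributions per
# range cell produces speckle. All parameter values are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_SCATTERERS = 5000
RANGE_MIN, RANGE_MAX = 1.0, 6.0   # metres covered by one sonar beam
RANGE_RES = 0.02                  # assumed range-cell size, metres

n_cells = int((RANGE_MAX - RANGE_MIN) / RANGE_RES)
beam = np.zeros(n_cells, dtype=complex)

ranges = rng.uniform(RANGE_MIN, RANGE_MAX, N_SCATTERERS)
amps = rng.rayleigh(1.0, N_SCATTERERS)              # scattering strengths
phases = rng.uniform(0.0, 2.0 * np.pi, N_SCATTERERS)

for r, a, phi in zip(ranges, amps, phases):
    cell = min(int((r - RANGE_MIN) / RANGE_RES), n_cells - 1)
    # Two-way spherical spreading ~ 1/r^2; coherent (complex) accumulation.
    beam[cell] += (a / r**2) * np.exp(1j * phi)

intensity = np.abs(beam) ** 2     # one beam's range profile, with speckle
```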

Origami has been a source of inspiration for the design of robots because it can be easily produced using 2D materials and its motions can be well quantified. However, most applications to date have utilised origami patterns for thin sheet materials with a negligible thickness. If the thickness of the material cannot be neglected, commonly known as the thick panel origami, the creases need to be redesigned. One approach is to place creases either on top or bottom surfaces of a sheet of finite thickness. As a result, spherical linkages in the zero-thickness origami are replaced by spatial linkages in the thick panel one, leading to a reduction in the overall degrees of freedom (DOFs). For instance, a waterbomb pattern for a zero-thickness sheet shows multiple DOFs while its thick panel counterpart has only one DOF, which significantly reduces the complexity of motion control. In this article, we present a robotic gripper derived from a unit that is based on the thick panel six-crease waterbomb origami. Four such units complete the gripper. Kinematically, each unit is a plane-symmetric Bricard linkage, and the gripper can be modelled as an assembly of Bricard linkages, giving it single mobility. A gripper prototype was made using 3D printing technology, and its motion was controlled by a set of tendons tied to a single motor. Detailed kinematic modelling was done, and experiments were carried out to characterise the gripper’s behaviours. The positions of the tips on the gripper, the actuation force on tendons, and the grasping force generated on objects were analysed and measured. The experimental results matched well with the analytical ones, and the repeated tests demonstrate that the concept is viable. Furthermore, we observed that the gripper was also capable of grasping non-symmetrical objects, and such performance is discussed in detail in the paper.
