Feed aggregator

Recognizing the actions, plans, and goals of a person in an unconstrained environment is a key capability that future robotic systems will need in order to achieve natural human-machine interaction. Indeed, we humans constantly understand and predict the actions and goals of others, which allows us to interact in intuitive and safe ways. While action and plan recognition are tasks that humans perform naturally and with little effort, they remain unresolved problems from the point of view of artificial intelligence. The immense variety of possible actions and plans that may be encountered in an unconstrained environment leaves current approaches far from human-like performance. In addition, while very different types of algorithms have been proposed to tackle activity, plan, and goal (intention) recognition, these tend to focus on only one part of the problem (e.g., action recognition), and techniques that address the problem as a whole have not been thoroughly explored. This review is meant to provide a general view of the problem of activity, plan, and goal recognition as a whole. It describes the problem from both the human and the computational perspectives, and proposes a classification of the main types of approaches that have been put forward to address it (logic-based, classical machine learning, deep learning, and brain-inspired), together with a description and comparison of these classes. This general view of the problem can help identify research gaps, and may also provide inspiration for the development of new approaches that address the problem in a unified way.

Phase-change material–elastomer composite (PCMEC) actuators are composed of a soft elastomer matrix embedding a phase-change fluid, typically ethanol, in microbubbles. When the temperature increases, the phase change in each bubble induces a macroscopic expansion of the matrix. This class of actuators is promising for soft robotic applications because of their high energy density and actuation strain, and their low cost and easy manufacturing. However, several limitations must be addressed, such as the high actuation temperature and slow actuation speed. Moreover, the lack of a consistent design approach limits the possibility of building PCMEC-based soft robots able to achieve complex tasks. In this work, a new approach to manufacturing PCMEC actuators with different fluid–elastomer combinations without altering the quality of the samples is proposed. The influence of the phase-change fluid and the elastomer on free elongation and bending is investigated. We demonstrate that choosing an appropriate fluid increases the actuation strain and speed, and decreases the actuation temperature compared with ethanol, allowing PCMECs to be used in close contact with the human body. Similarly, by using different elastomer materials, the actuator stiffness can be modified, and the experimental results show that the curvature is roughly proportional to the inverse of the Young's modulus of the pure matrix. To demonstrate the potential of the optimized PCMECs, a kirigami-inspired voxel-based design approach is proposed. PCMEC cubes are molded and reinforced externally with paper. Cuts in the paper induce anisotropy in the structure. Elementary voxels deforming according to the basic kinematics (bending, torsion, elongation, compression, and shear) are presented. The combination of these voxels into modular and reconfigurable structures could open new possibilities towards the design of flexible robots able to perform complex tasks.

Frames—discursive structures that make dimensions of a situation more or less salient—are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents—especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android’s (im)moral behavior, and experimentally testing how produced frames prime judgments about an android’s morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot’s morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.

Soft pneumatic actuators have been explored for endoscopic applications, but challenges remain in fabricating complex geometry with desirable dimensions and compliance. The addition of an endoscopic camera or tool channel is generally not possible without a significant change in the diameter of the actuator, and radial expansion and ballooning of actuator walls during bending are undesirable for endoscopic applications. The inclusion of strain-limiting methods, such as wound fibre, mesh, or multi-material molding, has been explored, but integrating these design approaches with endoscopic requirements drastically increases fabrication complexity, precluding reliable translation into functional endoscopes. For the first time in soft robotics, we present a multi-channel, single-material elastomeric actuator with a fully corrugated, origami-inspired design, offering specific functionality for endoscopic applications. The features introduced in this design include i) fabrication of a multi-channel monolithic structure of 8.5 mm diameter, ii) incorporation of the benefits of a corrugated design in a single material (i.e., limited radial expansion and improved bending efficiency), iii) design scalability (length and diameter), and iv) incorporation of a central hollow channel for the inclusion of an endoscopic camera. Two variants of the actuator are fabricated with different corrugated (origami) lengths, i.e., 30 mm and 40 mm, respectively. Each of the three actuator channels is evaluated under varying volumetric (0.5 ml/s and 1.5 ml/s feed rates) and pressurized control to achieve a similar bending profile with a maximum bending angle of 150°. With the intended use in single-use upper gastrointestinal endoscopy, it is desirable for soft pneumatic actuators to have linear relationships between actuation and angular position, with a high bending response at low pressures; this is where the origami actuator makes its contribution.
The soft pneumatic actuator has been demonstrated to achieve a maximum bending angle of 200° when integrated with a manually driven endoscope. The simple three-step fabrication technique produces a complex origami pattern in a soft robotic structure, which promotes low-pressure bending through the opening of the corrugation while retaining the small diameter and central lumen required for successful endoscope integration.

This paper presents an intraoperative MRI-guided, patient-mounted robotic system for shoulder arthrography procedures in pediatric patients. The robot is designed to be compact and lightweight and is constructed with nonmagnetic materials for MRI safety. Our goal is to transform the current two-step arthrography procedure (CT/x-ray-guided needle insertion followed by diagnostic MRI) into a streamlined single-step ionizing-radiation-free procedure under MRI guidance. The MR-conditional robot was evaluated in a Thiel embalmed cadaver study and healthy volunteer studies. The robot was attached to the shoulder using straps, and ten locations in the shoulder joint space were selected as targets. For the first target, contrast agent (saline) was injected to complete the clinical workflow. After each targeting attempt, a confirmation scan was acquired to analyze the needle placement accuracy. During the volunteer studies, a more comfortable and ergonomic shoulder brace was used, and the complete clinical workflow was followed to measure the total procedure time. In the cadaver study, the needle was successfully placed in the shoulder joint space in all the targeting attempts, with translational and rotational accuracy of 2.07 ± 1.22 mm and 1.46 ± 1.06 degrees, respectively. The total time for the entire procedure was 94 min and the average time for each targeting attempt was 20 min in the cadaver study, while the average time for the entire workflow in the volunteer studies was 36 min. No image quality degradation due to the presence of the robot was detected. This Thiel-embalmed cadaver study, along with the clinical workflow studies on human volunteers, demonstrated the feasibility of using an MR-conditional, patient-mounted robotic system for MRI-guided shoulder arthrography procedures. Future work will focus on moving the technology into clinical practice.
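The reported accuracy figures are standard pose-error metrics. As a minimal numpy sketch, translational and rotational errors of this kind can be computed from planned versus confirmed needle poses as follows (function names and example values are illustrative, not from the paper):

```python
import numpy as np

def translational_error(p_target, p_actual):
    """Euclidean distance between planned and confirmed needle tip positions (mm)."""
    return float(np.linalg.norm(np.asarray(p_actual, float) - np.asarray(p_target, float)))

def rotational_error(d_target, d_actual):
    """Angle (degrees) between planned and confirmed needle axis directions."""
    a = np.asarray(d_target, float); a = a / np.linalg.norm(a)
    b = np.asarray(d_actual, float); b = b / np.linalg.norm(b)
    return float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))

# Example: a 2 mm lateral offset and a 2-degree tilt about the y-axis
print(translational_error([0, 0, 0], [2.0, 0, 0]))  # 2.0
theta = np.radians(2.0)
print(round(rotational_error([0, 0, 1], [np.sin(theta), 0, np.cos(theta)]), 3))  # 2.0
```

Averaging these per-attempt errors over all targeting attempts yields summary statistics of the form reported above (mean ± standard deviation).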

New technology is of little use if it is not adopted, and surveys show that less than 10% of firms use Artificial Intelligence. This paper studies the uptake of AI-driven automation and its impact on employment, using a dynamic agent-based model (ABM). It simulates the adoption of automation software as well as job destruction and job creation in its wake. There are two types of agents: manufacturing firms and engineering services firms. The agents choose between two business models: consulting or automated software. From the engineering firms' point of view, the model exhibits static economies of scale in the software model and dynamic (learning-by-doing) economies of scale in the consultancy model. From the manufacturing firms' point of view, switching to the software model requires restructuring of production, and there are network effects in switching. The ABM matches engineering and manufacturing agents and derives employment of engineers and the tasks they perform, i.e., consultancy, software development, software maintenance, or employment in manufacturing. We find that the uptake of software is gradual: slow in the first few years, then accelerating. Software is fully adopted after about 18 years in the baseline run. Employment of engineers shifts from consultancy to software development and to new jobs in manufacturing. Spells of unemployment may occur if skilled job creation in manufacturing is slow. Finally, the model generates boom and bust cycles in the software sector.
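The gradual-then-accelerating uptake the model produces is characteristic of threshold/contagion dynamics. The toy simulation below is not the authors' ABM; it is a stand-in illustrating how a small baseline adoption probability plus a network-effect term in the share of prior adopters yields an S-shaped diffusion curve (all parameter names and values are invented):

```python
import random

def simulate_adoption(n_firms=100, years=30, base_prob=0.02, network_weight=0.25, seed=1):
    """Toy contagion model: each year a non-adopting firm adopts automation
    software with probability base_prob + network_weight * (share already adopted),
    a crude stand-in for the network effects in switching."""
    random.seed(seed)
    adopted = [False] * n_firms
    shares = []
    for _ in range(years):
        share = sum(adopted) / n_firms
        for i in range(n_firms):
            if not adopted[i] and random.random() < base_prob + network_weight * share:
                adopted[i] = True
        shares.append(sum(adopted) / n_firms)
    return shares

shares = simulate_adoption()
print([round(s, 2) for s in shares[::5]])  # slow start, then acceleration
```

Because adoption is absorbing, the curve is monotone; the acceleration comes entirely from the network term, which is the qualitative mechanism behind the "slow in the first few years, then accelerates" finding.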

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks, oftentimes for many patients and with little time to spare. Personal care robots that brush your hair could provide substantial help and relief.

While the hardware setup looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team's approach examined entangled soft fiber bundles as sets of entwined double helices (think classic DNA strands). This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems.

[ MIT CSAIL ]

Sometimes the CIA needs to get creative when collecting intelligence. Charlie, for instance, is a robotic catfish that collects water samples. While never used operationally, the unmanned underwater vehicle (UUV) fish was created to study aquatic robot technology.

[ CIA ]

It's really just a giant drone, even if it happens to be powered by explosions.

[ SpaceX ]

Somatic's robot will clean your bathrooms for 40 hours a week and will cost you just $1,000 a month. It looks like it works quite well, as long as your bathrooms are the normal level of gross as opposed to, you know, super gross.

[ Somatic ]

NASA’s Ingenuity Mars Helicopter successfully completed a fourth, more challenging flight on the Red Planet on April 30, 2021. Flight Test No. 4 aimed for a longer flight time, longer distance, and more image capturing to begin to demonstrate its ability to serve as a scout on Mars. Ingenuity climbed to an altitude of 16 feet (5 meters) before flying south and back for an 872-foot (266-meter) round trip. In total, Ingenuity was in the air for 117 seconds, another set of records for the helicopter.

[ Ingenuity ]

The Perseverance rover is all new and shiny, but let's not forget about Curiosity, still hard at work over in Gale crater.

NASA’s Curiosity Mars rover took this 360-degree panorama while atop “Mont Mercou,” a rock formation that offered a view into Gale Crater below. The panorama is stitched together from 132 individual images taken on April 15, 2021, the 3,090th Martian day, or sol, of the mission. The panorama has been white-balanced so that the colors of the rock materials resemble how they would appear under daytime lighting conditions on Earth. Images of the sky and rover hardware were not included in this terrain mosaic.

[ MSL ]

Happy Star Wars Day from Quanser!

[ Quanser ]

Thanks Arman!

Lingkang Zhang's 12 DOF Raspberry Pi-powered quadruped robot, Yuki Mini, is complete!

Adorable, right? It runs ROS and the hardware is open source as well.

[ Yuki Mini ]

Thanks Lingkang!

Honda and AutoX have been operating a fully autonomous, no-safety-driver taxi service in China for a couple of months now.

If you thought SF was hard, well, I feel like this is even harder.

[ AutoX ]

This is the kind of drone delivery that I can get behind.

[ WeRobotics ]

The Horizon 2020 EU-funded PRO-ACT project aims to develop and demonstrate cooperation and manipulation capabilities between three robots for assembling an in-situ resource utilisation (ISRU) plant. PRO-ACT will show how robot working agents, or RWAs, can work together collaboratively to achieve a common goal.

[ Pro-Act ]

Thanks Fan!

This brief quadruped simulation video, from Jerry Pratt at IHMC, dates back to 2003 (!).

[ IHMC ]

Extend Robotics' vision is to extend human capability beyond physical presence. We build affordable robotic arms capable of remote operation from anywhere in the world, using cloud-based teleoperation software.

[ Extend Robotics ]

Meet Maria Vittoria Minniti, robotics engineer and PhD student at NCCR Digital Fabrication and ETH Zurich. Maria Vittoria makes it possible for simple robots to do complicated things.

[ NCCR Women ]

Thanks Fan!

iCub has been around for 10 years now, and it's almost like it hasn't gotten any taller! This IFRR Robotics Global Colloquium celebrates the past decade of iCub.

[ iCub ]

This CMU RI Seminar is by Cynthia Sung from UPenn, on Dynamical Robots via Origami-Inspired Design.

Origami-inspired engineering produces structures with high strength-to-weight ratios and simultaneously lower manufacturing complexity. This reliable, customizable, cheap fabrication and component assembly technology is ideal for robotics applications in remote, rapid deployment scenarios that require platforms to be quickly produced, reconfigured, and deployed. Unfortunately, most examples of folded robots are appropriate only for small-scale, low-load applications. In this talk, I will discuss efforts in my group to expand origami-inspired engineering to robots with the ability to withstand and exert large loads and to execute dynamic behaviors.

[ CMU RI ]

How can feminist methodologies and approaches be applied and be transformative when developing AI and ADM systems? How can AI innovation and social systems innovation be catalyzed concomitantly to create a positive movement for social change larger than the sum of the data science or social science parts? How can we produce actionable research that will lead to the profound changes needed—from scratch—in the processes to produce AI? In this seminar, 2020 CCSRE Race and Technology Practitioner Fellow Renata Avila discusses ideas and experiences from different disciplines that could help draft a blueprint for a better modeled digital future.

[ CMU RI ]

Freezing of gait (FoG) is a movement disorder that mostly appears in the late stages of Parkinson's disease (PD). It causes an inability to walk despite the patient's intention, resulting in a loss of coordination that increases the risk of falls and injuries and severely affects the patient's quality of life. Stress, emotional stimuli, and multitasking have been found to be associated with the appearance of FoG episodes, while the patient's functionality and self-confidence constantly deteriorate. This study proposes a non-invasive detection of FoG episodes, by analyzing inertial measurement unit (IMU) data, towards a real-time intervention via rhythmic auditory stimulation (RAS) and hand vibration. Specifically, accelerometer and gyroscope data from 11 PD subjects, captured from a single wrist-worn IMU sensor during continuous walking, are processed via deep learning for window-based detection of FoG events. The proposed approach, namely DeepFoG, was evaluated under leave-one-subject-out (LOSO) cross-validation (CV) and 10-fold CV schemes for its ability to correctly estimate the presence or absence of a FoG episode in each data window. Experimental results show that DeepFoG performs satisfactorily, achieving 83%/88% and 86%/90% sensitivity/specificity for the LOSO CV and 10-fold CV schemes, respectively. The promising performance of DeepFoG reveals the potential of single-arm IMU-based real-time FoG detection that could guide effective interventions via stimuli such as RAS and hand vibration. In this way, DeepFoG scaffolds the elimination of fall risk in PD patients, sustaining their quality of life in everyday living activities.
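The core preprocessing in window-based IMU detection of this kind is segmenting the continuous 6-channel stream into overlapping windows and splitting folds by subject. A minimal sketch of both steps follows (window and hop sizes are illustrative, not DeepFoG's actual parameters):

```python
import numpy as np

def sliding_windows(signal, win, hop):
    """Segment a (T, channels) IMU recording into overlapping windows of
    shape (n_windows, win, channels); each window gets one FoG/no-FoG label."""
    return np.stack([signal[s:s + win]
                     for s in range(0, len(signal) - win + 1, hop)])

def loso_splits(subject_ids):
    """Leave-one-subject-out CV: yield (held-out subject, train indices, test
    indices) so no subject's windows appear in both train and test."""
    subject_ids = np.asarray(subject_ids)
    for subj in np.unique(subject_ids):
        yield subj, np.where(subject_ids != subj)[0], np.where(subject_ids == subj)[0]

# 6-channel stream (3-axis accel + 3-axis gyro), 2 s windows at 64 Hz, 50% overlap
x = np.random.randn(1000, 6)
w = sliding_windows(x, win=128, hop=64)
print(w.shape)  # (14, 128, 6)
```

Each window would then be fed to the classifier; the LOSO split is what makes the reported sensitivity/specificity an estimate of performance on unseen patients.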

Healthcare workers face a high risk of contagion during a pandemic due to their close proximity to patients. The situation is further exacerbated in the case of a shortage of personal protective equipment that can increase the risk of exposure for the healthcare workers and even non-pandemic related patients, such as those on dialysis. In this study, we propose an emergency, non-invasive remote monitoring and control response system to retrofit dialysis machines with robotic manipulators for safely supporting the treatment of patients with acute kidney disease. Specifically, as a proof-of-concept, we mock-up the touchscreen instrument control panel of a dialysis machine and live-stream it to a remote user’s tablet computer device. Then, the user performs touch-based interactions on the tablet device to send commands to the robot to manipulate the instrument controls on the touchscreen of the dialysis machine. To evaluate the performance of the proposed system, we conduct an accuracy test. Moreover, we perform qualitative user studies using two modes of interaction with the designed system to measure the user task load and system usability and to obtain user feedback. The two modes of interaction included a touch-based interaction using a tablet device and a click-based interaction using a computer. The results indicate no statistically significant difference in the relatively low task load experienced by the users for both modes of interaction. Moreover, the system usability survey results reveal no statistically significant difference in the user experience for both modes of interaction except that users experienced a more consistent performance with the click-based interaction vs. the touch-based interaction. Based on the user feedback, we suggest an improvement to the proposed system and illustrate an implementation that corrects the distorted perception of the instrumentation control panel live-stream for a better and consistent user experience.
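Mapping a touch on the tablet's live-stream back to a point on the instrument panel is, at its simplest, a calibrated planar transform. The paper does not spell out its calibration; the sketch below shows one plausible approach, a least-squares affine fit from a few known correspondences (all coordinates and names are illustrative):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine map from tablet-pixel points to panel coordinates,
    calibrated from >= 3 known correspondences (e.g., touching panel corners)."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous inputs
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine parameters
    return lambda p: np.append(np.asarray(p, float), 1.0) @ M

# Calibrate from three corners: tablet pixels -> panel millimetres
to_panel = fit_affine([(0, 0), (1920, 0), (0, 1080)],
                      [(0.0, 0.0), (300.0, 0.0), (0.0, 170.0)])
print(np.round(to_panel((960, 540)), 1))  # [150.  85.]
```

A full homography (rather than an affine map) would additionally correct the perspective distortion mentioned in the abstract's final sentence.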

We present two frameworks for design optimization of a multi-chamber pneumatic-driven soft actuator to improve its mechanical performance. The design goal is to achieve maximal horizontal motion of the top surface of the actuator with a minimum effect on its vertical motion. The parametric shape and layout of air chambers are optimized individually with the firefly algorithm and a deep reinforcement learning approach using both a model-based formulation and finite element analysis. The presented modeling approach extends the analytical formulations for tapered and thickened cantilever beams connected in a structure with virtual spring elements. The deep reinforcement learning-based approach is combined with both the model- and finite element-based environments to fully explore the design space and for comparison and cross-validation purposes. The two-chamber soft actuator was specifically designed to be integrated as a modular element into a soft robotic pad system used for pressure injury prevention, where local control of planar displacements can be advantageous to mitigate the risk of pressure injuries and blisters by minimizing shear forces at the skin-pad contact. A comparison of the results shows that the design achieved using the deep reinforcement learning-based approach best decouples the horizontal and vertical motions, while producing the necessary displacement for the intended application. The optimization results were compared computationally and experimentally with the empirically obtained design from the existing literature to validate the optimized design and methodology.
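For readers unfamiliar with the firefly algorithm used for the shape and layout optimization, here is a compact generic implementation on a toy objective (the parameters and the objective are illustrative stand-ins, not the actuator model):

```python
import numpy as np

def firefly_minimize(f, bounds, n=15, iters=60, beta0=1.0, gamma=0.1, alpha=0.2, seed=0):
    """Minimal firefly algorithm: each firefly moves toward every brighter
    (lower-cost) one, with attractiveness decaying in squared distance and a
    decaying random-walk term alpha."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n, lo.size))
    cost = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, lo.size)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
        alpha *= 0.97  # anneal the random walk
    best = int(np.argmin(cost))
    return x[best], float(cost[best])

# Toy stand-in objective with optimum at (1, 0), analogous to "maximize horizontal
# motion while penalizing vertical motion"
x_best, c_best = firefly_minimize(lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2,
                                  bounds=([-2, -2], [2, 2]))
print(np.round(x_best, 2), round(c_best, 3))
```

In the paper's setting, `f` would be the model-based or finite-element evaluation of a candidate chamber shape/layout, which is why a derivative-free method like this is attractive.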

In daily life, there are a variety of complex sound sources. It is important to effectively detect certain sounds in some situations. With the outbreak of COVID-19, it is necessary to distinguish the sound of coughing, to estimate suspected patients in the population. In this paper, we propose a method for cough recognition based on a Mel-spectrogram and a Convolutional Neural Network called the Cough Recognition Network (CRN), which can effectively distinguish cough sounds.
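The Mel-spectrogram input to a network like CRN can be computed with the standard textbook recipe: STFT power, a triangular mel filterbank, then a log. The sketch below uses only numpy, with illustrative parameter values (the paper's actual settings are not given here):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale (common textbook recipe)."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Log-mel spectrogram: windowed STFT power -> mel filterbank -> log.
    This 2-D array is what a CNN classifier would consume."""
    frames = np.stack([x[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(x) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(mel_filterbank(n_mels, n_fft, sr) @ power.T + 1e-10)

x = np.random.randn(16000)  # 1 s of audio at 16 kHz (noise, as a stand-in)
print(mel_spectrogram(x).shape)  # (40, 61)
```

The resulting time-frequency image is treated like any other image input, which is what makes a convolutional architecture a natural fit for cough detection.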

Biometric security applications have been employed to provide higher security in several access control systems during the past few years. The handwritten signature is the most widely accepted behavioral biometric trait for authenticating documents like letters, contracts, wills, MOUs, etc. for validation in day-to-day life. In this paper, a novel algorithm to detect the gender of individuals based on images of their handwritten signatures is proposed. The proposed work is based on the fusion of textural and statistical features extracted from the signature images. The LBP and HOG features represent the texture. The writer's gender classification is carried out using machine learning techniques. The proposed technique is evaluated on our own dataset of 4,790 signatures and achieves encouraging accuracies of 96.17%, 98.72%, and 100% for k-NN, decision tree, and Support Vector Machine classifiers, respectively. The proposed method is expected to be useful in the design of efficient computer vision tools for authentication and forensic investigation of documents with handwritten signatures.
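The LBP texture descriptor mentioned above can be sketched compactly. The following is the basic 8-neighbour variant with a normalized code histogram as the feature vector (illustrative only; the paper's exact LBP configuration and its fusion with HOG are not reproduced here):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each interior pixel becomes an
    8-bit code from thresholding its neighbours against the centre value."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes: the texture feature that would be
    concatenated (fused) with HOG features before classification."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

img = np.random.randint(0, 256, size=(64, 64))
feat = lbp_histogram(img)
print(feat.shape, round(float(feat.sum()), 6))  # (256,) 1.0
```

Concatenating this 256-bin vector with a HOG descriptor gives the fused feature representation on which the k-NN, decision tree, and SVM classifiers operate.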

Modern scenarios in robotics involve human-robot collaboration or robot-robot cooperation in unstructured environments. In human-robot collaboration, the objective is to relieve humans from repetitive and wearing tasks. This is the case of a retail store, where the robot could help a clerk to refill a shelf or an elderly customer to pick an item from an uncomfortable location. In robot-robot cooperation, automated logistics scenarios, such as warehouses, distribution centers and supermarkets, often require repetitive and sequential pick and place tasks that can be executed more efficiently by exchanging objects between robots, provided that they are endowed with object handover ability. Use of a robot for passing objects is justified only if the handover operation is sufficiently intuitive for the involved humans, fluid and natural, with a speed comparable to that typical of a human-human object exchange. The approach proposed in this paper strongly relies on visual and haptic perception combined with suitable algorithms for controlling both robot motion, to allow the robot to adapt to human behavior, and grip force, to ensure a safe handover. The control strategy combines model-based reactive control methods with an event-driven state machine encoding a human-inspired behavior during a handover task, which involves both linear and torsional loads, without requiring explicit learning from human demonstration. Experiments in a supermarket-like environment with humans and robots communicating only through haptic cues demonstrate the relevance of force/tactile feedback in accomplishing handover operations in a collaborative task.
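The event-driven state machine encoding the handover behavior can be illustrated with a toy robot-to-human handover FSM driven by haptic cues. The states, thresholds, and grip ramp below are invented for illustration; the paper's controller combines this kind of logic with model-based reactive motion and grip-force control:

```python
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()   # moving object toward the human
    CONTACT = auto()    # human touch detected on the object
    TRANSFER = auto()   # human pulling; robot ramps grip force down
    RELEASED = auto()   # gripper open, handover complete

class HandoverFSM:
    """Toy event-driven handover state machine triggered by haptic cues
    (all thresholds are made-up values, in newtons)."""
    def __init__(self, pull_threshold=2.0, release_force=0.1):
        self.phase = Phase.APPROACH
        self.pull_threshold = pull_threshold  # external pull meaning "human has it"
        self.release_force = release_force    # grip force below which we open
        self.grip_force = 5.0                 # current grip command

    def step(self, contact_detected, pull_force):
        if self.phase is Phase.APPROACH and contact_detected:
            self.phase = Phase.CONTACT
        elif self.phase is Phase.CONTACT and pull_force > self.pull_threshold:
            self.phase = Phase.TRANSFER
        elif self.phase is Phase.TRANSFER:
            self.grip_force = max(0.0, self.grip_force - 1.0)  # ramp grip down
            if self.grip_force <= self.release_force:
                self.phase = Phase.RELEASED
        return self.phase

fsm = HandoverFSM()
fsm.step(contact_detected=True, pull_force=0.0)  # human touches the object
fsm.step(contact_detected=True, pull_force=3.0)  # human pulls: start transfer
while fsm.phase is not Phase.RELEASED:
    fsm.step(contact_detected=True, pull_force=3.0)
print(fsm.phase)  # Phase.RELEASED
```

The key property, mirrored from the paper's description, is that transitions are driven purely by sensed haptic events rather than by explicit verbal or visual signaling, which is what allows the exchange to feel fluid.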

From what I’ve seen of humanoid robotics, there’s a fairly substantial divide between what folks in the research space traditionally call robotics, and something like animatronics, which tends to be much more character-driven.

There’s plenty of technology embodied in animatronic robotics, but usually under some fairly significant constraints—like, they’re not autonomously interactive, or they’re stapled to the floor and tethered for power, things like that. And there are reasons for doing it this way: namely, dynamic untethered humanoid robots are already super hard, so why would anyone stress themselves out even more by trying to make them into an interactive character at the same time? That would be crazy!

At Walt Disney Imagineering, which is apparently full of crazy people, they’ve spent the last three years working on Project Kiwi: a dynamic untethered humanoid robot that’s an interactive character at the same time. We asked them (among other things) just how they managed to stuff all of the stuff they needed to stuff into that costume, and how they expect to enable children (of all ages) to interact with the robot safely.

Project Kiwi is an untethered bipedal humanoid robot that Disney Imagineering designed not just to walk without falling over, but to walk without falling over with some character. At about 0.75 meters tall, Kiwi is a bit bigger than a NAO and a bit smaller than an iCub, and it’s just about completely self-contained, with the tether you see in the video being used for control rather than for power. Kiwi can manage 45 minutes of operating time, which is pretty impressive considering its size and the fact that it incorporates a staggering 50 degrees of freedom, a requirement for lifelike motion.

This version of the robot is just a prototype, and it sounds like there’s plenty to do in terms of hardware optimization to improve efficiency and add sensing and interactivity. The most surprising thing to me is that this is not a stage robot: Disney does plan to have some future version of Kiwi wandering around and interacting directly with park guests, and I’m sure you can imagine how that’s likely to go. Interaction at this level, where there’s a substantial risk of small children tackling your robot with a vicious high-speed hug, could be a uniquely Disney problem for a robot with this level of sophistication. And it’s one of the reasons they needed to build their own robot—when Universal Studios decided to try out a Steampunk Spot, for example, they had to put a fence plus a row of potted plants between it and any potential hugs, because Spot is very much not a hug-safe robot.  

So how the heck do you design a humanoid robot from scratch with personality and safe human interaction in mind? We asked Scott LaValley, Project Kiwi lead, who came to Disney Imagineering by way of Boston Dynamics and some of our favorite robots ever (including RHex, PETMAN, and Atlas), to explain how they pulled it off.

IEEE Spectrum: What are some of the constraints of Disney’s use case that meant you had to develop your own platform from the ground up?

Scott LaValley: First and foremost, we had to consider the packaging constraints. Our robot was always intended to serve as a bipedal character platform capable of taking on the role of a variety of our small-size characters. While we can sometimes take artistic liberties, for the most part, the electromechanical design had to fit within a minimal character profile to allow the robot to be fully themed with shells, skin, and costuming. When determining the scope of the project, a high-performance biped that matched our size constraints just did not exist. 

Equally important was the ability to move with style and personality, or the "emotion of motion." To really capture a specific character performance, a robotic platform must be capable of motions that range from fast and expressive to extremely slow and nuanced. In our case, this required developing custom high-speed actuators with the necessary torque density to be packaged into the mechanical structure. Each actuator is also equipped with a mechanical clutch and inline torque sensor to support low-stiffness control for compliant interactions and reduced vibration. 

Designing custom hardware also allowed us to include additional joints that are uncommon in humanoid robots. For example, the clavicle and shoulder alone include five degrees of freedom to support a shrug function and an extended configuration space for more natural gestures. We were also able to integrate onboard computing to support interactive behaviors.

What compromises were required to make sure that your robot was not only functional, but also capable of becoming an expressive character?

As mentioned previously, we face serious challenges in terms of packaging and component selection due to the small size and character profile. This has led to a few compromises on the design side. For example, we currently rely on rigid-flex circuit boards to fit our electronics onto the available surface area of our parts without additional cables or connectors. Unfortunately, these boards are harder to design and manufacture than standard rigid boards, increasing complexity, cost, and build time. We might also consider increasing the size of the hip and knee actuators if they no longer needed to fit within a themed costume.

Designing a reliable walking robot is in itself a significant challenge, but adding style and personality to each motion is a new layer of complexity. From a software perspective, we spend a significant amount of time developing motion planning and animation tools that allow animators to author stylized gaits, gestures, and expressions for physical characters. Unfortunately, unlike on-screen characters, we do not have the option to bend the laws of physics and must validate each motion through simulation. As a result, we are currently limited to stylized walking and dancing on mostly flat ground, but we hope to be skipping up stairs in the future!

Of course, there is always more that can be done to better match the performance you would expect from a character. We are excited about some things we have in the pipeline, including a next generation lower body and an improved locomotion planner.

How are you going to make this robot safe for guests to be around?

First let us say, we take safety extremely seriously, and it is a top priority for any Disney experience. Ultimately, we do intend to allow interactions with guests of all ages, but it will take a measured process to get there. Proper safety evaluation is a big part of productizing any Research & Development project, and we plan to conduct playtests with our Imagineers, cast members and guests along the way. Their feedback will help determine exactly what an experience with a robotic character will look like once implemented.

From a design standpoint, we believe that small characters are the safest type of biped for human-robot interaction due to their reduced weight and low center of mass. We are also employing compliant control strategies to ensure that the robot’s actuators are torque-limited and backdrivable. Perception and behavior design may also play a key role, but in the end, we will rely on proper show design to permit a safe level of interaction as the technology evolves.
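As a rough illustration of what "torque-limited and backdrivable" means in control terms, a low-stiffness joint impedance law with a hard torque clamp might look like the sketch below. The gains and limits are invented for illustration and are not Disney's values:

```python
def joint_torque(q, dq, q_des, k=5.0, d=0.5, tau_max=2.0):
    """Low-stiffness PD (impedance) law with a hard torque clamp.

    A small stiffness k lets a person push the limb away from its
    setpoint (backdrivability), and the clamp caps how hard the
    actuator can ever push back, regardless of position error.
    """
    tau = k * (q_des - q) - d * dq
    return max(-tau_max, min(tau_max, tau))
```

With these numbers, even a full radian of position error saturates at the 2.0 N·m limit instead of commanding the 5.0 N·m a stiff controller would.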

What do you think other roboticists working on legged systems could learn from Project Kiwi?

We are often inspired by other roboticists working on legged systems ourselves but would be happy to share some lessons learned. Remember that robotics is fundamentally interdisciplinary, and a good team typically consists of a mix of hardware and software engineers in close collaboration. In our experience, however, artists and animators play an equally valuable role in bringing a new vision to life. We often pull in ideas from the character animation and game development world, and while robotic characters are far more constrained than their virtual counterparts, we are solving many of the same problems. Another tip is to leverage motion studies (through animation, motion capture, and/or simulation tools) early in the design process to generate performance-driven requirements for any new robot.

Now that Project Kiwi has de-stealthed, I hope the Disney Imagineering folks will be able to be a little more open with all of the sweet goo inside of the fuzzy skin of this metaphor that has stopped making sense. Meeting a new humanoid robot is always exciting, and the approach here (with its technical capability combined with an emphasis on character and interaction) is totally unique. And if they need anyone to test Kiwi’s huggability, I volunteer! You know, for science.

Current neurorehabilitation models primarily rely on extended hospital stays and regular therapy sessions requiring close physical interactions between rehabilitation professionals and patients. The current COVID-19 pandemic has challenged this model, as strict physical distancing rules and a shift in the allocation of hospital resources resulted in many neurological patients not receiving essential therapy. Accordingly, a recent survey revealed that the majority of European healthcare professionals involved in stroke care are concerned that this lack of care will have a noticeable negative impact on functional outcomes. COVID-19 highlights an urgent need to rethink conventional neurorehabilitation and develop alternative approaches to provide high-quality therapy while minimizing hospital stays and visits. Technology-based solutions, such as robotics, bear high potential to enable such a paradigm shift. While robot-assisted therapy is already established in clinics, the future challenge is to enable physically assisted therapy and assessments in a minimally supervised and decentralized manner, ideally at the patient’s home. Key enablers are new rehabilitation devices that are portable, scalable and equipped with clinical intelligence, remote monitoring and coaching capabilities. In this perspective article, we discuss clinical and technological requirements for the development and deployment of minimally supervised, robot-assisted neurorehabilitation technologies in patients’ homes. We elaborate on key principles to ensure feasibility and acceptance, and on how artificial intelligence can be leveraged for embedding clinical knowledge for safe use and personalized therapy adaptation. Such new models are likely to impact neurorehabilitation beyond COVID-19, by providing broad access to sustained, high-quality and high-dose therapy maximizing long-term functional outcomes.

The COVID-19 pandemic has caused dramatic effects on the healthcare system, businesses, and education. In many countries, businesses were shut down, universities and schools had to cancel in-person classes, and many workers had to work remotely and socially distance in order to prevent the spread of the virus. These measures opened the door for technologies such as robotics and artificial intelligence to play an important role in minimizing the negative effects of such closures. There have been many efforts in the design and development of robotic systems for applications such as disinfection and eldercare. Healthcare education has seen a lot of potential in simulation robots, which offer valuable opportunities for remote learning during the pandemic. However, there are ethical considerations that need to be deliberated in the design and development of such systems. In this paper, we discuss the principles of roboethics and how these can be applied in the new era of COVID-19. We focus on identifying the most relevant ethical principles and apply them to a case study in dentistry education. DenTeach was developed as a portable device that uses sensors and computer simulation to make dental education more efficient. DenTeach makes remote instruction possible by allowing students to learn and practice dental procedures from home. We evaluate DenTeach on the principles of data, common good, and safety, and highlight the importance of roboethics in Canada. The principles identified in this paper can inform researchers and educational institutions considering implementing robots in their curriculum.
