Frontiers in Robotics and AI

To endow robots with the flexibility to perform a wide range of tasks in diverse and complex environments, learning their controllers from experience data is a promising approach. In particular, some recent meta-learning methods have been shown to solve novel tasks by leveraging experience gathered while performing other tasks during training. Although studies on meta-learning for robot control have focused on improving performance, safety, which is also an important consideration for deployment, has not been fully explored. In this paper, we first relate uncertainty in task inference to safety in meta-learning of visual imitation, and then propose a novel framework, called PETNet, for estimating task uncertainty through probabilistic inference in the task-embedding space. We validate PETNet on a manipulation task with a simulated robot arm, evaluating both task performance and uncertainty in task inference. Following the standard benchmark procedure in meta-imitation learning, we show that PETNet achieves the same or a higher level of performance (success rate on novel tasks at meta-test time) as previous methods. In addition, by testing PETNet with semantically inappropriate or synthesized out-of-distribution demonstrations, we show that PETNet captures the uncertainty about the tasks inherent in the given demonstrations, which allows the robot to identify situations in which the controller might not perform properly. These results illustrate that our proposal takes a significant step toward the safe deployment of robot learning systems in diverse tasks and environments.
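As a minimal sketch of the idea (not the authors' implementation; the network sizes, names, and threshold below are assumptions), a probabilistic task encoder can map a demonstration to a Gaussian in the task-embedding space, with the entropy of that Gaussian serving as the task-uncertainty signal used to flag out-of-distribution demonstrations:

```python
import math
import torch
import torch.nn as nn

class ProbabilisticTaskEncoder(nn.Module):
    def __init__(self, demo_dim: int, embed_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(demo_dim, 128), nn.ReLU())
        self.mu_head = nn.Linear(128, embed_dim)      # mean of the task embedding
        self.logvar_head = nn.Linear(128, embed_dim)  # log-variance (uncertainty)

    def forward(self, demo_features):
        h = self.backbone(demo_features)
        return self.mu_head(h), self.logvar_head(h)

def demo_entropy(logvar):
    # Differential entropy of a diagonal Gaussian; it grows with the predicted
    # variance, so unusually high entropy signals an unfamiliar demonstration.
    d = logvar.shape[-1]
    return 0.5 * (d * (1.0 + math.log(2 * math.pi)) + logvar.sum(-1))

encoder = ProbabilisticTaskEncoder(demo_dim=64, embed_dim=8)
mu, logvar = encoder(torch.randn(1, 64))      # one encoded demonstration
flag_ood = demo_entropy(logvar).item() > 2.0  # assumed rejection threshold
```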

The use of a robotic arm manipulator as a platform for coincident radiation mapping and laser profiling of radioactive sources on a flat surface is investigated in this work. A combined scanning head, integrating a micro-gamma spectrometer and a Time of Flight (ToF) sensor, was moved in a raster scan pattern across the surface, undertaken autonomously by the robot arm over a 600 × 260 mm survey area. A series of radioactive sources of different emission intensities were scanned in different configurations to test the accuracy and sensitivity of the system. We demonstrate that in each test configuration the system was able to generate a centimeter-accurate 3D model complete with an overlaid radiation map detailing the emitted radiation intensity and the corrected surface dose rate.
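A minimal sketch of the survey logic (not the paper's implementation): generate a serpentine raster pattern over the 600 × 260 mm area and accumulate ToF height and gamma count-rate readings into two aligned maps. The motion command and sensor reads are hypothetical stubs:

```python
import numpy as np

def raster_waypoints(width_mm=600.0, height_mm=260.0, step_mm=10.0):
    # Serpentine (boustrophedon) pattern: reverse direction on every other row
    # so the arm never makes a long empty return stroke.
    xs = np.arange(0.0, width_mm + step_mm, step_mm)
    ys = np.arange(0.0, height_mm + step_mm, step_mm)
    for j, y in enumerate(ys):
        row = xs if j % 2 == 0 else xs[::-1]
        for x in row:
            yield x, y

height_map, count_map = {}, {}
for x, y in raster_waypoints():
    # move_arm_to(x, y)          # placeholder for the robot-arm motion command
    height_map[(x, y)] = 0.0     # read_tof(): surface distance in mm (stub)
    count_map[(x, y)] = 0.0      # read_gamma(): counts per second (stub)
```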

Recently, with the increasing number of robots entering numerous manufacturing fields, a considerable body of literature has appeared on the theme of physical human-robot interaction using data from proprioceptive sensors (motor and/or load-side encoders). Most of these studies take an accurate dynamic model of the robot for granted. In practice, however, model identification and observer design precede collision detection. To the best of our knowledge, no previous study has systematically investigated each aspect underlying physical human-robot interaction and the relationships between those aspects. In this paper, we bridge this gap by first reviewing the literature on model identification, disturbance estimation, and collision detection, and discussing the relationships among the three, and then by examining the practical side of model-based collision detection in a case study conducted on a UR10e. We show that the model identification step is critical for accurate collision detection, while the choice of the observer should be based mostly on computation time and the simplicity and flexibility of tuning. It is hoped that this study can serve as a roadmap for equipping industrial robots with basic physical human-robot interaction capabilities.
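For the disturbance-estimation step, a common textbook choice in this literature is the generalized-momentum observer. The sketch below (with hypothetical dynamics callbacks standing in for the identified model, and illustrative gains) estimates the external joint torque and thresholds it per joint to declare a collision:

```python
import numpy as np

def make_collision_detector(K_obs, threshold, mass_matrix, coriolis, gravity):
    # mass_matrix(q), coriolis(q, dq), gravity(q) are callbacks supplied by the
    # identified dynamic model (hypothetical stubs here).
    state = {"integral": None, "r": None}

    def step(q, dq, tau_motor, dt):
        p = mass_matrix(q) @ dq                  # generalized momentum M(q) dq
        if state["integral"] is None:            # initialize so r starts at zero
            state["integral"] = p.copy()
            state["r"] = np.zeros_like(p)
        # dp/dt = tau_motor + C(q,dq)^T dq - g(q) + tau_ext, so integrating the
        # modeled terms and comparing with the measured p isolates tau_ext.
        state["integral"] = state["integral"] + (
            tau_motor + coriolis(q, dq).T @ dq - gravity(q) + state["r"]) * dt
        state["r"] = K_obs @ (p - state["integral"])   # external-torque estimate
        collision = bool(np.any(np.abs(state["r"]) > threshold))
        return state["r"], collision

    return step
```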

Tactile sensing is an essential capability for a robot performing manipulation tasks in cluttered environments. While larger areas can be assessed instantly with cameras, Lidars, and other remote sensors, tactile sensors can reduce the measurement uncertainties of those sensors and capture information about the physical interactions between the objects and the robot end-effector that is not accessible remotely. In this paper, we introduce GelTip, a novel tactile sensor that is shaped like a finger and can sense contacts at any location on its surface. This contrasts with other camera-based tactile sensors, which either have only a flat sensing surface or a compliant tip with a limited sensing area; the proposed GelTip sensor can detect contacts from all directions, like a human finger. The sensor uses a camera located at its base to track the deformations of the opaque elastomer that covers its hollow, rigid, and transparent body. Thanks to this design, a gripper equipped with GelTip sensors is capable of simultaneously monitoring contacts happening both inside and outside its grasp closure. Our extensive experiments show that the GelTip sensor can effectively localize these contacts at different locations on the finger body, with a small localization error of approximately 5 mm on average, and under 1 mm in the best cases. Furthermore, our experiments in a Blocks World environment demonstrate the advantages, and possibly the necessity, of leveraging all-around touch sensing in manipulation tasks. In particular, the experiments show that the contacts at different moments of reach-to-grasp movements can be sensed using the novel GelTip sensor.
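A minimal sketch (an assumption-laden stand-in, not the GelTip pipeline) of camera-based contact localization: compare the live frame against a no-contact reference, threshold the difference, and take the centroid of the changed pixels; mapping that pixel to the finger surface would invert the sensor's camera-to-surface projection model, which is left as a placeholder:

```python
import numpy as np

def locate_contact(frame, reference, thresh=25):
    # Pixel-wise difference between the live and no-contact frames reveals
    # where the elastomer has deformed.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    mask = diff.max(axis=-1) > thresh if diff.ndim == 3 else diff > thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                      # no contact detected
    u, v = xs.mean(), ys.mean()          # centroid of deformed region (pixels)
    return pixel_to_surface(u, v)

def pixel_to_surface(u, v):
    # Placeholder: a real implementation would invert the camera model of the
    # cylindrical, dome-tipped finger body to get a 3D surface point.
    return (u, v)
```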

In the context of legged robotics, many criteria based on control of the Center of Mass (CoM) have been developed to ensure stable and safe robot locomotion. Defining a whole-body framework with control of the CoM requires a planning strategy, often based on a specific type of gait, and reliable state estimation. In a whole-body control approach, if the CoM task is not specified, the resulting redundancy can still be resolved by specifying a postural task that sets references for all the joints. The postural task can therefore be exploited to keep a well-behaved, stable kinematic configuration. In this work, we propose a generic locomotion framework that is able to generate different kinds of gaits, ranging from very dynamic gaits such as the trot to more static gaits such as the crawl, without the need to plan the CoM trajectory. Consequently, the whole-body controller becomes planner-free and does not require estimation of the floating-base state, which is often prone to drift. The framework is composed of a priority-based whole-body controller that works in synergy with a walking pattern generator. We show the effectiveness of the framework in simulations on different types of terrain, including rough terrain, using different quadruped platforms.
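A minimal sketch of the classic prioritized redundancy resolution underlying such controllers (illustrative, not the paper's exact formulation): the primary task is tracked through the Jacobian pseudoinverse, while the postural task pulls the joints toward a reference configuration only inside the primary task's null space, so the posture can never disturb the task and no CoM plan is required at this level:

```python
import numpy as np

def prioritized_joint_velocities(J, x_dot_des, q, q_posture, k_post=1.0):
    # J: task Jacobian (m x n); x_dot_des: desired task-space velocity;
    # q_posture: reference joint configuration for the postural task.
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ x_dot_des                 # track the task exactly
    N = np.eye(J.shape[1]) - J_pinv @ J          # null-space projector of J
    posture = k_post * (q_posture - q)           # postural attraction
    return primary + N @ posture                 # posture acts only in the null space
```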

In-hand manipulation and grasp adjustment with dexterous robotic hands is a complex problem that not only requires highly coordinated finger movements but must also deal with interaction variability. The control problem becomes even more complex when tactile information is introduced into the feedback loop. Traditional approaches do not consider tactile feedback and attempt to solve the problem either by relying on complex models that are not always readily available or by constraining the problem to make it more tractable. In this paper, we propose a hierarchical control approach in which a higher-level policy is learned through reinforcement learning, while low-level controllers ensure grip stability throughout the manipulation action. The low-level controllers are independent grip-stabilization controllers based on tactile feedback. These independent controllers allow reinforcement learning approaches to explore the manipulation task's state-action space in a more structured manner. We show that this structure allows learning the unconstrained task with RL methods that cannot learn it in a non-hierarchical setting. The low-level controllers also provide an abstraction of the tactile sensor inputs, allowing transfer to real robot platforms. We show preliminary results on the transfer of policies trained in simulation to a real robot hand.
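A minimal sketch of the hierarchy (illustrative names and setpoints, not the paper's code): the RL policy outputs a high-level manipulation action once per step, while an independent tactile controller keeps fingertip forces near a stability setpoint at a faster inner rate, so the policy never has to micro-manage individual finger forces. `env.read_tactile` and `env.apply` are hypothetical interfaces:

```python
import numpy as np

def grip_stabilizer(tactile, force_setpoint=1.5, gain=0.2):
    # Proportional correction of fingertip normal forces toward a stability
    # setpoint (values illustrative); runs independently of the RL policy.
    return gain * (force_setpoint - np.asarray(tactile))

def hierarchical_step(env, policy, obs, inner_steps=10):
    # The policy picks a high-level action; the tactile loop then stabilizes
    # the grasp at a faster rate while the action is executed.
    action = policy(obs)                       # e.g., desired object-pose change
    for _ in range(inner_steps):
        corrections = grip_stabilizer(env.read_tactile())  # hypothetical API
        obs = env.apply(action, corrections)               # hypothetical API
    return obs
```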

We consider the problem of learning generalized first-order representations of concepts from a small number of examples. We augment an inductive logic programming learner with two novel contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and of generalization. Second, we leverage richer human inputs, in the form of advice, to improve the sample efficiency of learning. We prove that the proposed distance measure is semantically valid and use it to derive a PAC bound. Our experiments on diverse learning tasks demonstrate both the effectiveness and the efficiency of our approach.
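As a simple stand-in for the idea (not the paper's measure), candidate concept representations can be scored by distance so the search prefers candidates close to ones already consistent with the examples; here clauses are treated as sets of first-order literals and compared with a Jaccard distance:

```python
def clause_distance(clause_a: set, clause_b: set) -> float:
    # Jaccard distance over literal sets: 0 for identical clauses, 1 for
    # clauses sharing no literals.
    union = clause_a | clause_b
    if not union:
        return 0.0
    return 1.0 - len(clause_a & clause_b) / len(union)

# Example: two candidate definitions of "grandparent" sharing one literal.
c1 = {"parent(X,Z)", "parent(Z,Y)"}
c2 = {"parent(X,Z)", "father(Z,Y)"}
print(clause_distance(c1, c2))  # 1 - 1/3 ~ 0.667
```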

Human intention detection is fundamental to the control of robotic devices that assist humans according to their needs. This paper presents a novel approach for detecting hand motion intention (i.e., rest, open, close, and grasp) and estimating grasping force using force myography (FMG). The output is further used to control a soft hand exoskeleton called the SEM Glove. In this method, two sensor bands constructed from force sensing resistor (FSR) sensors are used to detect hand motion states and muscle activities. When both bands are placed on an arm, the sensors can measure the normal forces caused by muscle contraction and relaxation. The sensor data are then processed, and hand motions are identified through a threshold-based classification method. The developed method has been tested on human subjects in object-grasping tasks. The results show that it can detect hand motions accurately and provide assistance according to the task requirements.
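A minimal sketch of the threshold-based classification described above: the mean normal force across the FSR band is compared against calibrated thresholds to label the hand state, and a linear mapping gives a grasp-force estimate. The threshold and mapping values are illustrative assumptions, not the study's calibrated ones:

```python
import numpy as np

def classify_hand_state(fsr_values, t_rest=0.5, t_open=1.5, t_close=3.0):
    # Average normal force over the sensor band, compared to per-state
    # thresholds (assumed values; a real system calibrates these per user).
    level = np.mean(fsr_values)
    if level < t_rest:
        return "rest"
    if level < t_open:
        return "open"
    if level < t_close:
        return "close"
    return "grasp"

def estimate_grip_force(fsr_values, scale=0.8, offset=0.1):
    # Assumed linear mapping from the FMG signal to a grasp-force estimate.
    return scale * np.mean(fsr_values) + offset

print(classify_hand_state([2.1, 2.4, 1.9]), estimate_grip_force([2.1, 2.4, 1.9]))
```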

Electro-ribbon actuators are lightweight, flexible, high-performance actuators for next-generation soft robotics. When electrically charged, electrostatic forces cause the electrode ribbons to progressively zip together through a process called dielectrophoretic liquid zipping (DLZ), delivering contractions of more than 99% of their length. Electro-ribbon actuators exhibit pull-in instability, which makes them challenging to control: below the pull-in voltage threshold, actuator contraction is small, while above this threshold, increasing electrostatic forces cause the actuator to contract completely, providing a narrow contraction range for feedforward control. We show that applying a time-varying voltage profile that starts above the pull-in threshold but subsequently decreases gives access to intermediate steady states that are not accessible using traditional feedforward control. A modified proportional-integral closed-loop controller (Boost-PI) is proposed, which incorporates a variable boost voltage that temporarily elevates the actuation voltage close to, but not above, the pull-in threshold. This primes the actuator for zipping and drastically reduces the rise time compared with a traditional PI controller. A multi-objective parameter-space approach was used to choose appropriate controller gains by assessing the metrics of rise time, overshoot, steady-state error, and settling time. The proposed control method addresses a key limitation of electro-ribbon actuators, allowing them to perform staircase and oscillatory control tasks, and significantly increases the range of applications that can exploit this new DLZ actuation technology.
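A minimal sketch of the Boost-PI idea as described (gains, thresholds, and decay rate are illustrative assumptions): a standard PI command is augmented with a temporary boost that lifts the drive voltage close to, but never above, the pull-in threshold, priming the actuator to zip and shortening the rise time:

```python
class BoostPI:
    def __init__(self, kp, ki, v_pullin, boost_margin=0.95, boost_decay=2.0):
        self.kp, self.ki = kp, ki
        self.v_max = boost_margin * v_pullin   # hard cap below pull-in threshold
        self.boost_decay = boost_decay         # how fast the boost dies away (1/s)
        self.integral = 0.0
        self.boost = 0.0

    def reset_boost(self):
        # Call on each new setpoint: start from an elevated voltage to prime
        # zipping, then let the boost decay as the PI terms take over.
        self.boost = self.v_max

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        self.boost *= max(0.0, 1.0 - self.boost_decay * dt)
        v = self.kp * error + self.ki * self.integral + self.boost
        return min(max(v, 0.0), self.v_max)    # never exceed pull-in voltage

# Usage: pi = BoostPI(kp=2.0, ki=1.0, v_pullin=5000.0); pi.reset_boost() on a
# setpoint change, then call pi.step(...) at the control rate.
```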

Extracting significant information from geometrically distorted or transformed images is a mainstream procedure in image processing. It becomes difficult to retrieve the relevant region when images are distorted by some geometric deformation. Hu's moments are helpful in extracting information from such distorted images due to their unique invariance properties. This work focuses on early detection and grading of knee osteoarthritis, utilizing Hu's invariant moments to characterize the geometric transformation of the cartilage region in knee X-ray images. The seven invariant moments are computed for rotated versions of the test image. The results are competitive and promising, and have been validated by orthopedic surgeons and rheumatologists.
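A minimal sketch of the feature-extraction step: OpenCV's `cv2.HuMoments` computes the seven rotation-, translation-, and scale-invariant moments of a segmented region, and the usual log transform compresses their dynamic range. The synthetic ellipse below merely stands in for a segmented cartilage region; reading and segmenting a real X-ray is assumed done upstream:

```python
import cv2
import numpy as np

def hu_features(region):
    # region: 8-bit grayscale image of the segmented region of interest.
    hu = cv2.HuMoments(cv2.moments(region)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # stabilized log scale

img = np.zeros((128, 128), np.uint8)
cv2.ellipse(img, (64, 64), (30, 12), 0, 0, 360, 255, -1)  # stand-in region
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
# Rotation invariance: features of the rotated image should match closely.
print(np.allclose(hu_features(img), hu_features(rotated), atol=1e-2))
```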

Several lower-limb exoskeletons enable wheelchair users to overcome obstacles that would otherwise impair daily activities, such as going upstairs. Still, as most currently commercialized exoskeletons require the use of crutches, they prevent the user from interacting efficiently with the environment. In a previous study, a bio-inspired controller was developed to provide dynamic standing balance for such exoskeletons. It was, however, only tested on the device without any user. This work describes and evaluates a new controller that extends the previous one with online model compensation and with a contribution of the hip joint against strong perturbations. In addition, both controllers are tested on the exoskeleton TWIICE One, worn by a pilot with a complete spinal cord injury. Their performance is compared by means of three tasks: standing quietly, resisting external perturbations, and lifting barbells of increasing weight. The new controller exhibits similar performance in quiet standing, a longer recovery time after dynamic perturbations, but a better ability to sustain prolonged perturbations and a higher weightlifting capability.

Robot-assisted gait training (RAGT) devices are used in rehabilitation to improve patients' walking function. While there are some reports on the adverse events (AEs) and associated risks of overground exoskeletons, the risks of stationary gait trainers cannot yet be accurately assessed. We therefore aimed to collect information on AEs occurring during the use of stationary gait robots and to identify the associated risks, as well as the gaps and needs, for safe use of these devices. We searched both bibliographic and full-text literature databases for peer-reviewed articles describing the outcomes of stationary RAGT and specifically mentioning AEs. We then compiled information on the occurrence and types of AEs and on the quality of AE reporting. On this basis, we analyzed the risks of RAGT in stationary gait robots. We included 50 studies involving 985 subjects and found reports of AEs in 18 of those studies. Many of the AE reports were incomplete or lacked detail on aspects such as severity or patient characteristics, which hinders precise counting of AEs. Over 169 device-related AEs, experienced by between 79 and 124 patients, were reported. Soft-tissue-related AEs occurred most frequently and were reported mostly for end-effector-type devices. Musculoskeletal AEs had the second-highest prevalence and occurred mainly in exoskeleton-type devices. We further identified physiological AEs, including blood-pressure changes, that occurred in both exoskeleton-type and end-effector-type devices. Training in stationary gait robots can cause injuries or discomfort to the skin, underlying tissue, and musculoskeletal system, as well as unwanted blood-pressure changes. The underlying risks for the most prevalent injury types include excessive pressure and shear at the interface between robot and human (cuffs/harness), as well as increased moments and forces applied to the musculoskeletal system, likely caused by misalignments between the joint axes of robot and human. There is a need for more structured and complete recording and dissemination of AEs related to robotic gait training to increase knowledge of the risks. With this information, appropriate mitigation strategies can and should be developed and implemented in RAGT devices to increase their safety.

The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people can no longer detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to investigate how sensitive humans are to behaviors of humans compared with those produced by AI. This paper presents results from a behavioral Turing Test, in which participants interacted with a human, a simple AI, or a "social" AI within a complex videogame environment. Participants (66 in total) played an open-world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min, after which they indicated their beliefs about the agent through three Likert measures of how much they trusted and liked the co-player and the extent to which they perceived it as a "real person," followed by an interview about their overall perception and the cues they used to determine humanness. T-tests, analysis of variance, and Tukey's HSD were used to analyze the quantitative data, and Cohen's kappa and χ2 were used to analyze the interview data. Our results suggest that it was difficult for participants to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data, and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.
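A minimal sketch of the reported analysis pipeline using SciPy, with placeholder arrays standing in for the study's data: a pairwise t-test, one-way ANOVA across the three co-player conditions, Tukey's HSD for post-hoc comparisons (SciPy >= 1.8), and a chi-square test on hypothetical coded interview responses:

```python
import numpy as np
from scipy import stats

# Placeholder Likert-style ratings for the three co-player conditions.
human     = np.array([6, 7, 5, 6, 7, 6])
simple_ai = np.array([3, 4, 4, 3, 5, 4])
social_ai = np.array([5, 6, 6, 5, 6, 5])

t_stat, p_t = stats.ttest_ind(human, social_ai)        # pairwise comparison
f_stat, p_anova = stats.f_oneway(human, simple_ai, social_ai)
tukey = stats.tukey_hsd(human, simple_ai, social_ai)   # post-hoc pairwise tests
# Chi-square on (hypothetical) coded interview counts: cue type x condition.
chi2, p_chi, dof, expected = stats.chi2_contingency([[12, 8], [6, 14]])
print(p_t, p_anova, p_chi)
```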

Wearable robots (WRs) are increasingly moving out of the labs toward real-world applications. For WRs to be effectively and widely adopted by end-users, a common benchmarking framework needs to be established. In this article, we outline the perspectives that, in our opinion, are the main determinants of this endeavor, and organize the complex landscape into three areas. The first perspective relates to quantifying the technical performance of the device and its physical impact on the user. The second refers to understanding the user's perceptual, emotional, and cognitive experience of (and with) the technology. The third proposes a strategic path toward a global benchmarking methodology, composed of reproducible experimental procedures representing real-life conditions. We hope that this paper can enable developers, researchers, clinicians, and end-users to efficiently identify the most promising directions for validating their technology, and that it will drive future research efforts in the short and medium term.

Contemporary research in human-machine symbiosis has mainly concentrated on enhancing relevant sensory, perceptual, and motor capacities, assuming short-term, nearly momentary interaction sessions. Yet human-machine confluence encompasses an inherent temporal dimension that is typically overlooked. The present work shifts the focus to the temporal and long-lasting aspects of symbiotic human-robot interaction (sHRI). We explore the integration of three time-aware modules, each focusing on a different part of the sHRI timeline. Specifically, the Episodic Memory considers past experiences, the Generative Time Models estimate the progress of ongoing activities, and the Daisy Planner devises plans for the timely accomplishment of goals. The integrated system is employed to coordinate the activities of a multi-agent team. Accordingly, the proposed system (i) predicts human preferences based on past experience, (ii) estimates performance profiles and task-completion times by monitoring human activity, and (iii) dynamically adapts multi-agent activity plans to changes in expectations and Human-Robot Interaction (HRI) performance. The system is deployed and extensively assessed in real-world and simulated environments. The obtained results suggest that building on the unfolding and temporal properties of team tasks can significantly enhance the fluency of sHRI.
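A minimal sketch (illustrative data structures, not the system's code) of how an episodic memory can feed time-aware planning: past episodes record who performed which task and how long it took, and the planner queries the mean observed duration to schedule agents:

```python
from collections import defaultdict

episodes = defaultdict(list)   # (agent, task) -> list of observed durations (s)

def record_episode(agent, task, duration_s):
    episodes[(agent, task)].append(duration_s)

def expected_duration(agent, task, default_s=60.0):
    # Fall back to a default estimate when no past experience exists.
    history = episodes[(agent, task)]
    return sum(history) / len(history) if history else default_s

record_episode("human_1", "pack_box", 42.0)
record_episode("human_1", "pack_box", 38.0)
print(expected_duration("human_1", "pack_box"))  # 40.0
```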

In this paper, a new scheme for multi-lateral remote rehabilitation is proposed, in which one therapist, one patient, and several trainees participate in the telerehabilitation (TR) process. This strategy helps the therapist facilitate neurorehabilitation remotely. The patients can thus stay in their homes, resulting in safer and less expensive treatment. Meanwhile, several trainees in medical education centers can be trained by participating partially in the rehabilitation process. The trainees participate in a "hands-on" manner, so they feel as if they are rehabilitating the patient directly. To implement such a scheme, a novel theoretical method is proposed that brings the power of multi-agent systems (MAS) theory into multi-lateral teleoperation, building on the self-intelligence of the MAS. In previous related works, changing the number of participants in multi-lateral teleoperation tasks required redesigning the controllers; in this paper, the combination of decentralized control and the self-intelligence of the MAS avoids the need to redesign the controller in the proposed structure. Moreover, uncertainties in the operators' dynamics, as well as time-varying delays in the communication channels, are taken into account. It is shown that the proposed structure has two tuning matrices (L and D) that can be adapted to different scenarios of multi-lateral teleoperation. By choosing proper tuning matrices, many multi-lateral teleoperation/telerehabilitation schemes from the related literature can be implemented. In the final section of the paper, several scenarios are introduced to achieve "Simultaneous Training and Therapy" in TR and are implemented with the proposed structure. The results confirm the stability and performance of the proposed framework.
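As a loose illustration of the multi-agent view (not the paper's controller), device states of the therapist, patient, and a trainee can be synchronized by a graph-Laplacian consensus rule, where a Laplacian matrix encodes the communication graph and a diagonal weighting matrix scales each participant's authority, roughly mirroring the roles of the tuning matrices L and D named in the abstract:

```python
import numpy as np

L = np.array([[ 2, -1, -1],      # graph Laplacian: therapist linked to both
              [-1,  1,  0],      # patient linked to therapist
              [-1,  0,  1]], dtype=float)  # trainee linked to therapist
D = np.diag([1.0, 1.0, 0.3])     # trainee (3rd agent) given reduced authority

x = np.array([0.0, 0.5, -0.2])   # initial device positions
dt = 0.01
for _ in range(2000):
    x = x - dt * (D @ L @ x)     # consensus dynamics: dx/dt = -D L x
print(x)                          # all agents converge to a common position
```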

It is well-established in the literature that biases (e.g., related to body size, ethnicity, or race) can occur during the employment interview, and that applicants' fairness perceptions of selection procedures can influence attitudes, intentions, and behaviors toward the recruiting organization. This study explores how social robotics may affect this situation. Using an online, video-vignette-based experimental survey (n = 235), the study examines applicant fairness perceptions of two types of job interviews: a face-to-face interview and a robot-mediated interview. To reduce the risk of socially desirable responses, desensitize the topic, and detect any inconsistencies in the respondents' reactions to the vignette scenarios, the study employs both a first-person and a third-person perspective. In the robot-mediated interview, two teleoperated robots are used as fair proxies for the applicant and the interviewer, providing symmetrical visual anonymity, unlike prior research that relied on asymmetrical anonymity, in which only one party was anonymized. This design is intended to eliminate the visual cues that typically cause implicit biases and discrimination against applicants, but also to prevent the interviewer's assessment from being biased by the impression-management tactics typically used by applicants. We hypothesize that fairness perceptions (i.e., procedural and interactional fairness) and behavioral intentions (i.e., intentions of job acceptance, reapplication, and recommendation) will be higher in a robot-mediated job interview than in a face-to-face one, and that this effect will be stronger for introverted applicants. The study shows, contrary to our expectations, that the face-to-face interview is perceived as fairer and that the applicant's personality (introverted vs. extraverted) does not affect this perception. We discuss this finding and its implications and outline avenues for future research.

Traditionally, the robotic end-effectors employed in unstructured and dynamic environments are rigid, and their operation requires sophisticated sensing elements and complicated control algorithms in order to handle and manipulate delicate, fragile objects. Over the last decade, considerable research effort has been put into the development of adaptive, under-actuated, soft robots that facilitate robust interaction with dynamic environments. In this paper, we present soft, retractable, pneumatically actuated, telescopic actuators that facilitate the efficient execution of stable grasps involving a plethora of everyday objects. The efficiency of the proposed actuators is validated by employing them in two different robotic grippers: one hybrid and one fully soft. The hybrid gripper uses three rigid fingers to accomplish all the tasks required of a traditional robotic gripper, while three inflatable, telescopic fingers provide soft interaction with objects. This synergistic combination of soft and rigid structures allows the gripper to cage/trap and firmly hold heavy and irregular objects. The second, simple and highly affordable gripper employs just the telescopic actuators and exhibits adaptive behavior during the execution of stable grasps of fragile and delicate objects. The experiments demonstrate that both grippers can successfully and stably grasp a wide range of objects and are able to exert significantly high contact forces.

In nature, tip-localized growth allows navigation in tightly confined environments and the creation of structures. Recently, this form of movement has been artificially realized through the pressure-driven eversion of flexible, thin-walled tubes. Here we review recent work on robots that "grow" via pressure-driven eversion, referred to as "everting vine robots" because their movement pattern is similar to that of natural vines. We break this work into four categories. First, we examine the design of everting vine robots, highlighting trade-offs in material selection, actuation methods, and the placement of sensors and tools. These trade-offs have led to application-specific implementations. Second, we describe the state of, and need for, modeling everting vine robots. Quasi-static models of growth and retraction, and kinematic and force-balance models of steering and environment interaction, have been developed; these use simplifying assumptions and limit the degrees of freedom involved. Third, we report on everting vine robot control and planning techniques that have been developed to move the robot tip to a target, using a variety of modalities to provide reference inputs to the robot. Fourth, we highlight the benefits and challenges of using this paradigm of movement for various applications. Everting vine robot applications to date include deploying and reconfiguring structures, navigating confined spaces, and applying forces on the environment. We conclude by identifying gaps in the state of the art and discussing opportunities for future research to advance everting vine robots and their usefulness in the field.
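A minimal quasi-static growth model of the kind the review describes (illustrative form and parameters, not a specific published model): the everting tip advances when the pressure force on the tip cross-section exceeds a resistance force, at a rate set by a damping-like coefficient:

```python
import numpy as np

def growth_rate(pressure_pa, radius_m, f_resist_n, damping):
    # Net force driving eversion: internal pressure acting on the tip
    # cross-section, minus a lumped resistance (friction, material yield).
    area = np.pi * radius_m ** 2
    net_force = pressure_pa * area - f_resist_n
    return max(0.0, net_force / damping)   # m/s; no growth below threshold

length, dt = 0.0, 0.01
for _ in range(500):                        # 5 s of simulated growth
    length += growth_rate(pressure_pa=15e3, radius_m=0.02,
                          f_resist_n=10.0, damping=300.0) * dt
print(f"grown length: {length:.3f} m")
```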
