Frontiers in Robotics and AI

RSS Feed for Frontiers in Robotics and AI | New and Recent Articles

Having a trusted and useful system that helps to diminish the risk of medical errors and facilitates improvement in the quality of medical education is indispensable. Thousands of surgical errors occur annually, with a high adverse-event rate, despite the inordinate number of patient safety initiatives that have been devised. Inadvertently or otherwise, surgeons play a critical role in these errors. Training surgeons is one of the most crucial and delicate parts of medical education and needs particular attention due to its intrinsically practical nature. In contrast to engineering, working with living patients leaves trainees only a minuscule margin for trial and error. Training in operating rooms, on the other hand, is extremely expensive in terms of not only equipment but also hiring professional trainers. In addition, the COVID-19 pandemic has led to initiatives such as social distancing in order to mitigate the rate of spread. This forces surgeons to postpone some non-urgent surgeries or to operate under safety restrictions, and educational systems are consequently affected by these pandemic-related limitations. Skill transfer systems combined with a virtual training environment are regarded as a solution to these issues: they not only enable novice surgeons to build proficiency but also allow expert surgeons to be supervised during an operation. This paper focuses on devising a solution based on deep learning algorithms to model the behavior of experts during the operation. In other words, the proposed solution is a skill transfer method that learns from professional demonstrations using different effective factors recorded from expert surgeons. The trained model then provides a real-time haptic guidance signal for either instructing trainees or supervising expert surgeons. A simulation is used to emulate an operating room for femur drilling surgery, a common invasive treatment for osteoporosis, which allows us both to collect the essential data and to assess the obtained models. Experimental results show that the proposed method is capable of emitting a haptic guidance force signal with an acceptable error rate.
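
As a rough illustration of the kind of learned skill-transfer mapping described above, the sketch below trains a small recurrent network to map a window of tool-state features onto a guidance force. The architecture, feature choice, and synthetic data are assumptions for illustration only, not the paper's actual model.

```python
# Minimal sketch of learning a haptic guidance force from expert demonstrations.
# All tensor shapes and feature choices here are illustrative assumptions, not
# the paper's architecture or data.
import torch
import torch.nn as nn

class GuidanceModel(nn.Module):
    """Maps a short window of tool-state features to a 3-D guidance force."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # Fx, Fy, Fz

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # force at the last time step

# Hypothetical expert demonstrations: windows of (position, velocity) features
# paired with the force the expert applied at the end of each window.
demos_x = torch.randn(256, 20, 6)         # placeholder for recorded data
demos_f = torch.randn(256, 3)

model = GuidanceModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(demos_x), demos_f)
    loss.backward()
    opt.step()

# At run time the same model would be queried with the trainee's most recent
# window of tool states to produce the force rendered by the haptic device.
```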

As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas in which they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., as life-and-death decisions, neglecting the influence that the risk of injury to the involved parties has on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others’ behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.
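
To make the game-theoretic framing concrete, the toy sketch below models the dilemma as a population game in which each driver's choice payoff depends on how many other drivers program their AVs to be utilitarian. The payoff numbers are arbitrary assumptions chosen only so that the nonutilitarian choice dominates, reproducing the classical prediction the abstract contrasts with the observed behavior.

```python
# Toy replicator-dynamics sketch of the AV dilemma as a population game.
# Payoff values are illustrative assumptions, not estimates from the study.
def expected_payoff(choice, p_utilitarian):
    # 'U' = program the AV to save the five pedestrians, 'N' = save the driver
    if choice == "U":
        return 2.0 * p_utilitarian + 1.0 * (1 - p_utilitarian)
    else:  # 'N' protects the driver regardless of what others do
        return 3.0 * p_utilitarian + 1.5 * (1 - p_utilitarian)

p = 0.9  # start with 90% of drivers choosing the utilitarian program
for step in range(50):
    gain_u = expected_payoff("U", p)
    gain_n = expected_payoff("N", p)
    # replicator update: the utilitarian share grows with its relative payoff
    p = p * gain_u / (p * gain_u + (1 - p) * gain_n)

print(f"long-run utilitarian share: {p:.3f}")  # tends toward 0 with these payoffs
```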

In this study, we report the investigations conducted on the mimetic behavior of a new humanoid robot called Alter3. Alter3 autonomously imitates the motions of a person in front of it and stores the motion sequences in its memory. Alter3 also uses a self-simulator to simulate its own motions before executing them and generates a self-image. If the visual perception (of a person's motion being imitated) and the imitating self-image differ significantly, Alter3 retrieves a motion sequence closer to the target motion from its memory and executes it. We investigate how this mimetic behavior develops through interaction with humans by analyzing the memory dynamics and the information flow between Alter3 and an interacting person. One important observation from this study is that when Alter3 fails to imitate a person's motion, the person tends to imitate Alter3 instead. This tendency is quantified by a reversal in the direction of information flow. This spontaneous role-switching behavior between a human and Alter3 is a way to initiate personality formation (i.e., personogenesis) in Alter3.
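
One common way to quantify directed information flow between two motion streams is transfer entropy. The sketch below is a minimal discrete estimator under assumed binning and a one-step history; it is not the paper's analysis pipeline, only an illustration of how the direction of flow between human and robot motion could be compared.

```python
# Minimal discrete transfer-entropy estimate between two motion streams.
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """TE(source -> target) with one-step history, in bits."""
    edges_s = np.quantile(source, np.linspace(0, 1, bins + 1)[1:-1])
    edges_t = np.quantile(target, np.linspace(0, 1, bins + 1)[1:-1])
    s, t = np.digitize(source, edges_s), np.digitize(target, edges_t)
    triples = Counter(zip(t[1:], t[:-1], s[:-1]))   # (t_next, t_now, s_now)
    pairs_tt = Counter(zip(t[1:], t[:-1]))
    pairs_ts = Counter(zip(t[:-1], s[:-1]))
    singles_t = Counter(t[:-1])
    n = len(t) - 1
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_ts[(t0, s0)]             # p(t1 | t0, s0)
        p_cond_self = pairs_tt[(t1, t0)] / singles_t[t0]  # p(t1 | t0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

human = np.random.randn(2000)                              # placeholder streams
robot = np.roll(human, 1) + 0.1 * np.random.randn(2000)    # robot imitates human
print("human->robot:", transfer_entropy(human, robot))
print("robot->human:", transfer_entropy(robot, human))
```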

Soft exosuits are a promising solution for the assistance and augmentation of human motor abilities in the industrial field, where the use of more symbiotic wearable robots can avoid excessive worker fatigue and improve the quality of the work. One of the challenges in the design of soft exosuits is the choice of the right amount of softness to balance load transfer, ergonomics, and weight. This article presents a cable-driven soft wrist exosuit for flexion assistance based on an ergonomic reinforced glove. The flexible and highly compliant three-dimensional (3D)-printed plastic structure that is sewn onto the glove allows optimal force transfer from the remotely located motor to the wrist articulation while preserving a high level of comfort for the user during assistance. The device is shown to reduce fatigue and the muscular effort required for holding and lifting loads in healthy subjects for weights up to 3 kg.

We present control policies for use with a modified autonomous underwater glider that are intended to enable remote launch/recovery and long-range unattended survey of the Arctic's marginal ice zone (MIZ). This region of the Arctic is poorly characterized but critical to the dynamics of ice advance and retreat. Due to the high cost of operating support vessels in the Arctic, the proposed glider architecture reduces external infrastructure requirements for navigation and mission updates to brief, infrequent satellite contacts on the order of once per day. This is possible through intelligent power management in combination with hybrid propulsion, adaptive velocity control, and dynamic depth band selection based on real-time environmental state estimation. We examine the energy savings, range improvements, decreased communication requirements, and temporal consistency that can be attained with the proposed glider architecture and control policies based on preliminary field data, and we discuss a future MIZ survey mission concept in the Arctic. Although the sensing and control policies presented here focus on under-ice missions with an unattended underwater glider, they are hardware independent and are transferable to other robotic vehicle classes, including in aerial and space domains.
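
The trade-off that adaptive velocity control exploits can be seen in a simple energy budget: a constant hotel load rewards moving faster, while propulsive power grows roughly with the cube of speed. The sketch below uses assumed placeholder coefficients, not measured glider parameters.

```python
# Back-of-the-envelope range-versus-speed trade-off for a hybrid glider.
import numpy as np

P_HOTEL = 5.0        # W, constant sensing/computing load (assumption)
DRAG_COEFF = 8.0     # W / (m/s)^3, lumped propulsion coefficient (assumption)
BATTERY_WH = 2000.0  # available energy (assumption)

def range_km(v):
    """Range achievable at constant horizontal speed v (m/s)."""
    power = P_HOTEL + DRAG_COEFF * v ** 3        # total draw in watts
    endurance_s = BATTERY_WH * 3600.0 / power
    return v * endurance_s / 1000.0

speeds = np.linspace(0.1, 1.5, 200)
best = speeds[np.argmax([range_km(v) for v in speeds])]
print(f"range-optimal speed ≈ {best:.2f} m/s, range ≈ {range_km(best):.0f} km")
```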

The integration of people with disabilities into the working world is an important, yet challenging field of research. While different inclusion efforts exist, people with disabilities are still under-represented in the open labor market. This paper investigates the approach of using a collaborative robot arm to support people with disabilities with their reintegration into the workplace. However, there is currently little literature about the acceptance of an industrial robot by people with disabilities, and in cases where a robot leads to stress, fear, or any other form of discomfort, this approach is not feasible. For this reason, a first user study was performed in a sheltered workshop to investigate the acceptance of a robot arm by workers with disabilities. As a first step in this underdeveloped field, two main aspects were covered. Firstly, the reaction and familiarization to the robot arm within a study situation was closely examined in order to separate any effects that were not caused by the moving robot. Secondly, the reaction toward the robot arm during collaboration was investigated. In doing so, five different distances between the robot arm and the participants were considered to make collaboration in the workplace as pleasant as possible. The results revealed that it took the participants about 20 min to get used to the situation, while the robot was immediately accepted very well and did not cause fear or discomfort at any time. Surprisingly, in some cases, short distances were accepted even better than the larger distances. For these reasons, the presented approach shows promise for future investigations.

The field of rehabilitation and assistive devices is being disrupted by innovations in desktop 3D printers and open-source designs. For upper limb prosthetics, these technologies have demonstrated strong potential to aid those with missing hands. However, there are basic interfacing issues that need to be addressed for long-term usage. Functionality, durability, and price need to be considered, especially for those in difficult living conditions. We surveyed the most popular designs of body-powered, 3D-printed prosthetic hands, selected a representative sample, and evaluated its suitability in terms of grasping postures, durability, and cost. The prosthetic hand can perform three grasping postures out of the 33 grasps that a human hand can do. This corresponds to grasping objects similar to a coin, a golf ball, and a credit card. Results showed that the material used in the hand and the cables can withstand a 22 N normal grasping force, which is acceptable based on standards for accessibility design. The cost model showed that a 3D printed hand could be produced for as low as $19. For the benefit of children with congenitally missing limbs and for the war-wounded, the results can serve as a baseline study to advance the development of prosthetic hands that are functional yet low-cost.

The importance of infection control procedures in hospital radiology departments has become increasingly apparent in recent months as the impact of COVID-19 has spread across the world. Existing disinfection procedures that rely on the manual application of chemical-based disinfectants are time consuming, resource intensive, and prone to high degrees of human error. Alternative non-touch disinfection methods, such as Ultraviolet Germicidal Irradiation (UVGI), have the potential to overcome many of the limitations of existing approaches while significantly improving workflow and equipment utilization. The aim of this research was to investigate the germicidal effectiveness and the practical feasibility of using a robotic UVGI device for disinfecting surfaces in a radiology setting. We present the design of a robotic UVGI platform that can be deployed alongside human workers and can operate autonomously within cramped rooms, thereby addressing two important requirements for integrating the technology within radiology settings. In one hospital, we conducted experiments in a CT and X-ray room. In a second hospital, we investigated the germicidal performance of the robot when deployed to disinfect a CT room in <15 minutes, a period estimated to be 2–4 times faster than current practice for disinfecting rooms after infectious (or potentially infectious) patients. Findings from both test sites show that UVGI successfully inactivated all of the measurable microbial load on 22 out of 24 surfaces. On the remaining two surfaces, UVGI reduced the microbial load by 84 and 95%, respectively. The study also exposes some of the challenges of manually disinfecting radiology suites, revealing high concentrations of microbial load in hard-to-reach places. Our findings provide compelling evidence that UVGI can effectively inactivate microbes on commonly touched surfaces in radiology suites, even when they were exposed to only relatively short bursts of irradiation. Despite the short irradiation period, we demonstrated the ability to inactivate microbes with more complex cell structures and higher UV inactivation energies than SARS-CoV-2, thus indicating a high likelihood of effectiveness against the coronavirus.
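
For readers more used to log-reduction figures, the reported percentage reductions translate as follows; this is plain arithmetic, not data from the study.

```python
# Relate percentage reductions to the log10 reductions used for germicidal dosing.
import math

for pct in (84.0, 95.0, 99.9):
    surviving_fraction = 1.0 - pct / 100.0
    log_reduction = -math.log10(surviving_fraction)
    print(f"{pct:5.1f}% reduction  ->  {log_reduction:.2f} log10 reduction")
# 84% -> ~0.80 log10, 95% -> ~1.30 log10, 99.9% -> 3.00 log10
```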

Stigmergy is a form of indirect communication and coordination in which agents modify the environment to pass information to their peers. In nature, animals use stigmergy by, for example, releasing pheromone that conveys information to other members of their species. A few systems in swarm robotics research have replicated this process by introducing the concept of artificial pheromone. In this paper, we present Phormica, a system to conduct experiments in swarm robotics that enables a swarm of e-puck robots to release and detect artificial pheromone. Phormica emulates pheromone-based stigmergy thanks to the ability of robots to project UV light on the ground, which has been previously covered with a photochromic material. As a proof of concept, we test Phormica on three collective missions in which robots act collectively guided by the artificial pheromone they release and detect. Experimental results indicate that a robot swarm can effectively self-organize and act collectively by using stigmergic coordination based on the artificial pheromone provided by Phormica.
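
The essence of pheromone-based stigmergy can be captured in a few lines of simulation: robots write a marker into the environment and read the local concentration to bias their motion. The grid size, evaporation rate, and movement rule below are arbitrary assumptions and are not meant to reproduce the Phormica hardware or experiments.

```python
# Minimal grid-world sketch of pheromone-based stigmergy.
import numpy as np

rng = np.random.default_rng(0)
GRID = 50
pheromone = np.zeros((GRID, GRID))
robots = rng.integers(0, GRID, size=(10, 2))
EVAPORATION = 0.01        # fraction of marker lost per step (assumption)
DEPOSIT = 1.0

for step in range(500):
    pheromone *= (1.0 - EVAPORATION)
    for i, (x, y) in enumerate(robots):
        pheromone[x, y] += DEPOSIT                     # stigmergic "write"
        moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if 0 <= x + dx < GRID and 0 <= y + dy < GRID and (dx, dy) != (0, 0)]
        # stigmergic "read": mostly follow the strongest neighboring trail
        if rng.random() < 0.8:
            dx, dy = max(moves, key=lambda m: pheromone[x + m[0], y + m[1]])
        else:
            dx, dy = moves[rng.integers(len(moves))]
        robots[i] = (x + dx, y + dy)

print("strongest trail cell:", np.unravel_index(pheromone.argmax(), pheromone.shape))
```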

Modeling deformable objects is an important preliminary step for performing robotic manipulation tasks with more autonomy and dexterity. Currently, generalization capabilities in unstructured environments using analytical approaches are limited, mainly due to the lack of adaptation to changes in the object shape and properties. Therefore, this paper proposes the design and implementation of a data-driven approach, which combines machine learning techniques on graphs to estimate and predict the state and transition dynamics of deformable objects with initially undefined shape and material characteristics. The learned object model is trained using RGB-D sensor data and evaluated in terms of its ability to estimate the current state of the object shape, in addition to predicting future states in order to plan and support the manipulation actions of a robotic hand.
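
To illustrate the graph representation such approaches build on, the sketch below treats tracked keypoints as graph nodes and neighbor links as edges, and runs one hand-coded message-passing step to predict the next node positions. In a learned model the per-edge and per-node updates would be small trained networks; the spring-like rule here is just an assumed stand-in.

```python
# Graph sketch of a deformable object: nodes = tracked keypoints, edges = neighbors.
import numpy as np

positions = np.random.rand(12, 3)              # 12 keypoints from an RGB-D cloud
velocities = np.zeros_like(positions)
edges = [(i, i + 1) for i in range(11)]        # simple chain topology (assumption)

def predict_next(positions, velocities, edges, rest_len=0.08, k=5.0, dt=0.05):
    """One message-passing / integration step under a spring-like edge model."""
    messages = np.zeros_like(positions)
    for i, j in edges:
        d = positions[j] - positions[i]
        dist = np.linalg.norm(d) + 1e-9
        force = k * (dist - rest_len) * d / dist   # edge "message"
        messages[i] += force
        messages[j] -= force
    new_vel = velocities + dt * messages           # node update
    return positions + dt * new_vel, new_vel

positions, velocities = predict_next(positions, velocities, edges)
```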

This paper introduces a new genetic fuzzy-based paradigm for developing a scalable set of decentralized homogeneous robots for a collaborative task. In this work, the number of robots in the team can be changed without any additional training. The dynamic problem considered in this work involves multiple stationary robots that are assigned the goal of bringing a common effector, which is physically connected to each of these robots through cables, to any arbitrary target position within the workspace of the robots. The robots do not communicate with each other. This means that each robot has no explicit knowledge of the actions of the other robots in the team. At any instant, the robots only have information related to the common effector and the target. A Genetic Fuzzy System (GFS) framework is used to train controllers for the robots to achieve the common goal. The same GFS model is shared among all robots. This way, we take advantage of the homogeneity of the robots to reduce the training parameters. This also provides the capability to scale to any team size without any additional training. This paper shows the effectiveness of this methodology by testing the system on an extensive set of cases involving teams with different numbers of robots. Although the robots are stationary, the GFS framework presented in this paper does not put any restriction on the placement of the robots. This paper describes the scalable GFS framework and its applicability across a wide set of cases involving a variety of team sizes and robot locations. We also show results in the case of moving targets.
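
As a minimal sketch of the idea, the code below implements a tiny Sugeno-style fuzzy controller of the kind a genetic algorithm could tune: every robot runs an identical copy, its only input is the effector-to-target error projected onto its own cable, and the membership breakpoints and rule consequents form the chromosome. All parameter values are illustrative assumptions, not the trained GFS from the paper.

```python
# Minimal shared fuzzy controller sketch with GA-tunable parameters.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling from b to c."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def cable_command(error_along_cable, chromosome):
    """Fuzzy map from projected error to a cable tension command in [0, 1]."""
    a, b, c = chromosome[:3]                   # membership breakpoints (GA-tuned)
    w_slack, w_hold, w_pull = chromosome[3:]   # rule consequents (GA-tuned)
    mu_neg = tri(error_along_cable, -c, -b, -a)
    mu_zero = tri(error_along_cable, -b, 0.0, b)
    mu_pos = tri(error_along_cable, a, b, c)
    num = mu_neg * w_slack + mu_zero * w_hold + mu_pos * w_pull
    den = mu_neg + mu_zero + mu_pos + 1e-9
    return num / den

# A GA would score chromosomes by simulating the team and measuring how close
# the common effector ends up to the target; here we only query one controller.
chromosome = np.array([0.05, 0.3, 1.0, 0.0, 0.4, 1.0])
print(cable_command(0.5, chromosome))   # large positive error -> pull hard
```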

Robots in the real world should be able to adapt to unforeseen circumstances. Particularly in the context of tool use, robots may not have access to the tools they need for completing a task. In this paper, we focus on the problem of tool construction in the context of task planning. We seek to enable robots to construct replacements for missing tools using available objects, in order to complete the given task. We introduce the Feature Guided Search (FGS) algorithm that enables the application of existing heuristic search approaches in the context of task planning, to perform tool construction efficiently. FGS accounts for physical attributes of objects (e.g., shape, material) during the search for a valid task plan. Our results demonstrate that FGS significantly reduces the search effort over standard heuristic search approaches by ≈93% for tool construction.
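
The sketch below is a toy best-first search in the spirit of feature-guided tool construction: candidate part combinations are expanded in order of how well their physical attributes (shape, material) match the missing tool's requirements. The part catalog, the tool specification, and the mismatch cost are made-up examples, not the FGS algorithm itself.

```python
# Toy feature-guided best-first search over part combinations.
import heapq

parts = {
    "metal_rod":   {"shape": "stick", "material": "metal"},
    "foam_block":  {"shape": "block", "material": "foam"},
    "steel_plate": {"shape": "flat",  "material": "metal"},
    "wood_dowel":  {"shape": "stick", "material": "wood"},
}
# A hammer-like replacement needs a stick-shaped handle and a metal head.
required = [("handle", {"shape": "stick"}), ("head", {"material": "metal"})]

def mismatch(attributes, wanted):
    """Feature-mismatch cost of using this part for this role (0 = perfect)."""
    return sum(attributes.get(k) != v for k, v in wanted.items())

def construct(parts, required):
    frontier = [(0, 0, ())]                     # (cost, roles filled, parts used)
    while frontier:
        cost, filled, used = heapq.heappop(frontier)
        if filled == len(required):
            return used, cost
        role, wanted = required[filled]
        for name, attrs in parts.items():
            if name not in used:
                heapq.heappush(
                    frontier,
                    (cost + mismatch(attrs, wanted), filled + 1, used + (name,)))
    return None

print(construct(parts, required))   # e.g. (('metal_rod', 'steel_plate'), 0)
```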

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal perception and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age. Here, we propose a new approach for investigating the degree to which the different factors contribute to crossmodal processing and its age-related decline, by replicating a medical study on visuo-tactile crossmodal pattern discrimination using state-of-the-art tactile sensing technology and artificial neural networks (ANNs). We implemented two ANN models to specifically focus on the relevance of early integration of sensory information during the crossmodal processing stream, a mechanism proposed for efficient processing in the human brain. Applying an adaptive staircase procedure, we approached comparable unimodal classification performance for both modalities in the human participants as well as the ANN. This allowed us to compare crossmodal performance between and within the systems, independent of the underlying unimodal processes. Our data show that unimodal classification accuracies of the tactile sensing technology are comparable to humans. For crossmodal discrimination by the ANN, integrating high-level unimodal features at earlier stages of the crossmodal processing stream yields higher accuracy than the late integration of independent unimodal classifications. In comparison to humans, the ANNs show higher accuracy than older participants in the unimodal as well as the crossmodal condition, but lower accuracy than younger participants in the crossmodal task. Taken together, we show that state-of-the-art tactile sensing technology is able to perform a complex tactile recognition task at levels comparable to humans. For crossmodal processing, human-inspired early sensory integration seems to improve the performance of artificial neural networks. Still, younger participants seem to employ more efficient crossmodal integration mechanisms than modeled in the proposed ANN. Our work demonstrates how collaborative research in neuroscience and embodied artificial neurocognitive models can help derive models that inform the design of future neurocomputational architectures.
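
The contrast between early and late integration can be stated compactly in code: early fusion concatenates the unimodal feature vectors before a shared classifier, whereas late fusion classifies each modality independently and only combines the resulting scores. Layer sizes and class counts below are arbitrary; this is not the authors' architecture.

```python
# Schematic early-fusion vs. late-fusion classifiers for visuo-tactile input.
import torch
import torch.nn as nn

N_CLASSES = 4

class EarlyFusion(nn.Module):
    def __init__(self, d_visual=32, d_tactile=32):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_visual + d_tactile, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
    def forward(self, visual, tactile):
        # unimodal features are merged before any decision is made
        return self.classifier(torch.cat([visual, tactile], dim=-1))

class LateFusion(nn.Module):
    def __init__(self, d_visual=32, d_tactile=32):
        super().__init__()
        self.visual_head = nn.Linear(d_visual, N_CLASSES)
        self.tactile_head = nn.Linear(d_tactile, N_CLASSES)
    def forward(self, visual, tactile):
        # each modality is classified on its own; only the scores are combined
        return 0.5 * (self.visual_head(visual) + self.tactile_head(tactile))

visual, tactile = torch.randn(8, 32), torch.randn(8, 32)
print(EarlyFusion()(visual, tactile).shape, LateFusion()(visual, tactile).shape)
```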

A fascinating challenge in the field of human–robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.

Robots that physically interact with their surroundings, in order to accomplish some tasks or assist humans in their activities, need to exploit contact forces in a safe and proficient manner. Impedance control is considered a prominent approach in robotics for avoiding large impact forces while operating in unstructured environments. In such environments, the conditions under which the interaction occurs may significantly vary during the task execution. This demands robots to be endowed with online adaptation capabilities to cope with sudden and unexpected changes in the environment. In this context, variable impedance control arises as a powerful tool to modulate the robot's behavior in response to variations in its surroundings. In this survey, we present the state of the art of approaches devoted to variable impedance control from control and learning perspectives (separately and jointly). Moreover, we propose a new taxonomy for mechanical impedance based on variability, learning, and control. The objective of this survey is to bring together the concepts and efforts made so far in this field, and to describe the advantages and disadvantages of each approach. The survey concludes with open issues in the field and an envisioned framework that may potentially solve them.
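
At its core, variable impedance control applies the usual impedance law with gains that are modulated online, e.g., softening the robot when contact is expected. The sketch below shows that law for a single joint; the gain values and the trivial scheduling rule are illustrative assumptions, not any specific method from the surveyed literature.

```python
# Minimal variable-impedance control law for one joint.
def variable_impedance_torque(q, qd, q_ref, qd_ref, stiffness, damping):
    """tau = K(t) * (q_ref - q) + D(t) * (qd_ref - qd)"""
    return stiffness * (q_ref - q) + damping * (qd_ref - qd)

def schedule_gains(expected_contact, k_free=80.0, k_contact=15.0):
    """Trivial scheduling policy: soften the joint near expected contact."""
    k = k_contact if expected_contact else k_free
    d = 2.0 * (k ** 0.5)   # roughly critically damped assuming unit inertia
    return k, d

k, d = schedule_gains(expected_contact=True)
tau = variable_impedance_torque(q=0.4, qd=0.1, q_ref=0.5, qd_ref=0.0,
                                stiffness=k, damping=d)
print(f"K={k}, D={d:.1f}, tau={tau:.2f} Nm")
```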

Current robot designs often reflect an anthropomorphic approach, apparently aiming to convince users through an ideal system that is most similar to, or even on par with, humans. The present paper challenges human-likeness as a design goal and questions whether simulating human appearance and performance adequately fits how humans think about robots in a conceptual sense, i.e., humans' mental models of robots and of their self. Independent of the technical possibilities and limitations, our paper explores robots' attributed potential to become human-like by means of a thought experiment. Four hundred eighty-one participants were confronted with fictional transitions from human to robot and from robot to human, consisting of 20 successive steps. In each step, one part or area of the human (e.g., brain, legs) was replaced with robotic parts providing equal functionalities, and vice versa. After each step, the participants rated the remaining humanness and remaining self of the depicted entity on a scale from 0 to 100%. It emerged that the starting category (e.g., human, robot) serves as an anchor for all subsequent judgments and can hardly be overcome. Even when all body parts had been exchanged, a former robot was not perceived as totally human-like and a former human not as totally robot-like. Moreover, humanness appeared to be a more sensitive and more easily denied attribute than robotness, i.e., after the objectively same transition and exchange of the same parts, the former human was attributed less remaining humanness and self than the former robot was attributed remaining robotness and self. The participants' qualitative statements about why the robot had not become human-like often concerned the (unnatural) process of production, or simply argued that no matter how many parts are exchanged, the individual keeps its original entity. Based on such findings, we suggest that instead of designing maximally human-like robots in order to achieve acceptance, it might be more promising to understand robots as a “species” of their own and to underline their specific characteristics and benefits. Limitations of the present study and implications for future HRI research and practice are discussed.

The growing field of soft wearable exosuits is gradually gaining ground and proposing new complementary solutions in assistive technology, with several advantages in terms of portability, kinematic transparency, ergonomics, and metabolic efficiency. These are appealing benefits that can be exploited in several applications, ranging from strength and resistance augmentation in industrial scenarios to assistance or rehabilitation for people with motor impairments. To be effective, however, an exosuit needs to work synergistically with the human and match specific requirements in terms of both movement kinematics and dynamics: an accurate and timely intention-detection strategy is therefore paramount for the acceptance and usability of such technology. We previously proposed to tackle this challenge by means of a model-based myoelectric controller, treating the exosuit as an external muscular layer in parallel with the human biomechanics and, as such, controlled by the same efferent motor commands as the biological muscles. However, previous studies that used classical control methods demonstrated that the level of the device's intervention and the effectiveness of task completion are not linearly related; therefore, using a newly implemented EMG-driven controller, we isolated and characterized the relationship between assistance magnitude and muscular benefits, with the goal of finding a range of assistance that makes the controller versatile for both dynamic and static tasks. Ten healthy participants performed an experiment resembling functional activities of daily living under separate assistance conditions: without the device's active support and with different levels of intervention by the exosuit. Higher assistance levels resulted in larger reductions in the activity of the muscles augmented by the suit actuation and in good motion accuracy, despite a decrease in movement velocities with respect to the no-assistance condition. Moreover, increasing the torque magnitude delivered by the exosuit resulted in a significant reduction in the biological torque at the elbow joint and in a progressive delay in the onset of muscular fatigue. Thus, in contrast to classical force-based and proportional myoelectric schemes, an appropriately tailored EMG-driven model-based controller naturally matches the user's intention and provides an assistance level that works symbiotically with the human biomechanics.
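
A simplified version of such an EMG-driven pipeline is sketched below: the raw EMG is rectified and low-pass filtered into an activation estimate, mapped through a nominal torque model, and a fraction of that torque (the assistance level) is commanded to the exosuit. The filter constants, torque gain, and assistance level are assumptions for illustration, not the controller parameters used in the study.

```python
# Simplified EMG-to-assistance pipeline (illustrative parameters only).
import numpy as np

def activation_from_emg(emg, fs=1000.0, tau=0.1):
    """Rectify and first-order low-pass filter the EMG into an activation in [0, 1]."""
    alpha = 1.0 / (1.0 + tau * fs)
    act = np.zeros_like(emg)
    for i in range(1, len(emg)):
        act[i] = act[i - 1] + alpha * (abs(emg[i]) - act[i - 1])
    return act / (act.max() + 1e-9)

def exosuit_torque(activation, max_muscle_torque=20.0, assistance_level=0.4):
    """Command a fixed share of the estimated biological elbow torque."""
    return assistance_level * max_muscle_torque * activation

emg = np.random.randn(2000) * np.linspace(0.2, 1.0, 2000)   # placeholder signal
assist = exosuit_torque(activation_from_emg(emg))
print("peak commanded assistance: %.1f Nm" % assist.max())
```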

Recently, extratheses, also known as Supernumerary Robotic Limbs (SRLs), have been emerging as a new trend in the field of assistive and rehabilitation devices. We proposed the SoftHand X, a system composed of an anthropomorphic soft hand extrathesis, a gravity support boom, and a control interface for the patient. In preliminary tests, the system exhibited a positive outlook toward assisting impaired people during daily life activities and countering learned non-use of the impaired arm. However, as with many robot-aided therapies, the use of the system may induce side effects that can be detrimental and worsen patients' conditions. One of the most common is the onset of alternative grasping strategies and compensatory movements, which clinicians absolutely need to counter in physical therapy. Before embarking on systematic experimentation with the SoftHand X on patients, it is essential to demonstrate that the system does not lead to an increase in compensatory habits. This paper provides a detailed description of the compensatory movements performed by healthy subjects using the SoftHand X. Eleven right-handed healthy subjects took part in an experimental protocol in which kinematic data of the upper body and EMG signals of the arm were acquired. Each subject executed tasks with and without the robotic system, with the latter condition taken as the reference for optimal behavior. A comparison between two different configurations of the robotic hand was performed to understand whether this aspect affects the compensatory movements. Results demonstrated that the use of the apparatus reduces the range of motion of the wrist, elbow, and shoulder, while it increases the range of the trunk and head movements. On the other hand, EMG analysis indicated that muscle activation was very similar among all the conditions. The results suggest that the system may be used as an assistive device without causing overuse of the arm joints, and they open the way to clinical trials with patients.

Children begin to develop self-awareness when they associate images and abilities with themselves. Such “construction of self” continues throughout adult life as we constantly cycle through different forms of self-awareness, seeking to redefine ourselves. Modern technologies like screens and artificial intelligence threaten to alter our development of self-awareness, because children and adults are exposed to machines, tele-presences, and displays that increasingly become part of human identity. We use avatars, invent digital lives, and augment ourselves with digital imprints that depart from reality, making the development of self-identification adjust to digital technologies that blur the boundary between us and our devices. To empower children and adults to see themselves and artificially intelligent machines as separately aware entities, we created the persona of a salvaged supermarket security camera, refurbished and enhanced with the power of computer vision to detect human faces and project them onto a large-scale 3D face sculpture. The surveillance camera system moves its head to point at human faces at times, but at other times humans have to get its attention by moving into its vicinity, creating a dynamic in which audiences attempt to see their own faces on the sculpture by gazing into the machine's eye. We found that audiences began attaining an understanding of machines that interpret our faces as separate from our identities, with their own agendas and agencies that show in the way they serendipitously interact with us. The machine-projected images of us are its own interpretation rather than our own, distancing us from our digital analogs. In the accompanying workshop, participants learn how computer vision works by putting on disguises in order to escape an algorithm that identifies them as the same person by analyzing their faces. Participants learn that their own agency affects how machines interpret them, gaining an appreciation for the way their own identities and machines' awareness of them can be separate entities that can be manipulated for play. Together, the installation and workshop empower children and adults to think beyond identification with digital technology and to recognize the machine's own interpretive abilities that lie separate from human beings' own self-awareness.
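
For readers curious about the perception loop such an installation relies on, the sketch below shows a standard face-detection loop with OpenCV's Haar cascade; the installation's actual vision pipeline is not described in the abstract, and `project_on_sculpture` is a hypothetical placeholder for the projection step.

```python
# Minimal face-detection loop (OpenCV Haar cascade); projection step is a stub.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def project_on_sculpture(face_image):
    """Placeholder for mapping the detected face onto the 3D face sculpture."""
    cv2.imshow("sculpture", face_image)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # the machine "chooses" the largest face it can see
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        project_on_sculpture(frame[y:y + h, x:x + w])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```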

In order to help post-stroke individuals rehabilitate their movements, research centers have developed lower-limb exoskeletons and control strategies for them. Robot-assisted therapy can help not only by providing support, accuracy, and precision while performing exercises, but also by being able to adapt to different patient needs, according to their impairments. As a consequence, different control strategies have been employed and evaluated, although with limited effectiveness. This work presents a bio-inspired controller based on the concept of motor primitives. The proposed approach was evaluated on a lower-limb exoskeleton in which the knee joint was driven by a series elastic actuator. First, to extract the motor primitives, the user torques were estimated by means of a generalized momentum-based disturbance observer combined with an extended Kalman filter. These data were provided to the control algorithm, which, at every swing phase, assisted the subject in performing the desired movement, based on the analysis of the previous step. Tests were performed to evaluate the controller performance for a subject walking actively, passively, and in a combination of these two conditions. Results suggest that the robot assistance is capable of compensating for the motor primitive weight deficiency when the subject exerts less torque than expected. Furthermore, although only the knee joint was actuated, the motor primitive weights with respect to the hip joint were influenced by the robot torque applied at the knee. The robot also generated torque to compensate for eventual asynchronous movements of the subject, and adapted to a change in the gait characteristics within three to four steps.
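
Motor primitives of the kind referred to above are commonly extracted by non-negative matrix factorization of torque or muscle-activity channels over the gait cycle. The sketch below shows that factorization on synthetic placeholder data and how primitive weights from the current step could be compared against a reference; it is not the authors' estimator or controller.

```python
# Sketch of motor-primitive extraction via non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

# rows: gait-cycle samples, columns: channels (e.g., hip and knee torque)
samples, channels, n_primitives = 200, 4, 2
torques = np.random.rand(samples, channels)        # synthetic placeholder data

model = NMF(n_components=n_primitives, init="nndsvda", max_iter=500)
weights = model.fit_transform(torques)             # (samples, n_primitives)
primitives = model.components_                     # (n_primitives, channels)

# A controller in this spirit could compare the current swing phase's weights
# against a reference and add robot torque where the weights fall short.
reference_weights = weights.mean(axis=0)
deficiency = np.clip(reference_weights - weights[-1], 0.0, None)
print("primitive weight deficiency at last sample:", deficiency)
```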
