Frontiers in Robotics and AI


Improving the mobility of robots is an important goal for many real-world applications, and implementing an animal-like spine structure in a quadruped robot is a promising approach to achieving high-speed running. This paper proposes a feline-like multi-joint spine with a one-degree-of-freedom closed-loop linkage for a quadruped robot to realize high-speed running. We theoretically prove that the proposed spine structure can realize 1.5 times the horizontal range of foot motion of a single-joint spine. Experimental results demonstrate that a robot with the proposed spine structure achieves 1.4 times the horizontal range of motion and 1.9 times the speed of a robot with a single-joint spine.
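
As a rough illustration of why distributing bending over several coupled joints enlarges the horizontal foot-motion range, the planar forward-kinematics sketch below compares a single-joint spine with a three-joint spine whose joints flex together, as a one-degree-of-freedom closed-loop linkage would enforce. All lengths and joint limits are hypothetical, so the printed ratio illustrates the mechanism rather than the paper’s 1.5x figure.

```python
import numpy as np

def horizontal_reach(joint_angles, segment_lengths):
    """Planar forward kinematics of a spine modeled as a serial chain.

    Returns the horizontal (x) position of the hip end relative to the
    shoulder end for the given joint angles (radians).
    """
    x, heading = 0.0, 0.0
    for theta, length in zip(joint_angles, segment_lengths):
        heading += theta
        x += length * np.cos(heading)
    return x

L = 0.30                    # total spine length [m] (hypothetical)
theta = np.deg2rad(30)      # per-joint flexion limit (hypothetical)

# Single bending joint between two half-length segments.
single = horizontal_reach([0.0, theta], [L / 2, L / 2])

# Three joints flexing together, as a 1-DoF closed-loop linkage would couple them.
multi = horizontal_reach([0.0, theta, theta, theta], [L / 4] * 4)

# The horizontal foot-motion range grows with the reach change from the straight pose.
print(f"single-joint reach change: {L - single:.4f} m")
print(f"multi-joint  reach change: {L - multi:.4f} m")
print(f"ratio: {(L - multi) / (L - single):.1f}x")
```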

This paper presents a cooperative, multi-robot solution for searching for, excavating, and transporting mineral resources on the Moon. Our work was developed in the context of the Space Robotics Challenge Phase 2 (SRCP2), part of the NASA Centennial Challenges and motivated by the current NASA Artemis program, a flagship initiative that intends to establish a long-term human presence on the Moon. In the SRCP2, a group of simulated mobile robots was tasked with reporting volatile locations within a realistic lunar simulation environment and with excavating these resources and transporting them to target locations in that environment. We describe our solution to the SRCP2 competition, including our strategies for rover mobility hazard estimation (e.g., slippage level, stuck status), immobility recovery, rover-to-rover and rover-to-infrastructure docking, rover coordination and cooperation, and cooperative task planning and autonomy. Our solution successfully completed all tasks required by the challenge, earning our team sixth place among all participants. These results demonstrate the potential of cooperative, multi-robot systems for autonomous in-situ resource utilization (ISRU) on the Moon and highlight the effectiveness of realistic simulation environments for testing and validating robot autonomy and coordination algorithms.
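
As one plausible form of the mobility-hazard cues mentioned above (the abstract does not specify the estimator used), slippage can be framed as the mismatch between commanded wheel motion and externally estimated body motion; the sketch below and its thresholds are illustrative assumptions.

```python
def slippage_level(commanded_speed, measured_speed, eps=1e-6):
    """0.0 = wheels and body agree (no slip); 1.0 = spinning in place.

    commanded_speed: speed implied by wheel odometry [m/s]
    measured_speed:  body speed from an external estimate, e.g. visual
                     odometry or localization [m/s]
    """
    return max(0.0, 1.0 - measured_speed / max(commanded_speed, eps))

def is_stuck(slip_history, threshold=0.8, window=5):
    """Declare the rover stuck when slip stays high over recent samples."""
    recent = slip_history[-window:]
    return len(recent) == window and all(s > threshold for s in recent)

print(slippage_level(0.5, 0.05))   # 0.9 -> severe slip, trigger recovery
```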

Research on so-called “synthetic (artificial) cells” has been characterized by a marked acceleration in all sorts of experimental approaches, providing a growing body of knowledge and techniques that will shape future developments. Synthetic cell technology, indeed, shows potential to drive a revolution in science and technology. On the other hand, theoretical and epistemological investigations of what synthetic cells “are,” how they behave, and what their role is in generating knowledge have not received sufficient attention. Open questions about these less explored subjects range from the analysis of the organizational theories applied to synthetic cells to the study of the “relevance” of synthetic cells as scientific tools to investigate life and cognition; and from the recognition and cultural reappraisal of the cybernetic inheritance in synthetic biology to the need for developing concepts on synthetic cells and for exploring, in a novel perspective, information theories, complexity, and artificial intelligence applied to this novel field. In this contribution, we briefly sketch some crucial aspects of these issues, based on our ongoing studies. An important take-home message emerges: together with their impactful experimental results and potential applications, synthetic cells can play a major role in the exploration of theoretical questions as well.

Introduction: The RobHand (Robot for Hand Rehabilitation) is a robotic neuromotor rehabilitation exoskeleton that assists in performing flexion and extension movements of the fingers. The present case study assesses changes in manual function and hand muscle strength in four selected stroke patients after completion of an established training program. In addition, safety and user satisfaction are evaluated.

Methods: The training program consisted of 16 sessions; two 60-minute training sessions per week for eight consecutive weeks. During each session, patients moved through six consecutive rehabilitation stages using the RobHand. Manual function assessments were applied before and after the training program and safety tests were carried out after each session. A user evaluation questionnaire was filled out after each patient completed the program.

Results: The safety tests showed the absence of significant adverse events, such as skin lesions or fatigue. An average score of 4 out of 5 was obtained on the Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 scale: users were very satisfied with the weight, comfort, and quality of professional services. A Kruskal-Wallis test revealed no statistically significant changes in the manual function tests between the beginning and the end of the training program.
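
For readers unfamiliar with the test, a minimal sketch of this kind of pre/post comparison is shown below; the scores are entirely hypothetical, as the study’s data are not reproduced here.

```python
from scipy.stats import kruskal

pre  = [12, 18, 25, 9]    # manual-function scores before training (hypothetical)
post = [14, 19, 24, 11]   # scores after the 16-session program (hypothetical)

stat, p = kruskal(pre, post)
print(f"H = {stat:.2f}, p = {p:.3f}")   # p > 0.05 -> no significant change
```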

Discussion: It can be concluded that the RobHand is a safe rehabilitation technology and that users were satisfied with the system. No statistically significant differences in manual function were found. This could be due to the high influence of the stroke stage on motor recovery, since the study was performed with chronic patients. Hence, future studies should evaluate the rehabilitation effectiveness of repetitive use of the RobHand exoskeleton with subacute patients.

Clinical Trial Registration: https://clinicaltrials.gov/ct2/show/NCT05598892?id=NCT05598892&draw=2&rank=1, identifier NCT05598892.

Middlewares are standard tools for modern software development in many areas, especially robotics. Although such middlewares have become common for high-level applications, there is little support for real-time systems and low-level control. µRT therefore provides a lightweight solution for resource-constrained embedded systems, such as microcontrollers. It features publish–subscribe communication and remote procedure calls (RPCs) and can validate timing constraints at runtime. In contrast to other middlewares, µRT does not rely on specific transports for communication but can be used with any technology. Empirical results demonstrate its small memory footprint, consistent temporal behavior, and predominantly linear scaling. A user study found the usability of µRT to be competitive with state-of-the-art solutions.
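
The following is a conceptual sketch of the pattern µRT implements, publish–subscribe with runtime validation of timing constraints. It is not the real µRT API (which targets C on microcontrollers); all names and the 1 ms budget are illustrative.

```python
import time

class Topic:
    """Toy publish-subscribe topic with a per-message latency budget."""

    def __init__(self, name, deadline_s):
        self.name = name
        self.deadline_s = deadline_s   # max tolerated publish-to-deliver latency
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, payload):
        t_pub = time.monotonic()
        for cb in self.subscribers:
            cb(payload)
        latency = time.monotonic() - t_pub
        if latency > self.deadline_s:
            # Runtime validation of the timing constraint, µRT-style.
            raise TimeoutError(f"deadline violated on '{self.name}': "
                               f"{latency * 1e3:.2f} ms")

imu = Topic("imu", deadline_s=0.001)     # 1 ms constraint (illustrative)
imu.subscribe(lambda sample: None)       # a trivial subscriber
imu.publish({"gyro": (0.0, 0.0, 0.1)})
```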

Soft robotics technology can aid in achieving the United Nations’ Sustainable Development Goals (SDGs) and the Paris Climate Agreement through the development of autonomous, environmentally responsible machines powered by renewable energy. By utilizing soft robotics, we can mitigate the detrimental effects of climate change on human society and the natural world by fostering adaptation, restoration, and remediation. Moreover, the implementation of soft robotics can lead to groundbreaking discoveries in material science, biology, control systems, energy efficiency, and sustainable manufacturing processes. However, to achieve these goals, we need further improvements in our understanding of the biological principles underlying embodied and physical intelligence, in environment-friendly materials, and in energy-saving strategies for designing and manufacturing self-piloting and field-ready soft robots. This paper provides insights into how soft robotics can address the pressing issue of environmental sustainability. Sustainable manufacturing of soft robots at a large scale, exploring the potential of biodegradable and bioinspired materials, and integrating onboard renewable energy sources to promote autonomy and intelligence are some of the urgent challenges of this field that we discuss in this paper. Specifically, we present field-ready soft robots that address targeted productive applications in urban farming, healthcare, land and ocean preservation, disaster remediation, and clean and affordable energy, thus supporting some of the SDGs. By embracing soft robotics as a solution, we can concretely support economic growth and sustainable industry, drive solutions for environmental protection and clean energy, and improve overall health and well-being.

Introduction: Camera-based wearable assistive devices for the visually impaired are a rapidly evolving field, where one of the main challenges is finding computer vision algorithms that can be implemented in low-cost embedded devices.

Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny YOLO) architecture for pedestrian detection that can be implemented in low-cost wearable devices as an alternative for the development of assistive technologies for the visually impaired.

Results: The recall of the proposed refined model improves on the original model by 71% when working with four anchor boxes and by 66% with six anchor boxes. The accuracy achieved on the same data set increases by 14% and 25%, respectively, and the F1 score by 57% and 55%. The average accuracy of the models improved by 87% and 99%. The refined models correctly detected 3098 and 2892 objects with four and six anchor boxes, respectively, 77% and 65% more than the original model, which correctly detected 1743 objects.
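
To make the reported quantities concrete, the sketch below shows how recall, precision, and F1 relate to raw detection counts. The TP value echoes the four-anchor model’s 3098 correct detections; the FP and FN counts are invented for illustration, as the paper’s tallies are not given here.

```python
def detection_metrics(tp, fp, fn):
    """Standard single-class detection metrics from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# TP from the abstract; FP/FN are hypothetical.
p, r, f1 = detection_metrics(tp=3098, fp=900, fn=1100)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```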

Discussion: Finally, the model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and the central processing unit (CPU) were tested, and a documented comparison of solutions aimed at serving visually impaired people was performed.

Conclusion: We performed the desktop tests with an RTX 2070S graphics card, where processing an image took about 2.8 ms. The Jetson Nano board processed an image in about 110 ms, offering the opportunity to generate alert notifications in support of visually impaired mobility.

The concept of Industry 4.0 is changing industrial manufacturing patterns, which are becoming more efficient and more flexible. In response to this tendency, efficient robot teaching approaches without complex programming have become a popular research direction. We therefore propose an interactive finger-touch-based robot teaching scheme using multimodal image processing of color (RGB), thermal (T), and point cloud (3D) data. The heat trace left by the finger touching the object surface is analyzed on the multimodal data in order to precisely identify the true hand/object contact points, and these contact points are used to calculate the robot path directly. To optimize the identification of the contact points, we propose a calculation scheme using a number of anchor points, which are first predicted by hand/object point cloud segmentation. Subsequently, a probability density function is defined to calculate the prior probability distribution of the true finger trace. The temperature in the neighborhood of each anchor point is then dynamically analyzed to calculate the likelihood. Experiments show that the trajectories estimated by our multimodal method have significantly better accuracy and smoothness than those obtained by analyzing only the point cloud and the static temperature distribution.
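
The prior-times-likelihood fusion described above might look roughly like the following sketch, where anchor points from the hand/object segmentation define a spatial prior and the local temperature elevation defines the likelihood. Every parameter value here is an illustrative assumption, not the paper’s.

```python
import numpy as np

def contact_posterior(points, anchors, temps, ambient=22.0,
                      sigma_prior=0.01, sigma_temp=1.5, touch_delta=6.0):
    """Posterior that each candidate surface point was truly touched.

    points:  (N, 3) candidate points on the object surface [m]
    anchors: (M, 3) anchor points from hand/object segmentation [m]
    temps:   (N,) temperatures sampled at the candidates [degC]
    """
    # Prior: Gaussian falloff with distance to the nearest anchor point.
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)
    prior = np.exp(-d.min(axis=1) ** 2 / (2 * sigma_prior ** 2))
    # Likelihood: match between measured temperature and the expected
    # finger-induced warming above ambient.
    lik = np.exp(-(temps - (ambient + touch_delta)) ** 2 / (2 * sigma_temp ** 2))
    post = prior * lik
    return post / post.sum()

pts = np.array([[0.000, 0.0, 0.0], [0.050, 0.0, 0.0]])
anc = np.array([[0.001, 0.0, 0.0]])
print(contact_posterior(pts, anc, temps=np.array([28.1, 22.4])))
# The highest-posterior points would then be chained into the robot path.
```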

Reproducibility of results is, in all research fields, the cornerstone of the scientific method and the minimum standard for assessing the value of scientific claims and the conclusions drawn by other scientists. It requires a systematic approach and an accurate description of the experimental procedure and data analysis, allowing other scientists to follow the steps described in the published work and obtain the “same results.” In different research contexts, “same” results can mean different things: almost identical measurements in a fully deterministic experiment, validation of a hypothesis, or statistically similar results in a non-deterministic context. Unfortunately, systematic meta-analysis studies have shown that many findings in fields like psychology, sociology, medicine, and economics do not hold up when other researchers try to replicate them. Many scientific fields are experiencing what is generally referred to as a “reproducibility crisis,” which undermines trust in published results, imposes a thorough revision of the methodology in scientific research, and makes progress difficult. In general, the reproducibility of experiments is not a mainstream practice in artificial intelligence and robotics research, and surgical robotics is no exception. There is a need to develop new tools and to put in place a community effort to allow the transition to more reproducible research and hence faster progress. Reproducibility, replicability, and benchmarking (operational procedures for the assessment and comparison of research results) are made more complex for medical robotics and surgical systems by patenting, safety, and ethical issues. In this review paper, we selected 10 relevant published manuscripts on surgical robotics to analyze their clinical applicability and underline the problems related to the reproducibility of the reported experiments, with the aim of finding possible solutions to the challenges that limit the translation of many scientific research studies into real-world applications and slow down research progress.

Fiber reinforced soft pneumatic actuators are hard to control due to their non-linear behavior and the non-uniformity introduced by the fabrication process. Model-based controllers generally have difficulty compensating for non-uniform and non-linear material behaviors, whereas model-free approaches are harder to interpret and tune intuitively. In this study, we present the design, fabrication, characterization, and control of a fiber reinforced soft pneumatic module with an outer diameter of 12 mm. Specifically, we utilized the characterization data to adaptively control the soft pneumatic actuator. From the measured characterization data, we fitted mapping functions between the actuator input pressures and the actuator space angles. These maps were used to construct the feedforward control signal and to tune the feedback controller adaptively depending on the actuator bending configuration. The performance of the proposed control approach is experimentally validated by comparing the measured 2D tip orientation against the reference trajectory. The adaptive controller successfully followed the prescribed trajectory with a mean absolute error of 0.68° for the magnitude of the bending angle and 3.5° for the bending phase around the axial direction. The data-driven control method introduced in this paper may offer a solution to intuitively tune and control soft pneumatic actuators, compensating for their non-uniform and non-linear behavior.
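
A minimal sketch of this data-driven scheme, with made-up characterization data and gains, might look as follows: the fitted pressure-angle map supplies the feedforward term, and the feedback gain is scheduled on the measured bending configuration.

```python
import numpy as np

# Hypothetical characterization data: input pressure [kPa] vs. bending angle [deg].
pressures = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
angles    = np.array([0.0,  8.0, 21.0, 38.0, 57.0,  74.0])

def feedforward_pressure(angle_ref):
    """Invert the fitted angle-pressure map by interpolation."""
    return np.interp(angle_ref, angles, pressures)

def scheduled_gain(angle_meas, kp_low=0.8, kp_high=2.0):
    """Feedback gain grows with bending (assumed stiffening behavior)."""
    return np.interp(angle_meas, [angles[0], angles[-1]], [kp_low, kp_high])

def commanded_pressure(angle_ref, angle_meas):
    u_ff = feedforward_pressure(angle_ref)                        # feedforward
    u_fb = scheduled_gain(angle_meas) * (angle_ref - angle_meas)  # adaptive feedback
    return u_ff + u_fb                                            # [kPa]

print(commanded_pressure(angle_ref=30.0, angle_meas=26.5))
```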

Awareness of catheter tip interaction forces is crucial during cardiac ablation procedures. The most important contact forces are those between the catheter tip and the beating cardiac tissue. Clinical studies have shown that effective ablation occurs when contact forces are in the proximity of 0.2 N: lower contact forces lead to ineffective ablation, while higher contact forces may result in complications such as cardiac perforation. Accurate, high-resolution force sensing is therefore indispensable in such critical situations. Accordingly, this work presents the development of a novel catheter tip force sensor utilizing a multi-core fiber with inscribed fiber Bragg gratings. A customizable helical compression spring is designed to serve as the flexural component relaying external forces to the multi-core fiber. The limited number of components, simple construction, and compact nature of the sensor make it an appealing solution for clinical translation. A detailed approach is proposed for the design and dimensioning of the necessary sensor components, including a unique method to decouple longitudinal and lateral force measurements. A force sensor prototype and a dedicated calibration setup were developed to experimentally validate the theoretical performance. Results show that the proposed force sensor exhibits 7.4 mN longitudinal resolution, 0.8 mN lateral resolution, 0.72 mN mean longitudinal error, 0.96 mN mean lateral error, high repeatability, and excellent decoupling between longitudinal and lateral forces.
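
A simplified view of the decoupling idea: with three outer cores at 120°, a lateral force produces differential wavelength shifts across the cores, while a longitudinal force shifts all cores equally (common mode), so a calibration matrix can be inverted to recover both. The matrix and numbers below are illustrative, not the paper’s calibration.

```python
import numpy as np

# Sensitivity of each outer core to (Fx, Fy, Fz) in pm/mN; rows are cores
# spaced 120 degrees apart. Purely illustrative values.
C = np.array([
    [ 0.50,  0.00, 0.30],
    [-0.25,  0.43, 0.30],
    [-0.25, -0.43, 0.30],
])

def forces_from_shifts(dlambda_pm):
    """Wavelength shifts (pm) -> (Fx, Fy, Fz) in mN via least squares.

    Lateral forces appear as differential shifts between the cores;
    a longitudinal force shifts all cores equally, which is what makes
    the two measurements separable.
    """
    return np.linalg.lstsq(C, dlambda_pm, rcond=None)[0]

print(forces_from_shifts(np.array([5.0, -2.5, -2.5])))  # ~[10, 0, 0]: pure lateral
print(forces_from_shifts(np.array([3.0,  3.0,  3.0])))  # ~[0, 0, 10]: pure longitudinal
```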

Objective: To characterize the therapeutic interaction of a socially active humanoid robot acting as a therapy assistant while providing arm rehabilitation (i.e., arm basis training (ABT) for moderate-to-severe arm paresis or arm ability training (AAT) for mild arm paresis) to stroke survivors using the digital therapeutic system Evidence-Based Robot-Assistant in Neurorehabilitation (E-BRAiN), and to compare it to human therapists’ interaction.

Methods: Participants and therapy: Seventeen stroke survivors received arm rehabilitation (i.e., ABT [n = 9] or AAT [n = 8]) using E-BRAiN over a course of nine sessions, and twenty-one other stroke survivors received arm rehabilitation sessions (i.e., ABT [n = 6] or AAT [n = 15]) in a conventional 1:1 therapist–patient setting. Analysis of therapeutic interaction: Therapy sessions were videotaped, and all therapeutic interactions (information provision, feedback, and bond-related interaction) were documented offline, both in terms of their frequency of occurrence and the time used for each type of interaction, using the instrument THER-I-ACT. Statistical analyses: The therapeutic interaction of the humanoid robot, supervising staff/therapists, and helpers on day 1 is reported as the mean across subjects for each type of therapy (i.e., ABT and AAT) as descriptive statistics. Effects of time (day 1 vs. day 9) on the humanoid robot interaction were analyzed by repeated-measures analysis of variance (rmANOVA) together with the between-subject factor type of therapy (ABT vs. AAT). The between-subject effect of the agent (humanoid robot vs. human therapist; day 1) was analyzed together with the factor therapy (ABT vs. AAT) by ANOVA.
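
A hedged sketch of such a mixed within/between analysis (day as repeated factor, therapy type as between-subject factor) is given below using pingouin; the column names, subject count, and values are entirely hypothetical.

```python
import pandas as pd
import pingouin as pg

# Six hypothetical subjects, two sessions each; interaction times invented.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "day":     ["d1", "d9"] * 6,
    "therapy": ["ABT"] * 6 + ["AAT"] * 6,
    "time_s":  [310, 255, 295, 240, 330, 270, 185, 160, 200, 170, 190, 155],
})

# Repeated factor: day; between-subject factor: therapy type.
print(pg.mixed_anova(data=df, dv="time_s", within="day",
                     subject="subject", between="therapy"))
```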

Main results and interpretation: The overall pattern of the therapeutic interaction by the humanoid robot was comprehensive, varied considerably with the type of therapy (as clinically indicated and intended), was largely comparable to human therapists’ interaction, and adapted to interaction needs over time. Even substantially long robot-assisted therapy sessions seemed acceptable to stroke survivors and promoted patients’ engaged training behavior.

Conclusion: The humanoid robot interaction implemented in the digital system E-BRAiN matches human therapeutic interaction and its modification across therapies well, and promotes engaged training behavior by patients. These characteristics support its clinical use as a therapeutic assistant and, hence, its application to support specific and intensive restorative training for stroke survivors.

Introduction: Measuring kinematic behavior during robot-assisted gait therapy requires either the laborious setup of a marker-based motion capture system or reliance on the internal sensors of devices, which may not cover all relevant degrees of freedom. This presents a major barrier to the adoption of kinematic measurements in the normal clinical schedule. However, to advance the field of robot-assisted therapy, many insights could be gained from evaluating patient behavior during regular therapies.

Methods: For this reason, we recently developed and validated a method for extracting kinematics from recordings of a low-cost RGB-D sensor, which relies on a virtual 3D body model to estimate the patient’s body shape and pose in each frame. The present study aimed to evaluate the robustness of the method to the presence of a lower-limb exoskeleton. Ten healthy children without gait impairment walked on a treadmill with and without the exoskeleton to evaluate the estimated body shape, and eight custom stickers were placed on the body to evaluate the accuracy of the estimated poses.

Results & Conclusion: We found that the shape estimate is generally robust to wearing the exoskeleton, and systematic pose tracking errors were around 5 mm. The method can therefore be a valuable measurement tool for clinical evaluation, e.g., to measure compensatory movements of the trunk.
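
A small sketch of how such a sticker-based systematic error could be quantified, with synthetic positions standing in for the real recordings and an assumed 5 mm bias:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.uniform(size=(500, 8, 3))            # sticker positions per frame [m]
# Estimates with an assumed 5 mm bias along x plus 2 mm random noise.
est = ref + np.array([0.005, 0.0, 0.0]) + rng.normal(scale=0.002, size=ref.shape)

bias = (est - ref).mean(axis=0)                # average error vector per sticker
systematic = np.linalg.norm(bias, axis=1)      # systematic tracking error [m]
print((systematic * 1000).round(1), "mm")      # ~5 mm per sticker
```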

Capturing vertical profiles of the atmosphere and measuring wind conditions can be of significant value for weather forecasting and pollution monitoring; however, collecting such data is limited by current approaches using balloon-based radiosondes and expensive ground-based sensors. Multirotor vehicles are significantly affected by local wind conditions and, due to their under-actuated nature, their response to the flow is visible in changes in orientation. From these changes in orientation, wind speed and direction can be estimated accurately with no additional sensors. In this work, we expand on and improve this method of wind speed and direction estimation and incorporate corrections for climbing flight to improve estimation during vertical profiling. These corrections were validated against sonic anemometer data before being used to gather vertical profiles of the wind conditions around Volcán de Fuego in Guatemala up to altitudes of 3000 m above ground level (AGL). Our results show that the improved model increases the accuracy of multirotor wind estimation during vertical profiling and that UAS can overcome some of the practical limitations of radiosondes in this application.
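
A heavily simplified version of tilt-based wind estimation is sketched below: in steady state the vehicle leans into the relative wind, a calibration curve maps tilt to wind speed, and climbing flight is handled here by a linear correction of the calibration gain. Both constants are assumptions, not the paper’s model.

```python
import numpy as np

def wind_from_attitude(roll, pitch, yaw, climb_rate=0.0, k0=6.5, k1=0.4):
    """Estimate horizontal wind from multirotor attitude (angles in radians).

    k0: hover calibration constant, vehicle-specific (assumed)
    k1: linear climb-rate correction of the calibration gain (assumed)
    """
    tilt = np.arccos(np.cos(roll) * np.cos(pitch))           # lean from vertical
    k = k0 + k1 * climb_rate                                 # climbing-flight correction
    speed = k * np.sqrt(np.tan(tilt))                        # quadratic-drag balance
    bearing = yaw + np.arctan2(np.tan(roll), np.tan(pitch))  # direction of the lean
    return speed, np.mod(bearing, 2 * np.pi)

print(wind_from_attitude(np.deg2rad(3.0), np.deg2rad(4.0), 0.0, climb_rate=2.0))
```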

HD-maps are one of the core components of the self-driving pipeline. Despite the efforts of many companies to develop a completely independent vehicle, many state-of-the-art solutions rely on high-definition maps of the environment for localization and navigation. Nevertheless, the creation of such maps can be complex and error-prone, or expensive if performed via ad-hoc surveys, so robust automated solutions are required. One fundamental component of a high-definition map is traffic lights. Traffic light detection is a well-known problem in the autonomous driving field, but the focus has always been on the light state rather than on the features of the light itself (i.e., shape, orientation, pictogram). This work presents a pipeline for traffic light HD-map creation, designed to provide an accurate georeferenced position and description of all traffic lights seen by a camera mounted on a surveying vehicle. Our algorithm considers consecutive detections of the same light and uses Kalman filtering techniques to provide a smoother and more precise position for each target. The pipeline has been validated for the detection and mapping tasks using the state-of-the-art DriveU Traffic Light Dataset. The results show that our model is robust even with noisy GPS data; moreover, for the detection task, our model can correctly identify even far-away targets that are not labeled in the original dataset.
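
Because each traffic light is a static landmark, the per-light filtering step can be as simple as a constant-position Kalman filter fused over consecutive georeferenced detections; the sketch below uses illustrative coordinates and noise magnitudes, not the paper’s exact formulation.

```python
import numpy as np

class StaticLandmarkKF:
    """Constant-position Kalman filter for one georeferenced traffic light."""

    def __init__(self, first_obs, obs_var=4.0):
        self.x = np.asarray(first_obs, dtype=float)  # (east, north) [m]
        self.P = np.eye(2) * obs_var                 # position covariance
        self.R = np.eye(2) * obs_var                 # detection noise covariance

    def update(self, z):
        # The landmark does not move, so the predict step is the identity.
        K = self.P @ np.linalg.inv(self.P + self.R)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

kf = StaticLandmarkKF([305.2, 118.9])
for z in ([304.1, 119.6], [305.8, 118.2], [305.0, 119.1]):
    est = kf.update(z)                               # consecutive noisy detections
print(est, np.diag(kf.P))                            # variance shrinks per detection
```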

Introduction: Video-based clinical rating plays an important role in assessing dystonia and monitoring the effect of treatment in dyskinetic cerebral palsy (CP). However, evaluation by clinicians is time-consuming, and the quality of rating depends on experience. The aim of the current study is to provide a proof-of-concept for a machine learning approach that automatically scores dystonia using 2D stick figures extracted from videos. Model performance was compared to human performance.

Methods: A total of 187 video sequences of 34 individuals with dyskinetic CP (8–23 years, all non-ambulatory) were filmed at rest during lying and supported sitting. Videos were scored by three raters according to the Dyskinesia Impairment Scale (DIS) for arm and leg dystonia (normalized scores ranging from 0–1). Pixel coordinates of the left and right wrist, elbow, shoulder, hip, knee, and ankle were extracted using DeepLabCut, an open-source toolbox that builds on a pose estimation algorithm. Within a subset, tracking accuracy was assessed for a pretrained human model and for models trained with an increasing number of manually labeled frames, using the mean absolute error (MAE) between DeepLabCut’s prediction of the body-point positions and the manual labels. Subsequently, movement and position features were calculated from the extracted body-point coordinates and fed into a Random Forest Regressor trained to predict the clinical scores. The performance of the model trained with data from one rater, evaluated by the MAE between model and rater, was compared to inter-rater accuracy.
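
The modeling step might be sketched as follows, with assumed feature choices and synthetic data standing in for the tracked sequences: movement and position features are computed from the body-point trajectories and fed to a Random Forest Regressor predicting the normalized DIS score.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features_from_keypoints(xy):
    """xy: (frames, joints, 2) pixel trajectories from DeepLabCut."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=2)  # (frames-1, joints)
    return np.concatenate([
        speed.mean(axis=0),      # mean movement speed per joint
        speed.std(axis=0),       # movement variability per joint
        xy.std(axis=(0, 2)),     # positional spread per joint
    ])

rng = np.random.default_rng(0)
X = np.stack([features_from_keypoints(rng.normal(size=(300, 12, 2)))
              for _ in range(40)])                 # 40 synthetic video sequences
y = rng.uniform(0.0, 1.0, size=40)                 # normalized DIS scores

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
mae = np.mean(np.abs(model.predict(X) - y))        # in practice, cross-validate
print(f"training MAE: {mae:.3f}")
```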

Results: A tracking accuracy of 4.5 pixels (approximately 1.5 cm) could be achieved by adding 15–20 manually labeled frames per video. The MAEs for the trained models were 0.21 ± 0.15 for arm dystonia and 0.14 ± 0.10 for leg dystonia (normalized DIS scores); the corresponding inter-rater MAEs were 0.21 ± 0.22 and 0.16 ± 0.20.

Conclusion: This proof-of-concept study shows the potential of using stick figures extracted from common videos in a machine learning approach to automatically assess dystonia. Sufficient tracking accuracy can be reached by manually labeling 15–20 frames per video. With a relatively small data set, it is possible to train a model that automatically assesses dystonia with a performance comparable to human scoring.

Introduction: This study was motivated by the development of a social robot capable of speaking in more than one language simultaneously. However, the negative effect of background noise on speech comprehension is well documented in previous work, and this deteriorating effect is stronger when the background noise has speech-like properties. Hence, the presence of speech as background noise in a simultaneously speaking bilingual robot can severely impair the speech comprehension of each person listening to the robot.

Methods: To improve speech comprehension and, consequently, user experience with the intended bilingual robot, the effect of time expansion on speech comprehension in a multi-talker speech scenario was investigated. Sentence recognition, speech comprehension, and subjective evaluation tasks were implemented in the study.
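
One way a time-expanded stimulus of this kind could be produced is sketched below: the speech rate is reduced while the pitch is preserved, and the pause durations are lengthened. The stretch factor, pause padding, and file name are illustrative assumptions.

```python
import numpy as np
import librosa

y, sr = librosa.load("sentence.wav", sr=None)         # hypothetical recording
slowed = librosa.effects.time_stretch(y, rate=0.75)   # 0.75x speed, pitch kept
pause = np.zeros(int(0.6 * sr))                       # 600 ms of added pause
stimulus = np.concatenate([slowed, pause])            # slower speech, longer gap
```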

Results: The obtained results suggest that a reduced speech rate, leading to an expansion of the speech in time, in addition to increased pause duration in both the target and background speech, can lead to statistically significant improvements in participants’ sentence recognition and speech comprehension. More interestingly, participants scored higher with the time-expanded multi-talker speech than with the standard-speed single-talker speech in both the speech comprehension and the sentence recognition tasks. However, this positive effect cannot be attributed merely to the time expansion, as we could not reproduce the same positive effect with time-expanded single-talker speech.

Discussion: The results obtained in this study suggest a facilitating effect of the presence of background speech in a simultaneously speaking bilingual robot, provided that both languages are presented in a time-expanded manner. The implications of such a simultaneously speaking robot are discussed.
