Frontiers in Robotics and AI

The use of socially assistive robots in autism therapies has increased in recent years. This novel therapeutic tool allows the specialist to track improvement in socially assistive tasks for autistic children, who hypothetically prefer object-based over human interactions. Such tools also allow new information to be collected for the early diagnosis of neurodevelopmental disabilities. This work presents the integration of an output-feedback adaptive controller for trajectory tracking and energetic autonomy of a mobile socially assistive robot for autism spectrum disorder under an event-driven control scheme. The proposed implementation integrates facial expression and emotion recognition algorithms to detect the emotions and identities of users (providing robustness to the algorithm, since it automatically generates the missing input parameters needed to complete the recognition) and to trigger a set of adequate trajectories. The algorithmic implementation for the proposed socially assistive robot is presented and implemented in the Linux-based Robot Operating System. The optimization of the energy consumption of the proposal is considered the main contribution of this work, as it will allow therapists to extend and adapt sessions with autistic children. An experiment validating the energy optimization of the proposed event-driven control scheme is presented.
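
The abstract does not detail the event-driven scheme, so the following is only a minimal sketch of the event-triggered idea it alludes to: the tracking controller is recomputed and a new command transmitted only when the tracking error crosses a threshold, which is what saves energy relative to periodic updates. The function names (controller, send_command) and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def event_triggered_step(x, x_ref, u_prev, controller, send_command,
                         threshold=0.05):
    """Recompute and transmit the control command only when the tracking
    error exceeds a threshold, saving computation and actuation energy."""
    error = np.linalg.norm(np.asarray(x_ref) - np.asarray(x))
    if error > threshold:
        u = controller(x, x_ref)      # full output-feedback update
        send_command(u)               # actuate only on events
        return u
    return u_prev                     # otherwise hold the last command
```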

Introduction: Backchannels, i.e., short interjections by an interlocutor to indicate attention, understanding or agreement regarding utterances by another conversation participant, are fundamental in human-human interaction. A lack of backchannels, or backchannels with unexpected timing or formulation, may influence the conversation negatively, as misinterpretations regarding attention, understanding or agreement may occur. However, several studies over the years have shown that there may be cultural differences in how backchannels are provided and perceived, and that these differences may affect intercultural conversations. Culturally aware robots must hence be endowed with the capability to detect and adapt to the way these conversational markers are used across different cultures. Traditionally, culture has been defined in terms of nationality, but this is increasingly considered a stereotypic simplification. We therefore investigate several socio-cultural factors, such as the participants’ gender, age, first language, extroversion and familiarity with robots, that may be relevant for the perception of backchannels.

Methods: We first cover existing research on cultural influence on backchannel formulation and perception in human-human interaction and on backchannel implementation in Human-Robot Interaction. We then present an experiment on second language spoken practice, in which we investigate how backchannels from the social robot Furhat influence the interaction (investigated through speaking time ratios and ethnomethodology and multimodal conversation analysis) and the impression of the robot (measured by post-session ratings). The experiment, conducted in a triad word-game setting, focuses on whether activity-adaptive robot backchannels may redistribute the participants’ speaking time ratio, and/or whether the participants’ assessment of the robot is influenced by the backchannel strategy. The goal is to explore how robot backchannels should be adapted to different language learners to encourage their participation while being perceived as socio-culturally appropriate.

Results: We find that a strategy that displays more backchannels towards a less active speaker may substantially decrease the difference in speaking time between the two speakers, that different socio-cultural groups respond differently to the robot’s backchannel strategy and that they also perceive the robot differently after the session.

Discussion: We conclude that the robot may need different backchanneling strategies towards speakers from different socio-cultural groups in order to encourage them to speak and have a positive perception of the robot.
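
As a rough illustration of the activity-adaptive strategy described above, the sketch below picks the less active participant as the backchannel target from accumulated speaking times; the data structure and the simple minimum-ratio rule are assumptions, not the authors' implementation.

```python
def choose_backchannel_target(speaking_time):
    """Direct more backchannels towards the currently less active speaker.
    speaking_time: dict mapping speaker id -> accumulated speaking time [s]."""
    total = sum(speaking_time.values())
    if total == 0:
        return None
    ratios = {spk: t / total for spk, t in speaking_time.items()}
    # Target the speaker with the lowest speaking-time ratio.
    return min(ratios, key=ratios.get)

# Example: speaker B has spoken less, so backchannels are directed to B.
print(choose_backchannel_target({"A": 95.0, "B": 42.0}))  # -> "B"
```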

The Vulcano challenge is a new and innovative robotic challenge for legged robots in a physical and simulated scenario of a volcanic eruption. In this scenario, robots must climb a volcano’s escarpment and collect data from areas with high temperatures and toxic gases. This paper presents the main idea behind this challenge, with a detailed description of the simulated and physical scenario of the volcano ramp, the rules proposed for the competition, and the conception of a robot prototype, Vulcano, used in the competition. Finally, it discusses the performance of the teams invited to participate in the challenge in the context of the Azorean Robotics Open, Azoresbot 2022. This first edition of the challenge provided insights into what participants found exciting and positive and what they found less so.

This paper presents the singularity analysis of 3-DOF planar parallel continuum robots (PCR) with three identical legs. Each of the legs contains two passive conventional rigid 1-DOF joints and one actuated planar continuum link, which bends with a constant curvature. All possible PCR architectures featuring such legs are enumerated, and the kinematic velocity equations are provided for each of them. Afterwards, a singularity analysis is conducted based on the obtained Jacobian matrices, providing a geometrical understanding of singularity occurrences. It is shown that while the loci and occurrences of type II singularities are mostly analogous to those of conventional parallel kinematic mechanisms (PKM), type I singularity occurrences for the PCR studied in this work are quite different from conventional PKM and less geometrically intuitive. The study provided in this paper can promote further investigations on planar parallel continuum robots, such as structural design and control.
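
For readers unfamiliar with the type I / type II classification used above, the sketch below applies the standard criterion to the two Jacobians of a velocity equation of the form A·ẋ = B·q̇; the matrices here are placeholders, not the actual PCR Jacobians derived in the paper.

```python
import numpy as np

def singularity_type(A, B, tol=1e-9):
    """Type I: B (the matrix multiplying joint rates) loses rank.
       Type II: A (the matrix multiplying task-space rates) loses rank."""
    type_I = abs(np.linalg.det(B)) < tol
    type_II = abs(np.linalg.det(A)) < tol
    return type_I, type_II

# Placeholder matrices for illustration only.
A = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [0.3, 0.1, 1.0]])
B = np.diag([0.8, 0.0, 1.2])    # a zero diagonal entry -> type I singularity
print(singularity_type(A, B))    # (True, False)
```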

Positioning and navigation are relevant topics in the field of robotics, due to their multiple applications in real-world scenarios, ranging from autonomous driving to harsh-environment exploration. Although localization in outdoor environments is generally achieved using a Global Navigation Satellite System (GNSS) receiver, GNSS-denied environments are typical of many situations, especially indoor settings. Autonomous robots are commonly equipped with multiple sensors, including laser rangefinders, IMUs, and odometers, which can be used for mapping and localization, overcoming the need for GNSS data. In the literature, almost no information can be found on the positioning accuracy and precision of 6-Degrees-of-Freedom Light Detection and Ranging (LiDAR) localization systems, especially for real-world scenarios. In this paper, we present a short review of state-of-the-art LiDAR localization methods in GNSS-denied environments, highlighting their advantages and disadvantages. Then, we evaluate two state-of-the-art Simultaneous Localization and Mapping (SLAM) systems that can also perform localization, one of which was implemented by us. We benchmark these two algorithms on a manually collected dataset, with the goal of providing insight into their attainable precision in real-world scenarios. In particular, we present two experimental campaigns, one indoor and one outdoor, to measure the precision of these algorithms. After creating a map for each of the two environments using the SLAM part of the systems, we compute a custom localization error for multiple, different trajectories. Results show that the two algorithms are comparable in terms of precision, with similar mean translation and rotation errors of about 0.01 m and 0.6°, respectively. Nevertheless, the system implemented by us has the advantage of being modular, customizable and able to achieve real-time performance.
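
The paper's custom localization error is not specified in the abstract; the sketch below shows only the standard way of computing mean translation and rotation errors between estimated and ground-truth poses, which is presumably close in spirit to the reported figures (0.01 m, 0.6°).

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation error [m] and rotation error [deg] between two 4x4 poses."""
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # Clamp for numerical safety before arccos.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

def mean_trajectory_errors(poses_est, poses_gt):
    """Average the per-pose errors over an aligned trajectory pair."""
    errs = [pose_errors(Te, Tg) for Te, Tg in zip(poses_est, poses_gt)]
    t_errs, r_errs = zip(*errs)
    return float(np.mean(t_errs)), float(np.mean(r_errs))
```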

Robots that work in unstructured scenarios are often subjected to collisions with the environment or external agents. Accordingly, researchers have recently focused on designing robust and resilient systems. This work presents a framework that quantitatively assesses the balancing resilience of self-stabilizing robots subjected to external perturbations. Our proposed framework consists of a set of novel Performance Indicators (PIs), experimental protocols for the reliable and repeatable measurement of the PIs, and a novel testbed to execute the protocols. The design of the testbed, the control structure, the post-processing software, and all the documentation related to the performance indicators and protocols are provided as open-source material so that other institutions can replicate the system. As an example of the application of our method, we report a set of experimental tests on a two-wheeled humanoid robot, with an experimental campaign of more than 1100 tests. The investigation demonstrates high repeatability and efficacy in executing reliable and precise perturbations.

Novel technologies, fabrication methods, controllers and computational methods are rapidly advancing the capabilities of soft robotics. This is creating the need for design techniques and methodologies that are suited to the multi-disciplinary nature of soft robotics and that provide a formalized and scientific approach to design. In this paper, we formalize the scientific questions driving soft robotic design: what motivates the design of soft robots, and what are the fundamental challenges when designing soft robots? We review current methods and approaches to soft robot design, including bio-inspired design, computational design and human-driven design, and highlight the implications that each design method has on the resulting soft robotic systems. To conclude, we provide an analysis of emerging methods that could assist robot design, and we review some of the necessary technologies that may enable these approaches.

Using human tools can significantly benefit robots in many application domains. Such an ability would allow robots to solve problems that they would be unable to solve without tools. However, robot tool use is a challenging task. Tool use was initially considered to be the ability that distinguishes human beings from other animals. We identify three skills required for robot tool use: perception, manipulation, and high-level cognition skills. While both general manipulation tasks and tool use tasks require the same level of perception accuracy, there are unique manipulation and cognition challenges in robot tool use. In this survey, we first define robot tool use. The definition highlights the skills required for robot tool use, and these skills coincide with an affordance model that defines a three-way relation between actions, objects, and effects. We also compile a taxonomy of robot tool use with insights from the animal tool use literature. Our definition and taxonomy lay a theoretical foundation for future robot tool use studies and also serve as practical guidelines for robot tool use applications. We first categorize tool use based on the context of the task: the contexts are highly similar for the same task (e.g., cutting) in non-causal tool use, while the contexts for causal tool use are diverse. We further categorize causal tool use, based on the task complexity suggested in animal tool use studies, into single-manipulation tool use and multiple-manipulation tool use. Single-manipulation tool use is sub-categorized based on tool features and prior experiences of tool use; this type of tool use may be considered the building block of causal tool use. Multiple-manipulation tool use combines these building blocks in different ways, and the different combinations categorize multiple-manipulation tool use. Moreover, we identify the different skills required by each sub-type in the taxonomy. We then review previous studies on robot tool use based on the taxonomy and describe how the relations are learned in these studies. We conclude with a discussion of the current applications of robot tool use and open questions for future robot tool use research.

Due to the complexity of autonomous mobile robot requirements and rapid technological change, developing safe and efficient path tracking is becoming increasingly complex and requires intensive knowledge and information; the demand for advanced algorithms has therefore rapidly increased. Analyzing unstructured gain data has been a growing interest among researchers, yielding valuable information in many fields such as path planning and motion control. Among those, motion control is a vital part of fast, secure operation. Yet, current approaches face problems in managing unstructured gain data and producing accurate local planning due to the lack of formulation of the knowledge on gain optimization. Therefore, this research aims to design a new gain optimization approach to assist researchers in identifying the values of the gains, together with a qualitative comparative study of up-to-date controllers. Gain optimization in this context means identifying near-optimal values of the gains and the associated processes. For this, a domain controller will be developed based on the attributes of the Fuzzy-PID parameters. The development of the Fuzzy Logic Controller requires information on the PID controller parameters, which will be fuzzified and defuzzified based on the resulting 49 fuzzy rules. Furthermore, this fuzzy inference will be optimized for usability by a genetic algorithm (GA). It is expected that the domain controller will have a positive impact on the path-planning position and angular PID controller algorithms and meet the demands of autonomy.
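
To make the Fuzzy-PID idea concrete, here is a heavily simplified, hypothetical sketch: the tracking error and its rate are fuzzified, a rule table yields a gain correction, and a genetic algorithm would tune the membership spans and rule outputs. The paper uses a 7 × 7 = 49-rule base; only a 3 × 3 table is shown here, and all names and values are illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with shoulder handling at the range edges."""
    if x <= a:
        return 1.0 if a == b else 0.0
    if x >= c:
        return 1.0 if b == c else 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_kp_correction(error, d_error, span=1.0):
    """Infer a Kp correction from fuzzified error and error rate (N/Z/P sets)."""
    sets = [(-span, -span, 0.0), (-span, 0.0, span), (0.0, span, span)]
    mu_e = [tri(error, *s) for s in sets]
    mu_de = [tri(d_error, *s) for s in sets]
    # Rule table: output singleton for each (error set, error-rate set) pair.
    table = np.array([[-1.0, -0.5, 0.0],
                      [-0.5,  0.0, 0.5],
                      [ 0.0,  0.5, 1.0]])
    w = np.outer(mu_e, mu_de)                       # rule firing strengths
    return float(np.sum(w * table) / np.sum(w)) if np.sum(w) > 0 else 0.0

# A GA would then evolve `span`, the rule table and the scaling factors against
# a tracking-error cost; that outer optimization loop is omitted here.
print(fuzzy_kp_correction(0.5, -0.2))
```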

Reinforcement Learning has been shown to have great potential for robotics. It has demonstrated the capability to solve complex manipulation and locomotion tasks, even by learning end-to-end policies that operate directly on visual input, removing the need for custom perception systems. However, for practical robotics applications, its poor sample efficiency and the need for huge amounts of resources, data, and computation time can be an insurmountable obstacle. One potential solution to this sample efficiency issue is the use of simulated environments. However, the discrepancy in visual and physical characteristics between reality and simulation, namely the sim-to-real gap, often significantly reduces the real-world performance of policies trained within a simulator. In this work we propose a sim-to-real technique that trains a Soft Actor-Critic agent together with a decoupled feature extractor and a latent-space dynamics model. The decoupled nature of the method allows the sim-to-real transfer of the feature extractor and the control policy to be performed independently, and the dynamics model acts as a constraint on the latent representation when finetuning the feature extractor on real-world data. We show how this architecture allows the transfer of a trained agent from simulation to reality without retraining or finetuning the control policy, using real-world data only to adapt the feature extractor. By avoiding training the control policy in the real domain, we overcome the need to apply Reinforcement Learning to real-world data; instead, we focus only on the unsupervised training of the feature extractor, considerably reducing real-world experience collection requirements. We evaluate the method on sim-to-sim and sim-to-real transfer of a policy for table-top robotic object pushing. We demonstrate how the method is capable of adapting to considerable variations in the task observations, such as changes in point of view, colors, and lighting, while substantially reducing the training time with respect to policies trained directly in the real domain.
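
A rough PyTorch-style sketch of the decoupled architecture described above: an image encoder and a latent-space dynamics model, where only the encoder is finetuned on real data while the frozen dynamics model constrains the latent representation. Layer sizes and the adaptation loss are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps RGB observations to a latent vector (the feature extractor)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim))
    def forward(self, obs):
        return self.net(obs)

class LatentDynamics(nn.Module):
    """Predicts the next latent state from the current latent and the action."""
    def __init__(self, latent_dim=64, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def adaptation_loss(encoder, dynamics, obs, action, next_obs):
    """Finetune only the encoder on real data: the frozen dynamics model acts
    as a constraint keeping real-world latents consistent with simulation."""
    z, z_next = encoder(obs), encoder(next_obs)
    return nn.functional.mse_loss(dynamics(z, action), z_next)
```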

Most motion planners generate trajectories as low-level control inputs, such as joint torques or interpolations of joint angles, which cannot be deployed directly in most industrial robot control systems. Some industrial robot systems provide interfaces to execute planned trajectories via an additional control loop with low-level control inputs. However, there is a geometric and temporal deviation between the executed and the planned motions due to the inaccurate estimation of the inaccessible robot dynamic behavior and controller parameters in the planning phase. This deviation can lead to collisions or dangerous situations, especially in heavy-duty industrial robot applications where high-speed and long-distance motions are widely used. When deploying the planned robot motion, the actual robot motion needs to be iteratively checked and adjusted to avoid collisions caused by the deviation between the planned and the executed motions. This process takes a lot of time and engineering effort. Therefore, the state-of-the-art methods no longer meet the needs of today’s agile manufacturing for robotic systems that should rapidly plan and deploy new robot motions for different tasks. We present a data-driven motion planning approach that uses a neural network to simultaneously learn high-level motion commands and robot dynamics from acquired realistic collision-free trajectories. The trained neural network can generate trajectories in the form of high-level commands, such as Point-to-Point and Linear motion commands, which can be executed directly by the robot control system. Results obtained in various experimental scenarios show that the geometric and temporal deviation between the executed and the planned motions is significantly reduced by the proposed approach, even without access to the “black box” parameters of the robot. Furthermore, the proposed approach can generate new collision-free trajectories up to 10 times faster than benchmark motion planners.
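
Purely as a speculative illustration of what "generating high-level commands" might look like, the sketch below shows a network head that emits a command type (PTP or LIN), a target pose and a speed override per step; the paper does not describe its network at this level of detail, so all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class CommandHead(nn.Module):
    """Emits one high-level robot command per step from a feature vector."""
    COMMANDS = ["PTP", "LIN"]
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cmd_type = nn.Linear(feat_dim, len(self.COMMANDS))  # which command
        self.target = nn.Linear(feat_dim, 6)   # target pose (xyz + orientation)
        self.speed = nn.Linear(feat_dim, 1)    # speed override in [0, 1]
    def forward(self, feat):
        return (self.cmd_type(feat).softmax(-1),
                self.target(feat),
                self.speed(feat).sigmoid())
```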

The safe and reliable operation of autonomous agricultural vehicles requires an advanced environment perception system. An important component of perception systems is vision-based algorithms for detecting objects and other structures in the fields. This paper presents an ensemble method for combining the outputs of three scene understanding tasks in the agricultural context: semantic segmentation, object detection and anomaly detection. The proposed framework uses an object detector to detect seven agriculture-specific classes, while the anomaly detector detects all other objects that do not belong to these classes. In addition, the segmentation map of the field is used to provide additional information on whether the objects are located inside or outside the field area. The detections of the different algorithms are combined at inference time, and the proposed ensemble method is independent of the underlying algorithms. The results show that combining object detection with anomaly detection can increase the number of detected objects in agricultural scene images.
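
A minimal sketch of the late-fusion step described above: detections from the object detector and the anomaly detector are merged at inference time, and each one is tagged as inside or outside the field using the segmentation map. The data structures, class index and 50% overlap rule are assumptions, not the paper's exact scheme.

```python
import numpy as np

FIELD_CLASS_ID = 1  # hypothetical label index of the "field" class

def inside_field(box, seg_map, threshold=0.5):
    """A box counts as 'inside the field' if most pixels under it are field pixels."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = seg_map[y1:y2, x1:x2]
    return patch.size > 0 and np.mean(patch == FIELD_CLASS_ID) >= threshold

def ensemble(object_dets, anomaly_dets, seg_map):
    """object_dets / anomaly_dets: lists of dicts with 'box' and 'label' keys."""
    merged = []
    for det in object_dets + anomaly_dets:
        det = dict(det)
        det["in_field"] = inside_field(det["box"], seg_map)
        merged.append(det)
    return merged
```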

Recent technological advances in micro-robotics have demonstrated their immense potential for biomedical applications. Emerging micro-robots have versatile sensing systems, flexible locomotion and dexterous manipulation capabilities that can significantly contribute to the healthcare system. Despite the appreciated and tangible benefits of medical micro-robotics, many challenges still remain. Here, we review the major challenges, current trends and significant achievements for developing versatile and intelligent micro-robotics with a focus on applications in early diagnosis and therapeutic interventions. We also consider some recent emerging micro-robotic technologies that employ synthetic biology to support a new generation of living micro-robots. We expect to inspire future development of micro-robots toward clinical translation by identifying the roadblocks that need to be overcome.

Background: Studies aiming to objectively quantify movement disorders during upper limb tasks using wearable sensors have recently increased, but there is wide variety in the described measurement and analysis methods, hampering the standardization of methods in research and clinics. Therefore, the primary objective of this review was to provide an overview of the sensor set-up and type, included tasks, sensor features and methods used to quantify movement disorders during upper limb tasks in multiple pathological populations. The secondary objective was to identify the most sensitive sensor features for the detection and quantification of movement disorders and to describe the clinical application of the proposed methods.

Methods: A literature search using Scopus, Web of Science, and PubMed was performed. Articles needed to meet the following criteria: 1) participants were adults/children with a neurological disease; 2) at least one sensor was placed on the upper limb for the evaluation of movement disorders during upper limb tasks; 3) comparisons were made between groups with/without movement disorders, between sensor features before/after an intervention, or between sensor features and a clinical scale for the assessment of the movement disorder; and 4) outcome measures included sensor features from acceleration/angular velocity signals.

Results: A total of 101 articles were included, of which 56 researched Parkinson’s Disease. The wrist(s), hand(s) and index finger(s) were the most popular sensor locations. The most frequent tasks were finger tapping, wrist pro-/supination, keeping the arms extended in front of the body, and finger-to-nose. The most frequently calculated sensor features were the mean, standard deviation, root-mean-square, range, skewness and kurtosis/entropy of acceleration and/or angular velocity, in combination with the dominant frequencies/power of the acceleration signals. Examples of clinical applications were the automatization of a clinical scale and discrimination between a patient and control group or between different patient groups.

Conclusion: The current overview can support clinicians and researchers in selecting the most sensitive, pathology-dependent sensor features and methodologies for the detection and quantification of upper limb movement disorders and for the objective evaluation of treatment effects. Insights from Parkinson’s Disease studies can accelerate the development of wearable sensor protocols in the remaining pathologies, provided that sufficient attention is paid to the standardisation of protocols, tasks, feasibility and data analysis methods.
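
For orientation, the sketch below computes the sensor features most often reported in the reviewed studies (mean, standard deviation, RMS, range, skewness, kurtosis, spectral entropy and dominant frequency/power) for a single acceleration or angular-velocity channel; it is a generic implementation, not one taken from any included article.

```python
import numpy as np
from scipy import stats, signal

def sensor_features(x, fs):
    """x: 1-D signal (e.g. acceleration in one axis), fs: sampling rate [Hz]."""
    feats = {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": np.sqrt(np.mean(np.square(x))),
        "range": np.ptp(x),
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
    }
    # Dominant frequency, its power, and spectral entropy from the periodogram.
    freqs, power = signal.periodogram(x, fs=fs)
    feats["dominant_freq"] = freqs[np.argmax(power)]
    feats["dominant_power"] = np.max(power)
    p = power / (np.sum(power) + 1e-12)
    feats["spectral_entropy"] = float(-np.sum(p * np.log2(p + 1e-12)))
    return feats
```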

Although a large number of publicly available 3D datasets exist, they generally suffer from drawbacks such as a small number of data samples and class imbalance. Data augmentation is a set of techniques that aim to increase the size of datasets and remedy such defects, and hence to overcome the problem of overfitting when training a classifier. In this paper, we propose a method to create new synthesized data by converting complete meshes into occluded 3D point clouds similar to those in real-world datasets. The proposed method involves two main steps. The first is hidden surface removal (HSR), in which the parts of object surfaces occluded from the viewpoint of a camera are deleted; a low-complexity method based on occupancy grids is proposed to implement HSR. The second step is random sampling of the detected visible surfaces. The proposed two-step method is applied to a subset of the ModelNet40 dataset to create a new dataset, which is then used to train and test three different deep-learning classifiers (VoxNet, PointNet, and 3DmFV). We study classifier performance as a function of the camera elevation angle. We also conduct another experiment to show how the newly generated data samples can improve classification performance when they are combined with the original data during the training process. Simulation results show that the proposed method enables us to create a large number of new data samples requiring little storage. Results also show that the performance of the classifiers is highly dependent on the elevation angle of the camera; in particular, there may exist angles at which performance degrades significantly. Furthermore, data augmentation using our created data improves the performance of the classifiers not only when they are tested on the original data, but also on real data.
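
The following sketch illustrates the two steps described above under simplifying assumptions: hidden surface removal is approximated with a per-pixel depth buffer for a camera looking along +Z (rather than the paper's occupancy-grid formulation), followed by random sampling of the visible points.

```python
import numpy as np

def visible_points(points, cam_pos, pixel_size=0.01):
    """Keep, for each projected pixel, only the point closest to the camera."""
    rel = points - cam_pos                      # camera at cam_pos, looking at +Z
    depth = rel[:, 2]
    front = depth > 0                           # discard points behind the camera
    rel, depth = rel[front], depth[front]
    u = np.round(rel[:, 0] / (depth * pixel_size)).astype(int)
    v = np.round(rel[:, 1] / (depth * pixel_size)).astype(int)
    best = {}
    for i, key in enumerate(zip(u, v)):
        if key not in best or depth[i] < depth[best[key]]:
            best[key] = i                       # nearest point wins the pixel
    return rel[list(best.values())] + cam_pos

def sample_visible(points, cam_pos, n_samples=1024, seed=0):
    """Randomly sample the surviving visible surface points."""
    vis = visible_points(points, cam_pos)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(vis), size=min(n_samples, len(vis)), replace=False)
    return vis[idx]
```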

In robotic-assisted partial nephrectomy, surgeons remove part of a kidney, often due to the presence of a mass. A drop-in ultrasound probe paired with a surgical robot is deployed to execute multiple swipes over the kidney surface to localise the mass and define the margins of resection. This sub-task is challenging and must be performed by a highly skilled surgeon. Automating it may reduce the surgeon’s cognitive load and improve patient outcomes. The eventual goal of this work is to autonomously move the ultrasound probe on the surface of the kidney, taking advantage of the Pneumatically Attachable Flexible (PAF) rail system, a soft robotic device used for organ scanning and repositioning. First, we integrate a shape-sensing optical fibre into the PAF rail system to evaluate the curvature of target organs in robotic-assisted laparoscopic surgery. Then, we investigate the impact of the PAF rail’s material stiffness on the curvature sensing accuracy, considering that soft targets are present in the surgical field. We found the overall curvature sensing accuracy to be between 1.44% and 7.27% over the range of curvatures present in adult kidneys. Finally, we use shape sensing to plan the trajectory of the da Vinci surgical robot paired with a drop-in ultrasound probe and autonomously generate an ultrasound scan of a kidney phantom.

For effective human-robot collaboration, it is crucial for robots to understand requests from users while perceiving the three-dimensional space, and to ask reasonable follow-up questions when there are ambiguities. When comprehending the users’ object descriptions in such requests, existing studies have focused on this challenge for limited object categories that can be detected or localized with existing object detection and localization modules. Further, they have mostly focused on comprehending object descriptions using flat RGB images without considering the depth dimension. In the wild, however, it is impossible to limit the object categories that can be encountered during the interaction, and 3-dimensional space perception that includes depth information is fundamental for successful task completion. To understand described objects and resolve ambiguities in the wild, for the first time, we suggest a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without placing the previous constraints on object categories and natural language instructions. We further improve our method to identify the described objects while considering the depth dimension. We evaluate our method on varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that it performs better in scenes with ambiguous objects that cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data are critical to disambiguate the objects, and across our evaluation dataset, which contains objects that can be specified with and without the depth dimension.

This paper proposes an adaptive robust Jacobian-based controller for task-space position-tracking control of robotic manipulators. The structure of the controller is built on a traditional Proportional-Integral-Derivative (PID) framework. An additional neural control signal is then synthesized under a non-linear learning law to compensate for internal and external disturbances in the robot dynamics. To provide strong robustness, a new gain-learning feature is integrated to automatically adjust the PID gains for various working conditions. Stability of the closed-loop system is guaranteed by Lyapunov constraints. The effectiveness of the proposed controller is verified through intensive simulations.
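
As a rough sketch of the control structure described above, the code below combines a task-space PID law, mapped to joint torques through the Jacobian transpose, with a learned compensation term; the online gain adaptation and the Lyapunov-based learning law are only hinted at in comments, and all names are illustrative rather than the authors' formulation.

```python
import numpy as np

def task_space_pid_step(e, e_int, e_dot, jacobian, gains, u_neural):
    """e, e_dot: task-space position error and its rate; e_int: accumulated error.
    gains: dict with diagonal Kp, Ki, Kd matrices; u_neural: output of the neural
    compensator (assumed already computed). Returns joint torques."""
    Kp, Ki, Kd = gains["Kp"], gains["Ki"], gains["Kd"]
    f_task = Kp @ e + Ki @ e_int + Kd @ e_dot + u_neural   # task-space force
    return jacobian.T @ f_task                              # map to joint torques

# In the adaptive scheme, the Kp/Ki/Kd entries would themselves be updated
# online (gain learning driven by the tracking error) rather than kept fixed.
```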

Often in swarm robotics, it is assumed that all robots in the swarm behave the same and have a similar (if not the same) error model. However, in reality this is not the case, and this lack of uniformity in the error model, and in other operations, can lead to various emergent behaviors. This paper considers the impact of the error model and compares a swarm in which all robots operate with the same error model (uniform error) against one in which each robot has a different error model (thus introducing error diversity). Experiments are presented in the context of a foraging task. Simulation and physical experimental results show the importance of the error model and of diversity in achieving the expected swarm behavior.
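
A small sketch of the experimental contrast described above, assuming Gaussian odometry noise: with a uniform error model every robot shares the same noise level, while the diverse variant draws a different level per robot. Parameter values are arbitrary and not taken from the paper.

```python
import numpy as np

def make_error_models(n_robots, uniform=True, seed=0):
    """Return one odometry-noise standard deviation per robot."""
    rng = np.random.default_rng(seed)
    if uniform:
        return np.full(n_robots, 0.05)               # identical error model
    return rng.uniform(0.01, 0.15, size=n_robots)    # diverse error models

def noisy_step(position, velocity, dt, sigma, rng):
    """Apply one motion step with robot-specific Gaussian odometry noise."""
    return position + velocity * dt + rng.normal(0.0, sigma, size=position.shape)
```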
