Frontiers in Robotics and AI

This article describes an approach for multiagent search planning for a team of agents. A team of UAVs tasked to conduct a forest fire search was selected as the use case, although the solutions are applicable to other domains. Fixed-path methods for multiagent search (e.g., parallel track, expanding square) produce predictable and structured paths, but their reliance on predefined geometric patterns limits their adaptability and leads to poor management of agents’ resources. Pseudorandom methods, on the other hand, allow agents to generate well-separated paths, but they can be computationally expensive and can result in poor coordination of agents’ activities. We present a hybrid solution that exploits the complementary strengths of fixed-pattern and pseudorandom methods, i.e., an approach that is resource-efficient, predictable, adaptable, and scalable. Our approach builds on the Delaunay triangulation of systematically selected waypoints to allocate agents to explore specific regions while optimizing a given set of mission constraints. We implement our approach in a simulation environment and compare the performance of the proposed algorithm with fixed-path and pseudorandom baselines. The results demonstrate the resource efficiency, predictability, scalability, and adaptability of the generated paths. We also demonstrate the proposed algorithm’s application on real UAVs.
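
To make the waypoint idea concrete, here is a minimal Python sketch of one way a Delaunay triangulation can partition a search region among UAVs. It is an illustration only, not the authors’ algorithm: the rectangular region, the grid spacing, and the left-to-right band allocation are all assumptions made for the example.

```python
# Illustrative sketch: Delaunay-based waypoint allocation for a UAV team.
import numpy as np
from scipy.spatial import Delaunay

def make_waypoints(width, height, spacing):
    """Systematically sample candidate waypoints on a grid."""
    xs = np.arange(0.0, width + 1e-9, spacing)
    ys = np.arange(0.0, height + 1e-9, spacing)
    return np.array([(x, y) for x in xs for y in ys])

def allocate_regions(points, n_agents):
    """Triangulate waypoints and split triangles among agents."""
    tri = Delaunay(points)
    centroids = points[tri.simplices].mean(axis=1)  # one target per triangle
    # Sort centroids left-to-right so each agent sweeps a contiguous band.
    order = np.argsort(centroids[:, 0])
    return np.array_split(centroids[order], n_agents)

if __name__ == "__main__":
    pts = make_waypoints(width=100.0, height=60.0, spacing=10.0)
    for i, path in enumerate(allocate_regions(pts, n_agents=3)):
        print(f"UAV {i}: {len(path)} waypoints, first -> {path[0]}")
```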

Industrial contexts typically characterized by highly unstructured environments, where task sequences are difficult to hard-code and unforeseen events occur daily (e.g., oil and gas, energy generation, aeronautics), cannot rely completely upon automation to substitute for human dexterity and judgment. Robots operating in these conditions share the requirement of deploying appropriate behaviours in highly dynamic and unpredictable environments, while aiming to achieve more natural human-robot interaction and broad acceptability in providing useful and efficient services. The goal of this paper is to introduce a deliberative framework able to acquire, reuse and instantiate a collection of behaviours that extend the autonomy periods of mobile robotic platforms, with a focus on maintenance, repair and overhaul applications. Behaviour trees are employed to design the robotic system’s high-level deliberative intelligence, which integrates: social behaviours, aiming to capture the human’s emotional state and intention; the ability to either perform or support various process tasks; and seamless planning and execution of human-robot shared work plans. In particular, the modularity, reactiveness and deliberation capacity that characterize the behaviour tree formalism are leveraged to interpret the human’s health and cognitive load in order to support her/him, and to complete a shared mission through collaboration or complete take-over. By enabling mobile robotic platforms to take over risky jobs which the human cannot, should not or does not want to perform, the proposed framework bears high potential to significantly improve safety, productivity and efficiency in harsh working environments.
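
As an illustration of the formalism this framework builds on, the following is a minimal behaviour-tree sketch, not the paper’s implementation: a fallback node attempts the shared work plan when the human’s estimated cognitive load permits, and otherwise escalates to an autonomous take-over. Node names and the load threshold are invented for the example.

```python
# Minimal behaviour-tree sketch: Sequence and Fallback composites over
# illustrative condition/action leaves. Not the paper's framework.
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self, ctx):
        for child in self.children:        # succeed only if all children do
            status = child.tick(ctx)
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    def __init__(self, children): self.children = children
    def tick(self, ctx):
        for child in self.children:        # try children until one succeeds
            status = child.tick(ctx)
            if status != FAILURE:
                return status
        return FAILURE

class Condition:
    def __init__(self, name, pred): self.name, self.pred = name, pred
    def tick(self, ctx): return SUCCESS if self.pred(ctx) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, ctx):
        print(f"executing: {self.name}")
        return SUCCESS

tree = Fallback([
    Sequence([Condition("human fit to work", lambda c: c["cognitive_load"] < 0.7),
              Action("execute shared work plan")]),
    Action("take over task autonomously"),  # escalation branch
])
print(tree.tick({"cognitive_load": 0.9}))
```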

To enable the application of humanoid robots outside of laboratory environments, the biped must meet certain requirements. These include, in particular, coping with dynamic motions such as climbing stairs or ramps or walking over irregular terrain. Sit-to-stand transitions also belong to this category. Beyond their practical applications, such as getting out of a vehicle or standing up after sitting at a table, these motions also provide benefits in terms of performance assessment; they have therefore long been used in sports medicine and geriatrics to assess human performance. Here, we develop optimized sit-to-stand trajectories using optimal control, which are characterized by their dynamic and humanlike nature. We implement these motions on the humanoid robot REEM-C. Based on the obtained sensor data, we present a unified benchmarking procedure based on two different experimental protocols. These protocols are characterized by their increasing level of difficulty for quantifying different aspects of lower limb performance. We report performance results obtained by REEM-C using two categories of indicators: primary, scenario-specific indicators that assess overall performance (chair height and ankle-to-chair distance) and subsidiary, general indicators that further describe performance. The latter provide a more detailed analysis of the applied motion and are based on metrics such as the angular momentum, zero moment point, capture point, or foot placement estimator. In the process, we identify performance deficiencies of the robot based on the collected data. Thus, this work is an important step toward a unified quantification of bipedal performance in the execution of humanlike and dynamically demanding motions.
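
One of the subsidiary indicators named above, the capture point, has a standard closed form under the linear inverted pendulum model, xi = x + v * sqrt(z/g). The sketch below computes it from the center-of-mass state; the numbers are made-up placeholders, not REEM-C sensor data.

```python
# Capture point under the linear inverted pendulum (LIP) model.
import math

G = 9.81  # gravitational acceleration, m/s^2

def capture_point(x_com, v_com, z_com):
    """2D capture point from CoM position (m), velocity (m/s), and height (m)."""
    omega_inv = math.sqrt(z_com / G)  # inverse of the LIP natural frequency
    return tuple(x + v * omega_inv for x, v in zip(x_com, v_com))

# Example: CoM at (0.02, 0.00) m moving forward at 0.3 m/s, height 0.85 m.
print(capture_point((0.02, 0.0), (0.3, 0.0), 0.85))
```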

The formal description and verification of networks of cooperative, interacting agents is made difficult by the interplay of several different behavioral patterns, communication models, and scalability issues. In this paper, we explore the functionalities and the expressiveness of a general-purpose process algebraic framework for the specification and model-checking-based analysis of collective and cooperative systems. The proposed syntactic and semantic schemes are general enough to be adapted with small modifications to heterogeneous application domains, e.g., crowdsourcing systems, trustworthy networks, and distributed ledger technologies.

A cyber-physical system (CPS) is a system with integrated computational and physical abilities. Deriving the notion of a cyber-physical collective (CPC) from a social view of CPS, we consider the nodes of a CPS as individuals (agents) that interact to overcome their individual limits within the collective. When CPC agents are able to move in their environment, the CPC is considered a mobile CPC (MCPC). The interactions of the agents give rise to a phenomenon collectively generated by the agents of the CPC that we call a collective product. This phenomenon is not recorded as “a whole” anywhere in the CPC, because each agent has only a partial view of its environment. This paper presents COPE (COllective Product Exploitation), an approach that allows one MCPC to exploit the collective product of another. The approach is based on the deployment of meta-agents in both systems. A meta-agent is an agent that is external to an MCPC but is associated with one of its agents. Each meta-agent is able to monitor the agent with which it is associated and can fake its perceptions to influence its behavior. The meta-agents deployed in the system from which the collective product emerges provide indicators related to this product. Using these indicators, the meta-agents deployed in the other system can act on its agents to adapt the global dynamics of the whole system. The proposed coupling approach is evaluated in a “fire detection and control” use case, in which a system of UAVs uses the collective product of a network of sensors to monitor a fire.
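
The meta-agent mechanism can be sketched in a few lines: an external wrapper monitors one agent, exposes an indicator related to the collective product, and can overwrite (“fake”) a perception to steer behaviour. The class names, the temperature perception, and the decision rule below are illustrative assumptions, not COPE’s actual interfaces.

```python
# Conceptual sketch of the meta-agent idea; names are illustrative only.
class Agent:
    def __init__(self):
        self.perception = {"temperature": 20.0}
    def step(self):
        if self.perception["temperature"] > 50.0:
            return "move toward fire"
        return "patrol"

class MetaAgent:
    """External to the MCPC; monitors one agent and injects perceptions."""
    def __init__(self, agent):
        self.agent = agent
    def indicator(self):
        # Report a value related to the collective product.
        return self.agent.perception["temperature"]
    def fake_perception(self, key, value):
        # Influence behaviour without modifying the agent's internals.
        self.agent.perception[key] = value

uav = Agent()
meta = MetaAgent(uav)
meta.fake_perception("temperature", 80.0)  # indicator from the sensor network
print(uav.step())  # -> "move toward fire"
```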

Recent experiments indicate that pretraining end-to-end reinforcement learning neural networks on general tasks can speed up the training process for specific robotic applications. However, it remains an open question whether these networks form general feature extractors and a hierarchical organization that can be reused as in, for example, convolutional neural networks. In this study, we analyze the intrinsic neuron activation of networks trained for target reaching with robot manipulators of increasing joint number, and we analyze the distribution of individual neuron activations within the network. We introduce a pruning algorithm to increase network information density and depict correlations of neuron activation patterns. Finally, we search for projections of neuron activation among networks trained for robot kinematics of different complexity. We show that the input and output network layers exhibit more distinct neuron activation than the inner layers. Our pruning algorithm reduces the network size significantly and increases the distance between neuron activations while maintaining high performance in training and evaluation. Our results demonstrate that robots with small differences in joint number show higher layer-wise projection accuracy, whereas more distinct robot kinematics reveal dominant projections to the first layer.
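
As a rough illustration of activation-based pruning (in the spirit of, but not identical to, the paper’s algorithm), the sketch below drops neurons whose recorded activation pattern is nearly duplicated by an already kept neuron, which increases the distance between the remaining activation patterns. The correlation threshold is an arbitrary choice.

```python
# Prune neurons whose activation pattern duplicates a kept neuron's.
import numpy as np

def prune_redundant(activations, corr_threshold=0.95):
    """activations: (n_samples, n_neurons) recorded for one layer.
    Returns indices of neurons to keep."""
    corr = np.corrcoef(activations.T)  # neuron-by-neuron correlation matrix
    keep = []
    for j in range(corr.shape[0]):
        # Keep neuron j only if it is not too correlated with any kept one.
        if all(abs(corr[j, k]) < corr_threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 8))
acts[:, 3] = acts[:, 1] * 1.01        # plant a near-duplicate neuron
print(prune_redundant(acts))          # neuron 3 is pruned
```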

The fabrication and control of robot hands with biologically inspired structure remain challenging due to cost and complexity. In this paper, we explore how widely available FDM printers can be used to fabricate complex hand structures by leveraging compliant PLA flexures. In particular, we focus on the fabrication of fingers printed as a single piece with tunable compliance, a multi-degree-of-freedom thumb joint, and sensorized compliant fingertips. To address the challenge of control and actuation, we model the behavior of the flexure joints and propose a new control method: combinatorial actuation. This method combines a single continuously actuated tendon per finger with two shared “combinatorial” actuators that act across all fingers. We demonstrate that the fingertip workspace achieved with this method is comparable to that of fully actuated fingers while using significantly fewer independent actuators. The proposed approach of fabrication and combinatorial actuation provides a rapid and scalable method for designing and controlling complex manipulators.
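
A toy model of combinatorial actuation may help: each finger keeps one dedicated tendon command, while two shared actuators contribute to every finger, so the reachable per-finger configurations arise from the cross-product of the shared commands. The linear flexion model and all coefficients below are invented for illustration, not the paper’s hand model.

```python
# Toy model: one dedicated tendon per finger plus two shared actuators.
from itertools import product

# Per-finger dedicated tendon commands (normalized 0..1), chosen arbitrarily.
tendons = {"index": 1.0, "middle": 0.5, "ring": 0.2, "little": 0.0}

def finger_flexion(tendon, shared_a, shared_b):
    """Net flexion from one dedicated and two shared actuator inputs."""
    return 0.6 * tendon + 0.25 * shared_a + 0.15 * shared_b

# Enumerate the discrete command combinations of the two shared actuators:
# four global states modulate every finger simultaneously.
for a, b in product([0.0, 1.0], repeat=2):
    poses = {f: round(finger_flexion(t, a, b), 2) for f, t in tendons.items()}
    print(f"shared=({a:.0f},{b:.0f}) -> {poses}")
```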

Two sub-problems are typically identified for the replication of human finger motions on artificial hands: measuring the motions on the human side, and mapping the movements of the human hand (primary hand) onto the robotic hand (target hand). In this study, we focus on the second sub-problem. During human-to-robot hand mapping, ensuring natural motions and predictability for the operator is difficult, since it requires preserving both the Cartesian positions of the fingertips and the finger shapes given by the joint values. Several approaches have been presented to deal with this problem, which in general is still unresolved. In this work, we exploit the spatial information available in-hand, in particular the thumb-finger relative position, to combine joint and Cartesian mappings. In this way, it is possible to perform a large range of both volar grasps (where preserving finger shapes is more important) and precision grips (where preserving fingertip positions is more important) during primary-to-target hand mapping, even in the presence of kinematic dissimilarities. We report on two specific realizations of this approach: a distance-based hybrid mapping, in which the transition between joint and Cartesian mapping is driven by the fingers approaching the current thumb fingertip position, and a workspace-based hybrid mapping, in which the joint-Cartesian transition is defined over the areas of the workspace in which thumb and fingertips can come into contact. The general mapping approach is presented, and the two realizations are evaluated in simulation on multiple robotic hand kinematic structures (both industrial grippers and anthropomorphic hands, with varying numbers of fingers).
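
A minimal sketch of the distance-based variant follows: two candidate fingertip goals, one implied by the joint-space mapping and one by the Cartesian mapping, are blended with a weight that grows as the human fingertip approaches the thumb tip. The linear blending law and the 5 cm transition distance are assumptions for illustration; the paper defines its own transition.

```python
# Distance-based blend of joint-space and Cartesian fingertip goals.
import numpy as np

def hybrid_fingertip_goal(p_joint_map, p_cart_map, p_finger, p_thumb, d0=0.05):
    """p_joint_map: fingertip position implied by the joint-space mapping;
    p_cart_map: fingertip position from the Cartesian mapping;
    p_finger, p_thumb: current human fingertip / thumb-tip positions (m)."""
    d = np.linalg.norm(np.asarray(p_finger) - np.asarray(p_thumb))
    alpha = np.clip(1.0 - d / d0, 0.0, 1.0)  # Cartesian weight near the thumb
    return alpha * np.asarray(p_cart_map) + (1.0 - alpha) * np.asarray(p_joint_map)

# Far from the thumb (volar grasp): the joint mapping dominates.
print(hybrid_fingertip_goal([0.09, 0.02, 0.0], [0.10, 0.0, 0.0],
                            p_finger=[0.12, 0.0, 0.0], p_thumb=[0.0, 0.0, 0.0]))
```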

Disabled people are often involved in robotics research as potential users of technologies which address specific needs. However, their more generalised lived expertise is not usually included when planning the overall design trajectory of robots for health and social care purposes. This risks losing valuable insight into the lived experience of disabled people, and impinges on their right to be involved in the shaping of their future care. This project draws upon the expertise of an interdisciplinary team to explore methodologies for involving people with disabilities in the early design of care robots in a way that enables incorporation of their broader values, experiences and expectations. We developed a comparative set of focus group workshops using Community Philosophy, LEGO® Serious Play® and Design Thinking to explore how people with a range of different physical impairments used these techniques to envision a “useful robot”. The outputs were then workshopped with a group of roboticists and designers to explore how they interacted with the thematic map produced. Through this process, we aimed to understand how people living with disability think robots might improve their lives and consider new ways of bringing the fullness of lived experience into earlier stages of robot design. Secondary aims were to assess whether and how co-creative methodologies might produce actionable information for designers (or why not), and to deepen the exchange of social scientific and technical knowledge about feasible trajectories for robotics in health-social care. Our analysis indicated that using these methods in a sequential process of workshops with disabled people and incorporating engineers and other stakeholders at the Design Thinking stage could potentially produce technologically actionable results to inform follow-on proposals.

In the study of collective animal behavior, researchers usually rely on gathering empirical data from animals in the wild. While the data gathered can be highly accurate, researchers have limited control over both the test environment and the agents under study. Further aggravating the data gathering problem is the fact that empirical studies of animal groups typically involve a large number of conspecifics. In these groups, collective dynamics may occur over long periods of time interspersed with extremely rapid events such as collective evasive maneuvers following a predator’s attack. All these factors underscore the steep challenges faced by biologists seeking to uncover the fundamental mechanisms and functions of social organization in a given taxon. Here, we argue that beyond commonly used simulations, experiments with multi-robot systems offer a powerful toolkit to deepen our understanding of various forms of swarming and other social animal organizations. Indeed, the advances in multi-robot systems and swarm robotics over the past decade pave the way for the development of a new hybrid form of scientific investigation of social organization in biology. We believe that by fostering such interdisciplinary research, a feedback loop can be created where agent behaviors designed and tested in robotico can assist in identifying hypotheses worth being validated through the observation of animal collectives in nature. In turn, these observations can be used as a novel source of inspiration for even more innovative behaviors in engineered systems, thereby perpetuating the feedback loop.

This work describes the design of real-time dance-based interaction with a humanoid robot, where the robot seeks to promote physical activity in children by taking on multiple roles as a dance partner. It acts as a leader by initiating dances but can also act as a follower by mimicking a child’s dance movements. Dances in the leader role are produced by a sequence-to-sequence (S2S) Long Short-Term Memory (LSTM) network trained on children’s music videos taken from YouTube. In the follower mode, a music orchestration platform generates background music as the robot mimics the child’s poses. In doing so, we also incorporated the largely unexplored paradigm of learning-by-teaching by including multiple robot roles that allow the child to both learn from and teach the robot. Our work is among the first to implement a largely autonomous, real-time, full-body dance interaction with a bipedal humanoid robot that also explores the impact of the robot’s roles on child engagement. Importantly, we also incorporated into our design formal constructs taken from autism therapy, such as the least-to-most prompting hierarchy, reinforcement of positive behaviors, and a time delay for making behavioral observations. We implemented a multimodal child engagement model that encompasses both affective engagement (displayed through eye gaze focus and facial expressions) and task engagement (based on the level of physical activity) to classify child engagement states. We then conducted a virtual exploratory user study to evaluate the impact of mixed robot roles on user engagement and found no statistically significant difference in the children’s engagement between single-role and multiple-role interactions. While the children responded positively to both robot behaviors, they preferred the music-driven leader role over the movement-driven follower role, a result that can partly be attributed to the virtual nature of the study. Our findings support the utility of such a platform in promoting physical activity but indicate that further research is necessary to fully explore the impact of each robot role.
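
For flavor, here is a toy version of a multimodal engagement estimate that fuses affective cues (gaze, facial expression) with task engagement (physical activity), as the abstract describes. The weights, thresholds, and binary label are invented placeholders rather than the study’s actual model.

```python
# Toy multimodal engagement estimate; all coefficients are assumptions.
def engagement(gaze_on_robot, facial_valence, activity_level,
               w_affect=0.5, w_task=0.5):
    """All inputs normalized to [0, 1]; returns a label and a score."""
    affective = 0.6 * gaze_on_robot + 0.4 * facial_valence
    score = w_affect * affective + w_task * activity_level
    label = "engaged" if score >= 0.5 else "disengaged"
    return label, round(score, 2)

print(engagement(gaze_on_robot=0.8, facial_valence=0.7, activity_level=0.9))
```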

Tactile sensing for robotics is achieved through a variety of mechanisms, including magnetic, optical, and conductive-fluid sensing. Currently, fluid-based sensors strike the best balance between anthropomorphic size and shape and accuracy of tactile response measurement. However, this design is plagued by a low signal-to-noise ratio (SNR), as the fluid-based sensing mechanism “damps” the measurements in ways that are hard to model. To this end, we present a spatio-temporal gradient representation of the data obtained from fluid-based tactile sensors, inspired by neuromorphic principles of event-based sensing. We present a novel algorithm (GradTac) that converts discrete data points from spatial tactile sensors into spatio-temporal surfaces and tracks tactile contours across these surfaces. Processing the tactile data in the proposed spatio-temporal domain is robust, makes it less susceptible to the inherent noise of fluid-based sensors, and allows more accurate tracking of regions of touch compared to using the raw data. We successfully evaluate and demonstrate the efficacy of GradTac in numerous real-world experiments performed using the Shadow Dexterous Hand equipped with BioTac SP sensors. Specifically, we use it for tracking tactile input across the sensor’s surface, measuring relative forces, detecting linear and rotational slip, and edge tracking. We also release an accompanying task-agnostic dataset for the BioTac SP, which we hope will provide a resource for comparing and quantifying novel approaches, and motivate further research.
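
The core representation can be sketched as follows: treat the taxel readings as a surface over space and time, and use its gradients to follow the contact contour. The 1D taxel layout and the synthetic sliding contact below are simplifying assumptions (the BioTac SP has a 2D electrode layout), so this is a schematic of the idea rather than GradTac itself.

```python
# Spatio-temporal gradients of a (synthetic) tactile surface.
import numpy as np

def tactile_gradients(frames):
    """frames: (n_timesteps, n_taxels) raw readings.
    Returns spatial and temporal gradients of the tactile surface."""
    d_time = np.gradient(frames, axis=0)   # change of each taxel over time
    d_space = np.gradient(frames, axis=1)  # change across neighbouring taxels
    return d_space, d_time

# Synthetic sliding contact: a Gaussian bump moving across 24 taxels.
t = np.arange(50)[:, None]
x = np.arange(24)[None, :]
frames = np.exp(-((x - 4 - 0.3 * t) ** 2) / 8.0)
d_space, d_time = tactile_gradients(frames)
peak = frames.argmax(axis=1)               # tracked contact contour
print("contact moved from taxel", peak[0], "to taxel", peak[-1])
```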

For robots navigating using only a camera, illumination changes in indoor environments can cause re-localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach to create a map made of multiple variations of the same locations in different illumination conditions. The multi-session map can then be used at any hour of the day for improved re-localization capability. The approach presented is independent of the visual features used, and this is demonstrated by comparing re-localization performance between multi-session maps created using the RTAB-Map library with SURF, SIFT, BRIEF, BRISK, KAZE, DAISY, and SuperPoint visual features. The approach is tested on six mapping and six localization sessions recorded at 30 min intervals during sunset using a Google Tango phone in a real apartment.
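
Schematically, multi-session re-localization amounts to keeping one map per illumination condition and localizing against whichever session matches the current frame best. The toy sketch below reduces this to descriptor nearest-neighbour matching with a ratio test on synthetic descriptors; RTAB-Map’s actual pipeline (loop closure detection, geometric verification) is far richer.

```python
# Toy session selection by descriptor matching; not RTAB-Map's pipeline.
import numpy as np

def match_score(query_desc, map_desc, ratio=0.8):
    """Count matches passing a Lowe-style nearest-neighbour ratio test."""
    good = 0
    for d in query_desc:
        dists = np.sort(np.linalg.norm(map_desc - d, axis=1))
        if dists[0] < ratio * dists[1]:
            good += 1
    return good

def relocalize(query_desc, session_maps):
    scores = {name: match_score(query_desc, m) for name, m in session_maps.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(1)
dusk = rng.normal(size=(100, 32))                     # fake session descriptors
day = rng.normal(size=(100, 32))
query = dusk[:40] + 0.05 * rng.normal(size=(40, 32))  # frame taken at dusk
print(relocalize(query, {"day": day, "dusk": dusk}))
```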

Inchworm-style locomotion is one of the simplest gaits for mobile robots, enabling easy actuation, effective movement, and strong adaptability in nature. However, an agile inchworm-like robot that realizes versatile locomotion usually requires effective friction force manipulation, with a complicated actuation structure and control algorithm. In this study, we embody a friction force controller in the deformation of the robot body to realize bidirectional locomotion. Two kinds of differential friction forces are integrated into a beam-like soft robot body; along with the cyclical actuation of the robot body, two locomotion gaits with opposite directions can be generated and controlled by the deformation process of the robot body, that is, by dynamic gaits. Based on these dynamic gaits, two locomotion control schemes, amplitude-based control and frequency-based control, are proposed, analyzed, and validated with both theoretical simulations and prototype experiments. The soft inchworm crawler achieves versatile locomotion with a simple system configuration and minimalist actuation input. This work is an example of using soft structure vibrations for challenging robotic tasks.
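
The two control schemes can be caricatured in a few lines: a cyclical drive deforms the body, and either its amplitude or its frequency modulates how far the crawler advances per cycle. The linear speed model below is an invented placeholder, not the paper’s dynamics.

```python
# Caricature of amplitude- vs frequency-based gait control.
import math

def drive_signal(t, amplitude, frequency):
    """Cyclical body-deformation command at time t (s)."""
    return amplitude * math.sin(2.0 * math.pi * frequency * t)

def speed_estimate(amplitude, frequency, k=0.8):
    """Assumed model: forward speed ~ k * amplitude * frequency
    (one stride per actuation cycle)."""
    return k * amplitude * frequency

print("drive at t=0.1 s:", round(drive_signal(0.1, amplitude=1.0, frequency=2.0), 3))
# Amplitude-based control: fix the frequency, vary the amplitude.
for a in (0.5, 1.0, 1.5):
    print(f"A={a}: v ~ {speed_estimate(a, frequency=2.0):.2f}")
# Frequency-based control: fix the amplitude, vary the frequency.
for f in (1.0, 2.0, 4.0):
    print(f"f={f}: v ~ {speed_estimate(1.0, f):.2f}")
```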

We developed a novel framework for deep reinforcement learning (DRL) algorithms in task-constrained path generation problems for robotic manipulators, leveraging human-demonstrated trajectories. The main contribution of this article is the design of a reward function that can be used with generic reinforcement learning algorithms by utilizing Koopman operator theory to build a human intent model from the demonstrated trajectories. To ensure that the developed reward function produces the correct reward, the demonstrated trajectories are further used to create a trust domain within which the Koopman operator-based human intent prediction is considered reliable; outside this domain, the proposed algorithm asks for human feedback to obtain rewards. The designed reward function is incorporated into the deep Q-learning (DQN) framework, resulting in a modified DQN algorithm. The effectiveness of the proposed learning algorithm is demonstrated using a simulated robotic arm that learns paths for constrained end-effector motion while accounting for the safety of humans in the robot’s surroundings.
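
A compact sketch of the reward construction, under strong simplifying assumptions: fit a Koopman operator by least squares on demonstration snapshots (EDMD with identity observables), reward proximity to the one-step prediction inside a trust domain around the demonstrations, and defer to human feedback outside it. The trust radius and the fake trajectory are placeholders.

```python
# Least-squares Koopman fit plus a trust-domain reward; illustrative only.
import numpy as np

def fit_koopman(X, Y):
    """X, Y: (n_snapshots, n_dims), with Y the one-step successors of X.
    Least-squares solution of Y ~ X K."""
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return K

def reward(state, next_state, K, demos, trust_radius=0.5):
    if np.min(np.linalg.norm(demos - state, axis=1)) > trust_radius:
        return None  # outside the trust domain: defer to human feedback
    pred = state @ K  # Koopman one-step intent prediction
    return -float(np.linalg.norm(next_state - pred))

rng = np.random.default_rng(2)
demo = np.cumsum(0.1 * rng.normal(size=(100, 3)), axis=0)  # fake demonstration
K = fit_koopman(demo[:-1], demo[1:])
print(reward(demo[10], demo[11], K, demos=demo))
```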

Wearable robots are envisioned to amplify the independence of people with movement impairments by providing daily physical assistance. For portable, comfortable, and safe devices, soft pneumatic robots are emerging as a potential solution. However, due to their inherent complexities, including compliance and nonlinear mechanical behavior, feedback control for facilitating human–robot interaction remains a challenge. Herein, we present the design, fabrication, and control architecture of a soft wearable robot that assists in supination and pronation of the forearm. The robot integrates an antagonistic pair of pneumatic helical actuators to provide active pronation and supination torques. Our main contribution is a bio-inspired equilibrium-point control scheme that integrates proprioceptive feedback and exteroceptive input (e.g., the user’s muscle activation signals) directly with the on/off valve behavior of the soft pneumatic actuators. The proposed human–robot controller is directly inspired by the equilibrium-point hypothesis of motor control, which suggests that voluntary movements arise through shifts in the equilibrium state of the antagonistic muscle pair spanning a joint. We hypothesized that the proposed method would reduce the required effort during dynamic manipulation without affecting tracking error. To evaluate the method, we recruited seven pediatric participants with movement disorders to perform two dynamic interaction tasks with a haptic manipulandum. Each task required the participant to track a sinusoidal trajectory while the manipulandum behaved as either a spring-dominant or an inertia-dominant system. Our results reveal that the soft wearable robot, when active, reduced user effort by 14% on average. This work demonstrates a practical implementation of an equilibrium-point volitional controller for wearable robots and provides a foundational path toward versatile, low-cost, soft wearable robots.
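
The equilibrium-point idea can be sketched as follows: antagonistic muscle activations shift a desired equilibrium angle, and bang-bang valve commands drive the joint toward it. The activation-to-angle mapping, the deadband, and the valve interface below are illustrative assumptions, not the paper’s controller.

```python
# Equilibrium-point style on/off valve control; parameters are assumptions.
def equilibrium_angle(emg_pronator, emg_supinator, theta_max=1.2):
    """Map the antagonistic activation difference to a target angle (rad)."""
    return theta_max * (emg_pronator - emg_supinator)

def valve_commands(theta, theta_eq, deadband=0.05):
    """Bang-bang on/off valves for the pronation/supination actuator pair."""
    error = theta_eq - theta
    if error > deadband:
        return {"pronator_valve": "open", "supinator_valve": "closed"}
    if error < -deadband:
        return {"pronator_valve": "closed", "supinator_valve": "open"}
    return {"pronator_valve": "closed", "supinator_valve": "closed"}

theta_eq = equilibrium_angle(emg_pronator=0.6, emg_supinator=0.2)
print(valve_commands(theta=0.1, theta_eq=theta_eq))
```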
