Frontiers in Robotics and AI


Deep-sea manganese nodules are abundant on the ocean floor, offer high exploitation potential and commercial value, and have become mineral resources that coastal countries compete to develop. The pipeline-lifting mining system is currently the most promising deep-sea mining system, and the deep-sea mining vehicle is its core equipment: mining quality and efficiency depend heavily on the vehicle. Based on the topographic and geomorphic characteristics of deep-sea manganese nodule fields on the ocean floor, this paper proposes a new deep-sea mining system built around an autonomous manganese nodule mining vehicle. A new mining method is proposed according to the seabed operating environment and functional requirements, and global coverage path planning for the autonomous mining vehicle is carried out on the basis of this method. An arc round-trip collection path planning method is put forward, and simulations verify that it effectively addresses the low traversal-collection efficiency and obstacle avoidance problems of mining vehicles.
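As a rough illustration of what an arc round-trip coverage pattern might look like, the following Python sketch generates straight collection passes joined by semicircular turning arcs. The field dimensions, pass spacing, and arc discretization are illustrative assumptions, not parameters from the paper.

```python
import math

# Hedged sketch of an arc round-trip coverage pattern: straight collection
# passes across the field, connected by semicircular turns to the next pass.
# All sizes and the function name are illustrative assumptions.

def arc_roundtrip_path(width=100.0, length=100.0, spacing=10.0, arc_pts=8):
    """Waypoints for straight collection passes joined by semicircular arcs."""
    path, y, direction = [], 0.0, 1
    while y <= length:
        x_start, x_end = (0.0, width) if direction > 0 else (width, 0.0)
        path.append((x_start, y))
        path.append((x_end, y))
        if y + spacing <= length:
            # semicircular turn of radius spacing/2 up to the next pass
            cx, cy, r = x_end, y + spacing / 2.0, spacing / 2.0
            for i in range(1, arc_pts):
                ang = -math.pi / 2.0 + math.pi * i / arc_pts
                if direction < 0:
                    ang = math.pi - ang          # mirror the turn on left side
                path.append((cx + r * math.cos(ang), cy + r * math.sin(ang)))
        y += spacing
        direction = -direction
    return path
```

A real planner would additionally deform or skip passes around detected obstacles; this sketch only covers the nominal traversal pattern.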

As the responses of chat dialogue systems have become more natural, the empathy skill of dialogue systems has become an important new issue. In text-based chat dialogue systems, empathy is not precisely defined, and it is unclear how to design utterances that improve the user’s impression of receiving empathy, since the main method used is to imitate utterances and dialogues that humans consider empathetic. In this study, we focus on the necessity of grasping the agent as an experienceable Other, considered the most important factor when empathy is performed by an agent, and propose an utterance design that directly conveys through text that the agent can experience and feel empathy. Our system has an experience database, including the system’s pseudo-experiences and feelings, with which it expresses empathetic feelings. The system then understands the user’s experiences and empathizes with the user on the basis of this database, in line with the dialogue content. Developing and evaluating several systems with different ways of conveying this rationale, we found that conveying the rationale as a hearsay experience improved the user’s impression of receiving empathy more than conveying it as the system’s own experience. Moreover, an exhaustive evaluation shows that our empathetic utterance design using hearsay experience is effective in improving the user’s impression of the system’s cognitive empathy.

Optical colonoscopy is the gold-standard procedure for detecting colorectal cancer, the fourth most common cancer in the United Kingdom. Up to 22%–28% of polyps can be missed during the procedure, a miss rate associated with interval cancer. A vision-based autonomous soft endorobot for colonoscopy could drastically improve the accuracy of the procedure by inspecting the colon more systematically with reduced discomfort. A three-dimensional understanding of the environment is essential for robot navigation and can also improve the adenoma detection rate. Monocular depth estimation with deep learning methods has progressed substantially, but collecting ground-truth depth maps remains a challenge, as no 3D camera can be fitted to a standard colonoscope. This work addresses the issue with a self-supervised monocular depth estimation model that learns depth directly from video sequences via view synthesis. In addition, our model accommodates the wide field-of-view cameras typically used in colonoscopy and handles specific challenges such as deformable surfaces, specular lighting, non-Lambertian surfaces, and high occlusion. We performed a qualitative analysis on a synthetic data set and a quantitative evaluation on colonoscopy training-model data and real colonoscopy videos, in near real time.

The development of soft robotic hand exoskeletons for rehabilitation has been well reported in the literature, with emphasis placed on the development of soft actuators for flexion and extension. Little attention has been paid to developing the glove interface and the attachment of actuators to the hand. As these hand exoskeletons are largely developed for people with impaired hand function, it can be tedious to help patients don and doff the glove, given that patients usually have stiff fingers exhibiting high muscle tone. To address this issue, a hybrid securing actuator was developed and powered pneumatically to allow for rapid securing and release of a body segment. As a proof of concept, the actuator was further adapted into a self-securing glove mechanism and assembled into a complete self-securing soft robotic hand exoskeleton with attached bidirectional actuators. Our validation tests show that the self-securing soft robotic hand exoskeleton can easily conform and secure onto the human hand and assist with manipulation tasks.

Introduction: Human-in-the-loop optimization algorithms have proven useful in optimizing complex interactive problems, such as the interaction between humans and robotic exoskeletons. Specifically, this methodology has been proven valid for reducing metabolic cost while wearing robotic exoskeletons. However, many prostheses and orthoses still consist of passive elements that require manual adjustments of settings.

Methods: In the present study, we investigated whether human-in-the-loop algorithms could guide faster manual adjustments in a procedure similar to fitting a prosthesis. Eight healthy participants wore a prosthesis simulator and walked on a treadmill at 0.8 m s−1 under 16 combinations of shoe heel height and pylon height. A human-in-the-loop optimization algorithm was used to find an optimal combination for reducing the loading rate on the limb contralateral to the prosthesis simulator. To evaluate the performance of the optimization algorithm, we used a convergence criterion, and we evaluated its accuracy by comparing its result against the optimum from a full sweep of all combinations.
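The search procedure described above can be sketched schematically. The following Python sketch runs a generic human-in-the-loop search over the 4 × 4 grid of heel-height and pylon-height settings with a simple stability-based convergence criterion; the greedy neighborhood search, the placeholder cost function, and all names are assumptions for illustration, since the abstract does not specify the optimizer used.

```python
import random

# Hypothetical sketch of a human-in-the-loop search over the 4x4 grid of
# (heel height, pylon height) settings. measure_loading_rate is a stand-in
# for the loading-rate measurement taken on the walking participant.
HEEL_HEIGHTS = [0, 1, 2, 3]      # discrete setting indices (assumed)
PYLON_HEIGHTS = [0, 1, 2, 3]

def measure_loading_rate(heel, pylon):
    """Placeholder for the contralateral-limb loading rate measured while
    the participant walks at 0.8 m/s with this setting (noisy by nature)."""
    return (heel - 1) ** 2 + (pylon - 2) ** 2 + random.gauss(0.0, 0.1)

def optimize(max_iters=16, patience=3):
    """Greedy neighborhood search with a simple convergence criterion:
    stop when the best setting is unchanged for `patience` iterations."""
    current = (random.choice(HEEL_HEIGHTS), random.choice(PYLON_HEIGHTS))
    best, best_cost = current, measure_loading_rate(*current)
    stable = 0
    for _ in range(max_iters):
        h, p = best
        # candidate settings adjacent to the current best (including itself)
        neighbors = [(h + dh, p + dp)
                     for dh in (-1, 0, 1) for dp in (-1, 0, 1)
                     if (h + dh) in HEEL_HEIGHTS and (p + dp) in PYLON_HEIGHTS]
        candidate = random.choice(neighbors)
        cost = measure_loading_rate(*candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
            stable = 0
        else:
            stable += 1
        if stable >= patience:
            break                # converged: best unchanged for a while
    return best, best_cost
```

The advantage over a full sweep is that the participant walks far fewer conditions when the search converges early, which is exactly the time saving the study measures.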

Results: In five out of the eight participants, the human-in-the-loop optimization reduced the time taken to find an optimal combination; in the remaining three participants, it either converged only by the last iteration or did not converge at all.

Discussion: Findings from this study show that the human-in-the-loop methodology could be helpful in tasks that require manually adjusting an assistive device, such as optimizing an unpowered prosthesis. However, further research is needed to achieve robust performance and evaluate applicability in persons with amputation wearing an actual prosthesis.

The demands of traditional industrial robotics differ significantly from those of space robotics. While industry requires robots that can perform repetitive tasks with precision and speed, the space environment needs robots that can cope with uncertainties, dynamics, and communication delays or interruptions, much as human astronauts do. These demands make space robotics a well-suited application for compliant robotics and behavior-based programming. Pose Target Wrench Limiting (PTWL) is a compliant behavior paradigm developed specifically to meet these demands. PTWL controls a robot by moving a virtual attractor to a target pose. The attractor applies virtual forces, based on stiffness and damping presets, to an underlying admittance controller. Guided by virtual forces, the robot follows the attractor until safety conditions are violated or success criteria are met. We tested PTWL on a variety of quasi-static tasks that may be useful for future space operations. Our results demonstrate that PTWL is an extremely powerful tool. It makes teleoperation easy and safe for a wide range of quasi-static tasks. It also facilitates the creation of semi-autonomous state machines that can reliably complete complex tasks with minimal human intervention.
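The attractor-plus-admittance idea can be illustrated in one dimension. The sketch below applies a virtual spring-damper force toward the attractor and aborts when a wrench limit is exceeded; the gains, the limit, and the function names are illustrative assumptions, not the PTWL implementation.

```python
# Minimal 1-DoF sketch of the attractor-plus-admittance idea behind PTWL.
# Gains, limits, and names are illustrative, not taken from the paper.

def ptwl_step(pose, vel, attractor, dt=0.01,
              stiffness=50.0, damping=14.0, mass=1.0, wrench_limit=20.0):
    """One control step: the virtual attractor exerts a spring-damper force
    on the admittance model; exceeding the wrench limit aborts the motion."""
    force = stiffness * (attractor - pose) - damping * vel
    if abs(force) > wrench_limit:
        return pose, 0.0, False          # safety condition violated: stop
    acc = force / mass                   # admittance: force -> motion
    vel = vel + acc * dt                 # semi-implicit Euler integration
    pose = pose + vel * dt
    return pose, vel, True

def run_to_target(pose, attractor, tol=1e-3, max_steps=20000):
    """Follow the attractor until the success or safety criterion fires."""
    vel = 0.0
    for _ in range(max_steps):
        pose, vel, ok = ptwl_step(pose, vel, attractor)
        if not ok:
            return pose, False           # wrench limit tripped
        if abs(attractor - pose) < tol and abs(vel) < tol:
            return pose, True            # success criterion met
    return pose, False
```

In the real paradigm the attractor itself is moved gradually toward the target, which keeps the virtual force below the limit for distant targets; here a distant attractor simply trips the limit, demonstrating the safety behavior.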

3D reconstruction of deformable objects in dynamic scenes forms the fundamental basis of many robotic applications. Existing mesh-based approaches compromise registration accuracy and lose important details due to interpolation and smoothing. Additionally, existing non-rigid registration techniques struggle with unindexed points and disconnected manifolds. We propose a novel non-rigid registration framework for raw, unstructured, deformable point clouds based purely on geometric features. The global non-rigid deformation of an object is formulated as an aggregation of locally rigid transformations. The concept of locality is embodied in soft patches described by geometrical properties based on the SHOT descriptor and its neighborhood. From the confidence scores of pairwise associations between soft patches of two scans (not necessarily consecutive), a similarity matrix is computed and serves as the seed to grow a correspondence graph, which leverages the rigidity terms defined in As-Rigid-As-Possible for pruning and optimization. Experiments on simulated and publicly available datasets demonstrate the capability of the proposed approach to cope with large deformations combined with numerous missing parts in the scanning process.

Introduction: Trunk-like continuum robots have wide applications in manipulation and locomotion. In particular, trunk-like soft arms exhibit high dexterity and adaptability, much like their counterparts in the natural world. However, owing to their continuum, soft bodies, their payload and spatial-movement performance is limited.

Methods: In this paper, we investigate the influence of key design parameters on robotic performance. We verify that a larger workspace, lateral stiffness, payload, and bending moment can be achieved by adjusting the hardness of the soft materials, the height of the module segments, and the arrayed radius of the actuators.

Results: In particular, a 55% increase in arrayed radius enhances the lateral stiffness by 25% and the bending moment by 55%. An 80% increase in segment height enlarges the elongation range by 112% and the bending range by 70%. Increments of around 200% and 150% in the segment’s lateral stiffness and payload forces, respectively, can be obtained by tuning the hardness of the soft materials. These relations enable design customization of trunk-like soft arms, in which the tapering structure ensures stability via the stocky base, with an impact reduction of 50% compared to the tip, and ensures dexterity via the long tip, with a bending range over 400% larger than that of the base.

Discussion: The complete methodology of the design concept, analytical models, simulation, and experiments is developed to offer comprehensive guidelines for trunk-like soft robotic design and enable high performance in robotic manipulation.

Humans are increasingly coming into direct physical contact with robots in the context of object handovers. The technical development of robots is progressing so that handovers can be better adapted to humans. An important criterion for successful handovers between robots and humans is the predictability of the robot for the human. The better humans can anticipate the robot’s actions, the better they can adapt to them and thus achieve smoother handovers. In the context of this work, it was investigated whether a highly adaptive transport method of the object, adapted to the human hand, leads to better handovers than a non-adaptive transport method with a predefined target position. To ensure robust handovers at high repetition rates, a Franka Panda robotic arm with a gripper equipped with an Intel RealSense camera and capacitive proximity sensors in the gripper was used. To investigate the handover behavior, a study was conducted with n = 40 subjects, each performing 40 handovers in four consecutive runs. The dependent variables examined are physical handover time, early handover intervention before the robot reaches its target position, and subjects’ subjective ratings. The adaptive transport method does not result in significantly higher mean physical handover times than the non-adaptive transport method. The non-adaptive transport method does not lead to a significantly earlier handover intervention in the course of the runs than the adaptive transport method. Trust in the robot and the perception of safety are rated significantly lower for the adaptive transport method than for the non-adaptive transport method. The physical handover time decreases significantly for both transport methods within the first two runs. For both transport methods, the percentage of handovers with a physical handover time between 0.1 and 0.2 s increases sharply, while the percentage of handovers with a physical handover time of >0.5 s decreases sharply. 
The results can be explained by theories of motor learning. From the experience of this study, an increased understanding of motor learning and adaptation in the context of human-robot interaction can be of great benefit for further technical development in robotics and for the industrial use of robots.

Introduction: In Interactive Task Learning (ITL), an agent learns a new task through natural interaction with a human instructor. Behavior Trees (BTs) offer a reactive, modular, and interpretable way of encoding task descriptions but have so far seen little application in robotic ITL settings. Most existing approaches that learn a BT from human demonstrations require the user to specify each action step by step or do not allow adapting a learned BT without repeating the entire teaching process from scratch.

Method: We propose a new framework to learn a BT directly from only a few human task demonstrations recorded as RGB-D video streams. We automatically extract continuous pre- and post-conditions for BT action nodes from visual features and use a Backchaining approach to build a reactive BT. In a user study on how non-experts provide and vary demonstrations, we identify three common failure cases of a BT learned from potentially imperfect initial human demonstrations. We offer a way to interactively resolve these failure cases by refining the existing BT through interaction with a user over a web interface. Specifically, failure cases or unknown states are detected automatically during the execution of a learned BT, and the initial BT is adjusted or extended according to the provided user input.
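The Backchaining construction mentioned above can be sketched as follows, assuming a hypothetical action table for a trash disposal task; the node encoding and action names are invented for illustration and are not the paper's representation.

```python
# Illustrative backchaining sketch: starting from a goal condition, the
# action whose post-condition matches the goal is wrapped in a Fallback so
# the tree only executes the action when the condition does not yet hold.
# The node encoding and the action table below are made up for this example.

def fallback(*children):  return ("Fallback", list(children))
def sequence(*children):  return ("Sequence", list(children))
def condition(name):      return ("Condition", name)
def action(name):         return ("Action", name)

# action -> (pre-conditions, post-condition), e.g. extracted from demos
ACTIONS = {
    "drop_in_bin": (["holding_trash", "at_bin"], "trash_disposed"),
    "move_to_bin": (["holding_trash"], "at_bin"),
    "pick_trash":  ([], "holding_trash"),
}

def backchain(goal):
    """Build a reactive BT that achieves `goal` by recursively expanding
    the pre-conditions of the action whose post-condition is `goal`."""
    for act, (pres, post) in ACTIONS.items():
        if post == goal:
            sub = [backchain(p) for p in pres] + [action(act)]
            return fallback(condition(goal), sequence(*sub))
    return condition(goal)   # nothing achieves it: leave as a plain check
```

Because every action sits behind its post-condition in a Fallback, the resulting tree is reactive: if a condition already holds (or is re-established by the world), the corresponding subtree is skipped on the next tick.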

Evaluation and results: We evaluate our approach on a robotic trash disposal task with 20 human participants and demonstrate that our method is capable of learning reactive BTs from only a few human demonstrations and interactively resolving possible failure cases at runtime.

A high degree of freedom (DOF) benefits manipulators by enabling various postures when reaching a target. Using a tendon-driven system with an underactuated structure can provide such manipulators with flexibility and weight reduction. The design and control of such a composite system are challenging owing to its complicated architecture and modeling difficulties. In our previous study, we developed a tendon-driven, high-DOF underactuated manipulator inspired by an ostrich neck, referred to as the Robostrich arm. This study focuses on the control problems and simulation development of such a tendon-driven, high-DOF underactuated manipulator. We propose a curriculum-based reinforcement-learning approach. Inspired by human learning, which progresses from simple to complex tasks, the Robostrich arm acquires manipulation abilities through step-by-step reinforcement learning, ranging from simple position-control tasks to practical application tasks. In addition, an approach was developed to simulate tendon-driven manipulation with a complicated structure. The results show that the Robostrich arm can continuously reach various targets while simultaneously maintaining its tip at the desired orientation, even when mounted on a mobile platform in the presence of perturbation. These results show that our system achieves flexible manipulation even when locomotion introduces vibrations.
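The curriculum idea, progressing from simple to complex tasks, might be organized as in the following schematic sketch. The task list paraphrases the abstract, and the training stub stands in for the actual reinforcement-learning update, which is not specified here.

```python
# Schematic curriculum loop in the spirit described above: the policy is
# trained on progressively harder tasks and advances only once it meets a
# success threshold. train_one_epoch is a stub for the real RL update.

CURRICULUM = [
    "hold_posture",               # simplest: position control
    "reach_fixed_target",
    "reach_moving_target",
    "reach_with_tip_orientation",
    "reach_on_mobile_platform",   # hardest: with locomotion perturbations
]

def train_one_epoch(task, skill):
    """Stub: returns an updated skill level and the measured success rate.
    A real implementation would run rollouts and a policy-gradient update."""
    skill = skill + 0.2           # pretend the policy improves each epoch
    return skill, min(skill, 1.0)

def run_curriculum(threshold=0.8, max_epochs_per_task=50):
    """Advance through the curriculum, logging epochs needed per task."""
    skill, log = 0.0, []
    for task in CURRICULUM:
        for epoch in range(max_epochs_per_task):
            skill, success = train_one_epoch(task, skill)
            if success >= threshold:
                log.append((task, epoch + 1))
                break
    return log
```

Because skill carries over between stages in this toy version, later tasks pass quickly; in a real curriculum the point is analogous: the policy enters each harder task already competent at the easier ones.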

The variability in the shapes and sizes of objects presents a significant challenge for two-finger robotic grippers when it comes to manipulating them. Drawing on the chemistry of vitrimers (a new class of polymer materials with dynamic covalent bonds that allow them to reversibly change their mechanical properties under specific conditions), we present two designs of 3D-printed shape memory polymer-based shape-adaptive fingertips (SMP-SAF). The fingertips have the two main properties needed for effective grasping. First, they can adapt their shape to different objects. Second, they exhibit variable rigidity, locking and retaining the new shape without any continuous external triggering system. Our two design strategies are: 1) a curved part, suitable for grasping delicate and fragile objects, in which mode the SMP-SAFs are straightened by the force of the parallel gripper prior to gripping and adapted to the object by shape memory activation; and 2) a straight part that takes on the form of objects through contact force with them, which is better suited to gripping hard bodies and provides a more straightforward shape-programming process. The SMP-SAFs can be programmed by heating them above the glass transition temperature (54°C) via the Joule effect of an integrated electrically conductive wire or with a heat gun, reshaping them by external forces (without human intervention), and then fixing the new shape upon cooling. As the shape-programming process is time-consuming, this technique suits adaptive sorting lines where the variety of objects changes not from grasp to grasp but from batch to batch.

Over the years, efforts in bioinspired soft robotics have led to mobile systems that emulate features of natural animal locomotion, including combining mechanisms from multiple organisms to further improve movement. In this work, we seek to improve locomotion in soft, amphibious robots by combining two independent mechanisms: a sea star locomotion gait and gecko adhesion. Specifically, we present a sea star-inspired robot with a gecko-inspired adhesive surface that is able to crawl on a variety of surfaces. It is composed of soft, stretchable elastomer and has five limbs powered by pneumatic actuation. The gecko-inspired adhesion provides additional grip on wet and dry surfaces, enabling the robot to climb 25° slopes and statically hold onto 51° slopes.

In recent years, various service robots have been deployed in stores as recommendation systems. Previous studies have sought to increase the influence of these robots by enhancing their social acceptance and trust. However, when such service robots recommend a product to customers in real environments, the effect on the customers is influenced not only by the robot itself but also by the social influence of surrounding people, such as store clerks. Leveraging the social influence of the clerks may therefore increase the influence of the robots on the customers. Hence, we compared the influence of robots with and without collaborative customer service between the robots and clerks in two bakery stores. The experimental results showed that collaborative customer service increased the purchase rate of the recommended bread and improved customers’ impressions of the robot and of the store experience. Because the results also showed that the workload required for the clerks to collaborate with the robot was low, this study suggests that stores with service robots may achieve high effectiveness by introducing collaborative customer service.

Individuals who have suffered neurotrauma such as a stroke or brachial plexus injury often experience reduced limb functionality. Soft robotic exoskeletons have been successful in assisting rehabilitative treatment and improving activities of daily life, but restoring dexterity for tasks such as playing musical instruments has proven challenging. This research presents a soft robotic hand exoskeleton coupled with machine learning algorithms to aid in relearning how to play the piano by ‘feeling’ the difference between correct and incorrect versions of the same song. The exoskeleton features piezoresistive sensor arrays with 16 taxels integrated into each fingertip. The hand exoskeleton was created as a single unit, with polyvinyl alcohol (PVA) used as a stent and later dissolved to construct the internal pressure chambers for the five individually actuated digits. Ten variations of a song were produced: one correct and nine containing rhythmic errors. To classify these song variations, Random Forest (RF), K-Nearest Neighbor (KNN), and Artificial Neural Network (ANN) algorithms were trained with data from the 80 taxels combined from the tactile sensors in the fingertips. The differences between correct and incorrect versions of the song were classified both with the exoskeleton operating independently and while it was worn by a person. Results demonstrated that the ANN algorithm had the highest classification accuracy: 97.13% ± 2.00% with the human subject and 94.60% ± 1.26% without. These findings highlight the potential of the smart exoskeleton to aid disabled individuals in relearning dexterous tasks like playing musical instruments.
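To illustrate how frames of taxel data can be classified, here is a minimal pure-Python nearest-neighbor sketch over 80-dimensional feature vectors, in the spirit of the KNN baseline mentioned above. The synthetic data generator is an assumption for illustration; the real classifiers were trained on recorded tactile data.

```python
import random
from collections import Counter

# Toy k-nearest-neighbour classifier over 80-dimensional taxel frames,
# mirroring the KNN baseline described above. The synthetic sample
# generator is illustrative: each song variant shifts the pressure mean.

def make_sample(variant, n_taxels=80):
    """Synthetic taxel frame for a given song variant (assumed model)."""
    return [random.gauss(variant, 0.3) for _ in range(n_taxels)]

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    samples, using squared Euclidean distance over all 80 taxels."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With 80 dimensions, even small per-taxel differences between variants accumulate into large distance gaps, which is one reason tactile arrays of this size can separate rhythmically different renditions well.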

Navigation in forest environments is a challenging and open problem in the area of field robotics. Rovers in forest environments are required to infer the traversability of a priori unknown terrains, comprising a number of different types of compliant and rigid obstacles, under varying lighting and weather conditions. The challenges are further compounded for inexpensive small-sized (portable) rovers. While such rovers may be useful for collaboratively monitoring large tracts of forests as a swarm, with low environmental impact, their small size affords them only a low viewpoint of their proximal terrain. Moreover, their limited view may frequently be partially occluded by compliant obstacles in close proximity, such as shrubs and tall grass. Perhaps consequently, most studies on off-road navigation typically use large-sized rovers equipped with expensive exteroceptive navigation sensors. We design a low-cost navigation system tailored for small-sized forest rovers. For navigation, a lightweight convolutional neural network is used to predict depth images from RGB input images from a low-viewpoint monocular camera. Subsequently, a simple coarse-grained navigation algorithm aggregates the predicted depth information to steer our mobile platform towards open traversable areas in the forest while avoiding obstacles. In this study, the steering commands output by our navigation algorithm direct an operator pushing the mobile platform. Our navigation algorithm has been extensively tested in high-fidelity forest simulations and in field trials. Using no more than a 16 × 16 pixel depth prediction image from a 32 × 32 pixel RGB image, our algorithm running on a Raspberry Pi was able to successfully navigate a total of over 750 m of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, small ditches and mounds, and standing trees, under five different weather conditions and four different times of day.
Furthermore, our algorithm exhibits robustness to changes in the mobile platform’s camera pitch angle, motion blur, low lighting at dusk, and high-contrast lighting conditions.
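A coarse-grained steering rule of the kind described might look like the following sketch, which splits a 16 × 16 depth prediction into left/centre/right sectors and steers toward the deepest one. The sector layout, the stop threshold, and the function name are illustrative assumptions, not the paper's algorithm.

```python
# Coarse-grained steering sketch in the spirit of the approach above: the
# 16x16 predicted depth image is split into left/centre/right sectors and
# the platform is steered toward the sector with the greatest mean depth.
# Thresholds and sector boundaries are illustrative assumptions.

STOP_DISTANCE = 0.5   # metres; if every sector is closer, stop (assumed)

def steer(depth16x16):
    """Return 'left', 'straight', 'right', or 'stop' given a 16x16 depth
    map as a list of 16 rows of 16 depth values in metres."""
    def sector_mean(c0, c1):
        vals = [row[c] for row in depth16x16 for c in range(c0, c1)]
        return sum(vals) / len(vals)

    left = sector_mean(0, 5)
    centre = sector_mean(5, 11)
    right = sector_mean(11, 16)
    best = max(left, centre, right)
    if best < STOP_DISTANCE:
        return "stop"                    # no open direction ahead
    if best == centre:
        return "straight"                # prefer going straight when open
    return "left" if left > right else "right"
```

Aggregating the depth map into a handful of sector means is what keeps such a rule cheap enough for a Raspberry Pi while still avoiding large obstacles.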