Point cloud data provides three-dimensional (3D) measurement of the geometric details in the physical world, and its quality relies heavily on the machine vision system. In this paper, we explore the potential of a 3D scanner with high point throughput (15 million points per second), accuracy (up to 0.150 mm), and frame rate (up to 20 FPS) during static and dynamic measurements of the robot flange for direct hand-eye calibration and trajectory error tracking. With the availability of high-quality point cloud data, we can exploit the standardized geometric features on the robot flange for 3D measurement, which are directly accessible for hand-eye calibration problems. We also tested the proposed flange-based calibration method in a dynamic setting, capturing point cloud data at a high frame rate. We found that our proposed method works robustly even in dynamic environments, enabling versatile hand-eye calibration during motion. Furthermore, capturing high-quality point cloud data in real time opens new doors for the use of 3D scanners, enabling the detection of subtle anomalies in fine detail even along motion trajectories. Code and sample data for this calibration method are provided on GitHub (https://github.com/ancorasir/flange_handeye_calibration).
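As a hedged illustration of one common building block of such point-cloud-based hand-eye calibration (not necessarily the exact pipeline in the repository above), the rigid transform between flange feature points measured in the camera frame and the same points reported in the robot base frame can be recovered with the SVD-based Arun/Kabsch method; all names below are illustrative:

```python
# Minimal sketch: SVD-based rigid registration between flange feature points
# in the camera frame and the corresponding points in the robot base frame.
import numpy as np

def rigid_transform(P_cam, P_base):
    """Return R, t minimizing ||R @ p_cam + t - p_base|| over matched points.

    P_cam, P_base: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    c_cam, c_base = P_cam.mean(axis=0), P_base.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_base - c_base)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                  # proper rotation only
    t = c_base - R @ c_cam
    return R, t
```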

Programming by demonstration has received much attention, as it offers a general framework that allows robots to efficiently acquire novel motor skills from a human teacher. Since traditional imitation learning that focuses on only Cartesian or only joint space can become inappropriate in situations where both spaces are equally important (e.g., writing or striking tasks), hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue that often arises in dynamic or unstructured environments is overlooked: how can a robot avoid obstacles? In this paper, we aim to address the problem of avoiding obstacles in the context of hybrid imitation learning. Specifically, we propose to tackle three subproblems: (i) designing a proper potential field so as to bypass obstacles, (ii) guaranteeing that joint limits are respected when adjusting trajectories in the process of avoiding obstacles, and (iii) determining proper control commands for robots such that potential human-robot interaction is safe. By solving these subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
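A minimal sketch of one standard choice for subproblem (i), a Khatib-style repulsive potential field; the paper's actual field design may differ, and the gains below are illustrative:

```python
# Repulsive force derived from U(rho) = 0.5 * eta * (1/rho - 1/rho0)^2,
# the classic obstacle-avoidance potential; zero outside the influence radius.
import numpy as np

def repulsive_force(x, x_obs, rho0=0.3, eta=1.0):
    """Force pushing position x away from obstacle x_obs.

    rho0: influence radius [m]; eta: gain.
    """
    d = x - x_obs
    rho = np.linalg.norm(d)
    if rho >= rho0 or rho == 0.0:
        return np.zeros_like(x)
    # F = -grad U = eta * (1/rho - 1/rho0) * (1/rho^2) * (d / rho)
    return eta * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (d / rho)
```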

The article describes a highly trustworthy environmental monitoring system employing a small, scalable swarm of small-sized marine vessels equipped with compact sensors and intended for the monitoring of water resources and infrastructure. The technological foundation of the process, which guarantees that no third party can alter the samples taken by the robot swarm, is based on the Robonomics platform. This platform provides encrypted, decentralized technologies based on distributed ledger tools and market mechanisms for organizing the work of heterogeneous multi-vendor cyber-physical systems when automated economic transactions are needed. A small swarm of robots follows the autonomous ship, which is in charge of maintaining the secure transactions. The swarm implements a version of Reynolds' Boids model based on the Belief Space Planning approach. The main contributions of our work are: (1) the deployment of a secure sample certification and logging platform, based on the blockchain, with a small-sized swarm of autonomous vessels performing maneuvers to measure the chemical parameters of water in automatic mode; and (2) the coordination of a leader-follower framework for the small platoon of robots by means of a Reynolds' Boids model based on a Belief Space Planning approach. In addition, the article describes the process of measuring the chemical parameters of water using sensors located on the vessels. Both the technology tests on the experimental vessel and the environmental measurements are detailed. The results were obtained through real-world experiments with an autonomous vessel, which was integrated as the “leader” into a mixed-reality simulation of a swarm of smaller simulated vessels. The design of the experimental vessel physically deployed in the Volga river to demonstrate the practical viability of the proposed methods is briefly described.
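For readers unfamiliar with Reynolds' Boids model, the sketch below shows its three classic rules (separation, alignment, cohesion) for the follower vessels; the Belief Space Planning layer the paper adds on top is omitted, and all weights are illustrative:

```python
# One synchronous Boids update for N planar agents (e.g., follower vessels).
import numpy as np

def boids_step(pos, vel, r=5.0, w_sep=1.5, w_ali=1.0, w_coh=1.0, dt=0.1):
    """pos, vel: (N, 2) arrays of positions and velocities."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]                            # offsets to all agents
        dist = np.linalg.norm(d, axis=1)
        mask = (dist > 0) & (dist < r)              # neighbors within radius r
        if not mask.any():
            continue
        sep = -(d[mask] / dist[mask, None] ** 2).sum(axis=0)  # steer away
        ali = vel[mask].mean(axis=0) - vel[i]                 # match velocity
        coh = pos[mask].mean(axis=0) - pos[i]                 # toward centroid
        new_vel[i] = vel[i] + dt * (w_sep * sep + w_ali * ali + w_coh * coh)
    return pos + dt * new_vel, new_vel
```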

We consider the detection of change in the spatial distribution of fluorescent markers inside cells imaged by single-cell microscopy. Such problems are important in bioimaging since the density of these markers can reflect the healthy or pathological state of cells, the spatial organization of DNA, or the cell cycle stage. With the new super-resolved microscopes and associated microfluidic devices, bio-markers can be detected in single cells individually or collectively as a texture, depending on the quality of the microscope impulse response. In this work, we propose, via numerical simulations, to address the detection of changes in spatial density or in spatial clustering with an individual (pointillist) or collective (textural) approach, comparing their performances according to the size of the impulse response of the microscope. Pointillist approaches show good performance only for small impulse response sizes, whereas all textural approaches are found to outperform pointillist approaches for small as well as large impulse response sizes. These results are validated with real fluorescence microscopy images at conventional resolution. This result, counterintuitive from the perspective of the quest for super-resolution, demonstrates that, for difference-detection tasks in single-cell microscopy, super-resolved microscopes may not be mandatory and that lower-cost, sub-resolved microscopes can be sufficient.
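The following toy simulation illustrates the contrast between the two approaches on synthetic images of markers blurred by a Gaussian point spread function; the feature choices are illustrative stand-ins, not the paper's exact estimators:

```python
# Render markers blurred by a Gaussian PSF, then compute a pointillist
# feature (detected peak count) and a textural feature (image variance).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def render(density, sigma_psf, n_pix=128):
    """Poisson-distributed point markers convolved with a Gaussian PSF."""
    img = np.zeros((n_pix, n_pix))
    n = rng.poisson(density * n_pix * n_pix)
    ij = rng.integers(0, n_pix, size=(n, 2))
    np.add.at(img, (ij[:, 0], ij[:, 1]), 1.0)   # accumulate repeated hits
    return ndimage.gaussian_filter(img, sigma_psf)

def pointillist_feature(img):
    """Count local maxima above the mean (individual marker detection)."""
    peaks = (img == ndimage.maximum_filter(img, size=3)) & (img > img.mean())
    return peaks.sum()

def textural_feature(img):
    """Global second-order statistic of the blurred texture."""
    return img.var()
```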

Recognizing material categories is one of the core challenges in robotic nuclear waste decommissioning. All nuclear waste should be sorted and segregated according to its material so that the appropriate disposal process can be applied. In this paper, we propose a novel transfer learning approach to learn boundary-aware material segmentation from a meta-dataset and weakly annotated data. The proposed method is data-efficient, leveraging a publicly available dataset for general computer vision tasks and coarsely labeled material recognition data, with only a limited number of fine pixel-wise annotations required. Importantly, our approach is integrated with a Simultaneous Localization and Mapping (SLAM) system to fuse the per-frame understanding into a global 3D semantic map, facilitating robot manipulation in self-occluded object heaps or robot navigation in disaster zones. We evaluate the proposed method on the Materials in Context dataset over 23 categories and show that our integrated system delivers quasi-real-time 3D semantic mapping with high-resolution images. The trained model is also verified in an industrial environment as part of the EU RoMaNs project, and promising qualitative results are presented. A video demo and the newly generated data can be found at the project website (Supplementary Material).
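A hedged sketch of the general training pattern the abstract describes (fine-tuning a pretrained backbone for pixel-wise segmentation over 23 material categories, ignoring unlabeled pixels); the paper's boundary-aware loss and weak-label handling are not reproduced here, and the backbone choice is an assumption:

```python
# Fine-tune a segmentation model with a small set of pixel-wise labels.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

NUM_MATERIALS = 23                        # categories, per the abstract
model = fcn_resnet50(weights_backbone="DEFAULT", num_classes=NUM_MATERIALS)
criterion = nn.CrossEntropyLoss(ignore_index=255)   # 255 = unlabeled pixels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) float; masks: (B, H, W) long, 255 where unknown."""
    optimizer.zero_grad()
    logits = model(images)["out"]         # (B, NUM_MATERIALS, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```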

Consensus achievement is a crucial capability for robot swarms, for example, for path selection, spatial aggregation, or collective sensing. However, the presence of malfunctioning and malicious robots (Byzantine robots) can make it impossible to achieve consensus using classical consensus protocols. In this work, we show how a swarm of robots can achieve consensus even in the presence of Byzantine robots by exploiting blockchain technology. Bitcoin and later blockchain frameworks, such as Ethereum, have revolutionized financial transactions. These frameworks are based on decentralized databases (blockchains) that can achieve secure consensus in peer-to-peer networks. We illustrate our approach in a collective sensing scenario where robots in a swarm are controlled via blockchain-based smart contracts (decentralized protocols executed via blockchain technology) that serve as “meta-controllers” and we compare it to state-of-the-art consensus protocols using a robot swarm simulator. Additionally, we show that our blockchain-based approach can prevent attacks where robots forge a large number of identities (Sybil attacks). The developed robot-blockchain interface is released as open-source software in order to facilitate future research in blockchain-controlled robot swarms. Besides increasing security, we expect the presented approach to be important for data analysis, digital forensics, and robot-to-robot financial transactions in robot swarms.
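A toy sketch of the "smart contract as meta-controller" idea: robots submit sensor estimates as stake-backed transactions and the contract aggregates them, so that stakeless (Sybil) identities carry no weight. This mimics the logic in plain Python and is not an actual Ethereum contract:

```python
class SensingContract:
    """Toy stand-in for a blockchain smart contract aggregating robot votes."""

    def __init__(self, deposit=1):
        self.deposit = deposit           # stake required per vote (anti-Sybil)
        self.votes = {}                  # robot_id -> (estimate, stake)

    def submit(self, robot_id, estimate, stake):
        if stake >= self.deposit:        # stakeless identities are rejected
            self.votes[robot_id] = (estimate, stake)

    def consensus(self):
        """Stake-weighted mean of the submitted estimates."""
        total = sum(s for _, s in self.votes.values())
        return sum(e * s for e, s in self.votes.values()) / total

contract = SensingContract()
contract.submit("robot_1", 0.8, stake=1)
contract.submit("robot_2", 0.6, stake=1)
contract.submit("sybil_x", 0.0, stake=0)   # ignored: no stake attached
print(contract.consensus())                # -> 0.7
```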

Many applications benefit from the use of multiple robots, but their scalability and applicability are fundamentally limited when relying on a central control station. Moving beyond the centralized approach can increase the complexity of the embedded software and the sensitivity to the network topology, and it can render deployment on physical devices tedious and error-prone. This work introduces a software-based solution to cope with these challenges on commercial hardware. We bring together our previous work on Buzz, the swarm-oriented programming language, and the many contributions of the Robot Operating System (ROS) community into a reliable workflow, from rapid prototyping of decentralized behaviors up to robust field deployment. The Buzz programming language is hardware-independent, domain-specific (swarm-oriented), and composable. From simulation to the field, a Buzz script can remain unmodified and almost seamlessly applicable to all units of a heterogeneous robotic team. We present the software structure of our solution and the swarm-oriented paradigms it encompasses. While the design of a new behavior can be achieved on a lightweight simulator, we show how our security mechanisms enhance field deployment robustness. In addition, developers can update their scripts in the field using a safe software release mechanism. Integrating Buzz into ROS, adding safety mechanisms, and granting field updates are core contributions essential to swarm robotics deployment: from simulation to the field. We show the applicability of our work with the implementation of two practical decentralized scenarios: a robust generic task allocation strategy and an optimized area coverage algorithm. Both behaviors are explained and tested in simulation, then demonstrated with heterogeneous ground-and-air robotic teams.
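As a conceptual sketch (written in Python rather than Buzz) of the kind of decentralized task allocation rule such a workflow deploys, each robot can claim the nearest unclaimed task and break ties by robot ID, with no central station; the function below is a hypothetical illustration, not code from the Buzz/ROS integration:

```python
# Each robot runs this locally on its view of broadcast claims.
import math

def choose_task(my_id, my_pos, tasks, claims):
    """tasks: {task_id: (x, y)}; claims: {task_id: claiming_robot_id}."""
    best, best_d = None, math.inf
    for tid, pos in tasks.items():
        holder = claims.get(tid)
        if holder is not None and holder < my_id:   # lower ID wins ties
            continue
        d = math.dist(my_pos, pos)
        if d < best_d:
            best, best_d = tid, d
    return best                                     # task to claim, or None
```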

Media influence people's perceptions of reality broadly and of technology in particular. Robot villains and heroes—from Ultron to Wall-E—have been shown to serve a specific cultivation function, shaping people's perceptions of those embodied social technologies, especially when individuals do not have direct experience with them. To date, however, little is understood about the nature of the conceptions people hold for what robots are, how they work, and how they may function in society, as well as the media antecedents and relational effects of those cognitive structures. This study takes a step toward bridging that gap by exploring relationships among individuals' recall of robot characters from popular media, their mental models for actual robots, and social evaluations of an actual robot. Findings indicate that mental models consist of a small set of common and tightly linked components (beyond which there is a good deal of individual difference), but robot character recall and evaluation have little association with whether people hold any of those components. Instead, data are interpreted to suggest that cumulative sympathetic evaluations of robot media characters may form heuristics that are primed by and engaged in social evaluations of actual robots, while technical content in mental models is associated with a more utilitarian approach to actual robots.

Research related to regulatory focus theory has shown that the way in which a message is conveyed can increase its effectiveness. While different research fields have used this theory, in human-robot interaction (HRI) no real attention has been given to it. In this paper, we investigate the theory in an in-the-wild scenario. More specifically, we are interested in how individuals react when a robot suddenly appears at their office doors. Will they interact with it, or will they ignore it? We report the results from our experimental study in which the robot approached 42 individuals. Twenty-nine of them interacted with the robot, while the others either ignored it or avoided any interaction with it. The robot displayed two types of behavior (i.e., promotion or prevention). Our results show that individuals who interacted with a robot that matched their regulatory focus type interacted with it significantly longer than individuals who did not experience regulatory fit. Other qualitative results are also reported, together with some reactions from the participants.

Online social networks (OSN) are prime examples of socio-technical systems in which individuals interact via a technical platform. OSN are very volatile because users enter and exit and frequently change their interactions. This makes the robustness of such systems difficult to measure and to control. To quantify robustness, we propose a coreness value obtained from the directed interaction network. We study the emergence of large drop-out cascades of users leaving the OSN by means of an agent-based model. For agents, we define a utility function that depends on their relative reputation and their costs for interactions. The decision of agents to leave the OSN depends on this utility. Our aim is to prevent drop-out cascades by influencing specific agents with low utility. We identify strategies to control agents in the core and the periphery of the OSN such that drop-out cascades are significantly reduced, and the robustness of the OSN is increased.
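A minimal sketch of the drop-out dynamics described above, assuming a simplified utility (in-degree as reputation minus a cost proportional to outgoing interactions); the paper's exact utility and reputation definitions may differ:

```python
# Iterate drop-outs until no agent with negative utility remains.
import networkx as nx

def dropout_cascade(G, cost=0.1):
    """G: directed interaction network. Returns the set of remaining agents."""
    active = set(G.nodes)
    changed = True
    while changed:
        changed = False
        for v in list(active):
            in_deg = sum(1 for u in G.predecessors(v) if u in active)
            out_deg = sum(1 for u in G.successors(v) if u in active)
            utility = in_deg - cost * out_deg   # reputation minus cost
            if utility < 0:
                active.remove(v)                # agent leaves the OSN
                changed = True                  # may trigger further exits
    return active

G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
print(len(dropout_cascade(G)), "agents remain")
```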

Laparoscopic surgery is a representative operative method of minimally invasive surgery. However, most laparoscopic hand instruments consist of rigid, straight structures, which impose serious limitations such as interference between instruments and a limited field of view of the endoscope. To improve the flexibility and dexterity of these instruments, we propose a new concept for a multijoint manipulator using a variable stiffness mechanism. The manipulator uses a magneto-rheological compound (MRC) whose rheological properties can be tuned by an external magnetic field. In this study, we changed the shape of the electromagnet and the MRC to improve the performance of the variable stiffness joint we previously fabricated; we then fabricated a prototype and performed a basic evaluation of the joint. The MRC was fabricated by mixing carbonyl iron particles and glycerol. The prototype single joint was assembled by combining the MRC and electromagnets, and it is configured to form a closed magnetic circuit. To examine the basic properties of the joint, we conducted preliminary experiments, including elastic modulus measurement and rigidity evaluation. We confirmed that the elastic modulus increased when a magnetic field was applied, and the rigidity of the joint was also verified under bending conditions. Our results confirmed that, depending on the presence or absence of a magnetic field, the stiffness of the new joint changed significantly more than that of the old joint, indicating improved performance.

Quadruped robots require compliance to handle unexpected external forces, such as impulsive contact forces from rough terrain or from physical human-robot interaction. This paper presents a locomotion controller using Cartesian impedance control to coordinate tracking performance and desired compliance, along with Quadratic Programming (QP) to satisfy friction cone constraints, unilateral constraints, and torque limits. First, we resort to projected inverse dynamics to derive an analytical control law of Cartesian impedance control for constrained and underactuated systems (typically, a quadruped robot). Second, we formulate a QP to compute the optimal torques that are as close as possible to the desired values resulting from Cartesian impedance control while satisfying all of the physical constraints. When the desired motion torques would violate the physical constraints, the QP yields a trade-off solution that sacrifices motion performance to ensure those constraints. The proposed algorithm gives us more insight into the system, benefiting from an analytical derivation and more efficient computation compared with hierarchical QP (HQP) controllers, which typically require solving three or more QPs. Experiments on the ANYmal robot over various challenging terrains show the efficiency and performance of our controller.
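A hedged sketch of the QP layer using a generic solver: track the impedance-law torques as closely as possible subject to torque limits. The friction-cone and unilateral contact constraints, which are also linear in the decision variables, are omitted for brevity, and all values are illustrative:

```python
# Least-squares torque tracking under box (torque-limit) constraints.
import cvxpy as cp
import numpy as np

n = 12                                   # actuated joints of a quadruped
tau_des = np.random.randn(n)             # torques from the impedance law
tau_max = 40.0 * np.ones(n)              # actuator torque limits [Nm]

tau = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(tau - tau_des))
constraints = [tau <= tau_max, tau >= -tau_max]
cp.Problem(objective, constraints).solve()
print(tau.value)                         # trade-off torques to command
```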

How does AI need to evolve in order to better support more effective decision-making in managing the many complex problems we face at every scale, from global climate change, collapsing ecosystems, international conflicts, and extremism through to all the dimensions of public policy, economics, and governance that affect human well-being? Research on complex decision-making at the individual human level, which characterizes what constitutes more and less effective decision-making behaviors and, in particular, the many pathways to failure in dealing with complex problems, informs a discussion about the potential for AI to help mitigate those failures and to enable a more robust and adaptive, and therefore more effective, decision-making framework. This calls for AI to move well beyond its current envelope of competencies.

Many real-world applications have been suggested in the swarm robotics literature. However, there is a general lack of understanding of what needs to be done for robot swarms to be useful and trusted by users in reality. This paper aims to investigate user perception of robot swarms in the workplace and to inform design principles for the deployment of future swarms in real-world applications. Three qualitative studies with a total of 37 participants were conducted across three sectors: fire and rescue, storage organization, and bridge inspection. Each study examined the users' perceptions using focus groups and interviews. In this paper, we describe our findings regarding: the current processes and tools used in these professions and their main challenges; attitudes toward robot swarms assisting them; and the requirements that would encourage the participants to use robot swarms. We found a generally positive reaction to robot swarms for information gathering and the automation of simple processes, while a human in the loop was preferred when it comes to decision making. Recommendations to increase trust and acceptance relate to transparency, accountability, safety, reliability, ease of maintenance, and ease of use. Finally, we found that mutual shaping, a methodology that creates a bidirectional relationship between users and technology developers to incorporate societal choices in all stages of research and development, is a valid approach to increase knowledge and acceptance of swarm robotics. This paper contributes to the creation of such a culture of mutual shaping between researchers and users, toward increasing the chances of a successful deployment of robot swarms in the physical realm.

Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly through imitation learning and active engagement with a skilled partner, and the skills require the ability to predict and adapt to one's partner during an interaction. In this work, we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. Because long-term motion prediction methods often suffer from regression to the mean, our technical contribution here is a novel probabilistic latent variable model that predicts not in joint space but in latent space. To test the proposed method, we collect human-human and human-robot interaction data for four interactive tasks: “hand-shake,” “hand-wave,” “parachute fist-bump,” and “rocket fist-bump.” We demonstrate experimentally the importance of the predictive and adaptive components, as well as of low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks.
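A hedged, deterministic skeleton of the core idea (embed motion, predict forward in latent space rather than joint space, then decode); the dimensions are illustrative, and the probabilistic machinery of the paper's latent variable model is omitted:

```python
# Encoder -> latent dynamics -> decoder, predicting in latent space.
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    def __init__(self, joint_dim=15, latent_dim=32, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, latent_dim))
        self.dynamics = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.decode = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, joint_dim))

    def forward(self, joints):
        """joints: (B, T, joint_dim) observed human motion."""
        z = self.encode(joints)           # (B, T, latent_dim)
        z_pred, _ = self.dynamics(z)      # roll latent states forward
        return self.decode(z_pred)        # decoded joint trajectory
```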

Social engagement is a key indicator of an individual's socio-emotional and cognitive states. For a child with Autism Spectrum Disorder (ASD), this serves as an important factor in assessing the quality of the interactions and interventions. So far, qualitative measures of social engagement have been used extensively in research and in practice, but a reliable, objective, and quantitative measure is yet to be widely accepted and utilized. In this paper, we present our work on the development of a framework for the automated measurement of social engagement in children with ASD that can be utilized in real-world settings for the long-term clinical monitoring of a child's social behaviors as well as for the evaluation of the intervention methods being used. We present a computational modeling approach to derive the social engagement metric based on a user study with children between the ages of 4 and 12 years. The study was conducted within a child-robot interaction setting that targets sensory processing skills in children. We collected video, audio and motion-tracking data from the subjects and used them to generate personalized models of social engagement by training a multi-channel and multi-layer convolutional neural network. We then evaluated the performance of this network by comparing it with traditional classifiers and assessed its limitations, followed by discussions on the next steps toward finding a comprehensive and accurate metric for social engagement in ASD.
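A hedged sketch of a multi-channel convolutional network fusing per-frame video, audio, and motion-tracking features into an engagement prediction; the layer sizes, feature dimensions, and binary output are assumptions, not the authors' exact architecture:

```python
# One Conv1d branch per modality over time, fused by a linear head.
import torch
import torch.nn as nn

class EngagementNet(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, motion_dim=64):
        super().__init__()
        def branch(d):
            return nn.Sequential(nn.Conv1d(d, 64, kernel_size=3, padding=1),
                                 nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1))
        self.video, self.audio, self.motion = (branch(video_dim),
                                               branch(audio_dim),
                                               branch(motion_dim))
        self.head = nn.Linear(192, 2)     # engaged / not engaged

    def forward(self, v, a, m):
        """v/a/m: (B, dim, T) per-channel feature sequences over time."""
        feats = [b(x).squeeze(-1) for b, x in
                 ((self.video, v), (self.audio, a), (self.motion, m))]
        return self.head(torch.cat(feats, dim=-1))
```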

Researchers investigating virtual/augmented reality have shown humans' marked adaptability, especially regarding our sense of body ownership; their cumulative findings have expanded the concept of what it means to have a body. Herein, we report the hand ownership illusion that arises when “two views are merged in.” In our experiment, participants were presented with two overlapping first-person perspective views of their arm: one a live camera feed and the other a playback video of the same situation, slightly shifted toward one side. The relative visibility of the two views and the synchrony of tactile stimulation were manipulated. Participants' level of embodiment was evaluated using a questionnaire and proprioceptive drift. The results show that the likelihood of embodying the virtual hand is affected by the relative visibility of the two views and the synchrony of the tactile events. We observed especially strong ownership of the virtual hand under high virtual hand visibility with synchronous tactile stimulation.
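A toy sketch of the stimulus manipulation, assuming the two views are alpha-blended, with alpha controlling their relative visibility and a horizontal shift displacing the virtual hand; parameter values are illustrative:

```python
# Blend a live camera frame with a horizontally shifted playback frame.
import numpy as np

def merge_views(live, playback, alpha=0.5, shift_px=40):
    """live, playback: (H, W, 3) float arrays in [0, 1]."""
    shifted = np.roll(playback, shift_px, axis=1)   # shift toward one side
    return alpha * live + (1.0 - alpha) * shifted   # alpha = live visibility
```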
