Feed aggregator

In recent years, interest has risen in the development of self-growing robots inspired by the moving-by-growing paradigm of plants. In particular, climbing plants capitalize on their slender structures to successfully negotiate unstructured environments while employing a combination of two classes of growth-driven movements: tropic responses, growing toward or away from an external stimulus, and inherent nastic movements, such as periodic circumnutations, which promote exploration. To emulate these complex growth dynamics in a 3D environment, a general and rigorous mathematical framework is required. Here, we develop a general 3D model for rod-like organs adopting the Frenet-Serret frame, providing a useful framework from the standpoint of robotics control. Differential growth drives the dynamics of the organ, governed by both internal and external cues, while elastic responses are neglected. We describe the numerical method required to implement this model and perform numerical simulations of a number of key scenarios, showcasing the applicability of our model. In the case of responses to external stimuli, we consider a distant stimulus (such as sunlight or gravity), a point stimulus (a point light source), and a line stimulus that emulates the twining of a climbing plant around a support. We also simulate circumnutations, the response to an internal oscillatory cue associated with search processes. Lastly, we demonstrate the superposition of the response to an external stimulus and circumnutations. In addition, we consider a simple example illustrating the possible use of an optimal control approach to recover tropic dynamics in a way that may be relevant for robotics use. In all, the model presented here is general and robust, paving the way for a deeper understanding of plant response dynamics and for novel control systems for newly developed self-growing robots.
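As a minimal illustration of the Frenet-Serret construction underlying the model above, the following sketch computes discrete tangent, normal, and binormal frames along a sampled 3D curve. The helix example and function names are an editor's illustration, not code from the paper.

```python
import numpy as np

def frenet_frames(points):
    """Approximate Frenet-Serret frames (T, N, B) along a discretized 3D curve.

    points: (n, 3) array of ordered curve samples.
    Returns three (n-2, 3) arrays of unit tangents, normals, and binormals
    at the interior points (central differences need both neighbors).
    """
    d = np.gradient(points, axis=0)     # first derivative (direction of travel)
    dd = np.gradient(d, axis=0)         # second derivative (bending)
    T = d / np.linalg.norm(d, axis=1, keepdims=True)        # unit tangent
    # Normal: component of the second derivative orthogonal to the tangent
    n_raw = dd - (dd * T).sum(axis=1, keepdims=True) * T
    N = n_raw / np.linalg.norm(n_raw, axis=1, keepdims=True)
    B = np.cross(T, N)                  # binormal completes the right-handed frame
    return T[1:-1], N[1:-1], B[1:-1]

# Example: a helix, whose normal always points toward the central axis
s = np.linspace(0, 4 * np.pi, 400)
helix = np.stack([np.cos(s), np.sin(s), 0.3 * s], axis=1)
T, N, B = frenet_frames(helix)
```

For the helix the computed normals agree closely with the analytic result (-cos s, -sin s, 0), which is a quick sanity check on the discretization.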

Backed by the virtually unbounded resources of the cloud, battery-powered mobile robots can also benefit from cloud computing, meeting the demands of even the most computationally and resource-intensive tasks. However, many existing mobile-cloud hybrid (MCH) robotic tasks are inefficient in terms of optimizing trade-offs between simultaneously conflicting objectives, such as minimizing both battery power consumption and network usage. To tackle this problem, we propose a novel approach that can be used not only to instrument an MCH robotic task but also to search for its efficient configurations, each representing a compromise solution between the objectives. We introduce a general-purpose MCH framework to measure, at runtime, how well the tasks meet these two objectives. The framework employs these efficient configurations to make decisions at runtime, based on: (1) changes in the environment (i.e., WiFi signal level variation), and (2) changes within the system itself in a changing environment (i.e., actual observed packet loss in the network). We also introduce a novel search-based multi-objective optimization (MOO) algorithm, which works in two steps to search for efficient configurations of MCH applications. Analysis of our results shows that: (i) using self-adaptive and self-aware decisions, an MCH foraging task performed by a battery-powered robot can achieve better optimization in a changing environment than static offloading or running the task only on the robot. However, a self-adaptive decision falls behind when the change happens within the system itself; in such a case, a self-aware system can still perform well in terms of minimizing the two objectives. (ii) The Two-Step algorithm can find better-quality configurations for small- to medium-scale MCH robotic tasks, in terms of the total number of their offloadable modules.
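The trade-off search described above can be pictured with a plain Pareto-dominance filter over (battery, network) cost pairs, both minimized. The configurations below are invented for illustration, not measurements from the study, and this filter is only the generic building block behind any MOO search, not the authors' Two-Step algorithm.

```python
def dominates(a, b):
    """True if config a is at least as good as b on every objective
    (here: battery cost, network cost, both minimized) and strictly
    better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(configs):
    """Keep only configurations not dominated by any other configuration."""
    return [c for c in configs
            if not any(dominates(o, c) for o in configs if o is not c)]

# Hypothetical (battery, network) costs for four offloading configurations
configs = [(120, 5), (90, 40), (200, 2), (90, 60)]
front = pareto_front(configs)   # (90, 60) is dominated by (90, 40)
```

A runtime decision layer would then pick among the surviving compromise solutions according to the currently observed WiFi level or packet loss.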

Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related but different concepts. In fact, it has been referred to across different disciplines under different names and with different connotations in mind. It can therefore be quite difficult to understand what engagement means and how one study relates to another. Engagement has been studied not only in human-human but also in human-agent interactions, i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on the different factors involved in engagement studies, distinguishing especially between studies that address task or social engagement, involve children or adults, are conducted in a lab, or are aimed at long-term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.

Background: Clinical exoskeletal-assisted walking (EAW) programs for individuals with spinal cord injury (SCI) have been established, but many unknown variables remain. These include addressing staffing needs, determining the number of sessions needed to achieve a successful walking velocity milestone for ambulation, distinguishing potential achievement goals according to level of injury, and deciding the number of sessions participants need to perform in order to meet the Food and Drug Administration (FDA) criteria for a personal-use prescription in the home and community. The primary aims of this study were to determine the number of sessions necessary to achieve adequate EAW skills and velocity milestones, the percentage of participants able to achieve these skills by 12 sessions, and the skill progression over the course of 36 sessions.

Methods: A randomized clinical trial (RCT) was conducted across three sites in persons with chronic (≥6 months) non-ambulatory SCI. Eligible participants were randomized (within site) either to the EAW arm first (Group 1), three times per week for 36 sessions, striving to be completed in 12 weeks, or to the usual activity (UA) arm first (Group 2), followed by a crossover to the other arm for both groups. The 10-meter walk test (10MWT, seconds), 6-min walk test (6MWT, meters), and Timed-Up-and-Go (TUG, seconds) were performed at 12, 24, and 36 sessions. To test walking performance in the exoskeletal devices, nominal velocity and distance milestones were chosen prior to study initiation: ≤40 s for the 10MWT, ≥80 m for the 6MWT, and ≤90 s for the TUG. All walking tests were performed with the exoskeletons.

Results: A total of 50 participants completed 36 sessions of EAW training. At 12 sessions, 31 (62%), 35 (70%), and 36 (72%) participants achieved the 10MWT, 6MWT, and TUG milestones, respectively. By 36 sessions, 40 (80%), 41 (82%), and 42 (84%) achieved the 10MWT, 6MWT, and TUG criteria, respectively.

Conclusions: It is feasible to train chronic non-ambulatory individuals with SCI to perform EAW well enough to achieve reasonable mobility-skill outcome milestones.
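The milestone criteria reported in the Methods (10MWT ≤40 s, 6MWT ≥80 m, TUG ≤90 s) can be expressed as a simple check. The helper below is an editor's sketch, not study code; the 0.25 m/s velocity is simply what covering 10 m in 40 s implies.

```python
def meets_milestones(t_10mwt_s, d_6mwt_m, t_tug_s):
    """Check the study's EAW performance milestones:
    10MWT completed in <=40 s (i.e., >=0.25 m/s over 10 m),
    6MWT distance >=80 m, and TUG completed in <=90 s."""
    return {
        "10MWT": t_10mwt_s <= 40,
        "6MWT": d_6mwt_m >= 80,
        "TUG": t_tug_s <= 90,
        "velocity_mps": round(10.0 / t_10mwt_s, 3),  # average 10MWT speed
    }

# Hypothetical participant: 10 m in 38 s (about 0.263 m/s), 85 m in 6 min,
# TUG in 75 s -- all three milestones met
result = meets_milestones(t_10mwt_s=38.0, d_6mwt_m=85.0, t_tug_s=75.0)
```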

To investigate how a robot's use of feedback can influence children's engagement and support second language learning, we conducted an experiment in which 72 five-year-old children learned 18 English animal names from a humanoid robot tutor in three different sessions. During each session, children played 24 rounds of an “I spy with my little eye” game with the robot, and in each session the robot provided a different type of feedback. These feedback types were based on a questionnaire study we conducted with student teachers, whose outcome was translated into three within-subjects conditions: (teacher) preferred feedback, (teacher) dispreferred feedback, and no feedback. During the preferred feedback session, among other things, the robot varied its feedback and gave children the opportunity to try again (e.g., “Well done! You clicked on the horse.”, “Too bad, you pressed the bird. Try again. Please click on the horse.”); during dispreferred feedback the robot did not vary its feedback (“Well done!”, “Too bad.”) and children did not receive an extra attempt; and during no feedback the robot did not comment on the children's performance at all. We measured the children's engagement with the task and with the robot, as well as their learning gain, as a function of condition. Results show that children tended to be more engaged with the robot and the task when the robot used preferred feedback than in the two other conditions. However, preferred or dispreferred feedback did not influence learning gain: children learned on average the same number of words in all conditions. These findings are especially interesting for long-term interactions, where children's engagement often drops. Moreover, feedback may become more important for learning when children need to rely on it more, for example, when words or language constructions are more complex than in our experiment.
The experiment's method, measurements and main hypotheses were preregistered.

Robotic agents should be able to learn from sub-symbolic sensor data and, at the same time, be able to reason about objects and communicate with humans on a symbolic level. This raises the question of how to overcome the gap between symbolic and sub-symbolic artificial intelligence. We propose a semantic world modeling approach based on bottom-up object anchoring using an object-centered representation of the world. Perceptual anchoring processes continuous perceptual sensor data and maintains a correspondence to a symbolic representation. We extend the definitions of anchoring to handle multi-modal probability distributions and we couple the resulting symbol anchoring system to a probabilistic logic reasoner for performing inference. Furthermore, we use statistical relational learning to enable the anchoring framework to learn symbolic knowledge in the form of a set of probabilistic logic rules of the world from noisy and sub-symbolic sensor input. The resulting framework, which combines perceptual anchoring and statistical relational learning, is able to maintain a semantic world model of all the objects that have been perceived over time, while still exploiting the expressiveness of logical rules to reason about the state of objects which are not directly observed through sensory input data. To validate our approach we demonstrate, on the one hand, the ability of our system to perform probabilistic reasoning over multi-modal probability distributions, and on the other hand, the learning of probabilistic logical rules from anchored objects produced by perceptual observations. The learned logical rules are, subsequently, used to assess our proposed probabilistic anchoring procedure. We demonstrate our system in a setting involving object interactions where object occlusions arise and where probabilistic inference is needed to correctly anchor objects.

In this study, the sources of EEG activity in motor imagery brain–computer interface (BCI) control experiments were investigated. Sixteen linear decomposition methods for EEG source separation were compared according to different criteria: mutual information reduction between the source activities, and physiological plausibility. The latter was tested by estimating the dipolarity of the source topographic maps, i.e., the accuracy of approximating each map by the potential distribution of a single current dipole, as well as by the specificity of the source activity for different motor imagery tasks. The decomposition methods were also compared according to the number of shared components found. The results indicate that most of the dipolar components are found by the independent component analysis methods AMICA and PWCICA, which also provided the highest information reduction. These two methods also found the most task-specific EEG patterns among the blind source separation algorithms used; they are outperformed only by the non-blind Common Spatial Pattern methods in terms of pattern specificity. The components found by all of the methods were clustered using the Attractor Neural Network with Increasing Activity. The cluster analysis revealed the most frequent patterns of electrical activity occurring in the experiments: blinking, eye movements, sensorimotor rhythm suppression during motor imagery, and activations in the precuneus, supplementary motor area, and premotor areas of both hemispheres. Overall, multi-method decomposition with subsequent clustering and task-specificity estimation is a viable and informative procedure for processing recordings from electrophysiological experiments.

We analyze the efficacy of modern neuro-evolutionary strategies for continuous control optimization. Overall, the results collected on a wide variety of qualitatively different benchmark problems indicate that these methods are generally effective and scale well with the number of parameters and the complexity of the problem. Moreover, they are relatively robust to the setting of hyper-parameters. The comparison of the most promising methods indicates that the OpenAI-ES algorithm outperforms or equals the other algorithms on all considered problems. Moreover, we demonstrate that reward functions optimized for reinforcement learning methods are not necessarily effective for evolutionary strategies, and vice versa. This finding may prompt a reconsideration of the relative efficacy of the two classes of algorithms, since it implies that the comparisons performed to date are biased toward one class or the other.
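For readers unfamiliar with OpenAI-ES, its core idea is to estimate a search gradient from Gaussian perturbations of the parameter vector and ascend it. The sketch below is a simplified, single-process version with antithetic sampling and no rank shaping, run on a toy quadratic reward; it is not the implementation benchmarked in the paper.

```python
import numpy as np

def openai_es(f, theta, iterations=300, pop=50, sigma=0.1, alpha=0.05, seed=0):
    """Minimal evolution-strategy sketch in the spirit of OpenAI-ES:
    estimate a search gradient from Gaussian parameter perturbations
    and take a gradient-ascent step on the expected reward."""
    rng = np.random.default_rng(seed)
    for _ in range(iterations):
        eps = rng.standard_normal((pop, theta.size))
        # Antithetic (mirrored) pairs reduce the variance of the estimate
        rewards = np.array([f(theta + sigma * e) for e in eps] +
                           [f(theta - sigma * e) for e in eps])
        grad = (rewards[:pop, None] - rewards[pop:, None]) * eps
        theta = theta + alpha / (2 * pop * sigma) * grad.sum(axis=0)
    return theta

# Toy objective: reward peaks at theta = (1, -2, 3)
target = np.array([1.0, -2.0, 3.0])
best = openai_es(lambda th: -np.sum((th - target) ** 2), np.zeros(3))
```

The full algorithm adds rank normalization of rewards, weight decay, and massive parallelization via shared random seeds, which this sketch omits for clarity.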

We introduce Robot DE NIRO, an autonomous, collaborative, humanoid robot for mobile manipulation. We built DE NIRO to perform a wide variety of manipulation behaviors, with a focus on pick-and-place tasks. DE NIRO is designed to be used in a domestic environment, especially in support of caregivers working with the elderly. Given this design focus, DE NIRO can interact naturally, reliably, and safely with humans, autonomously navigate through environments on command, intelligently retrieve or move target objects, and avoid collisions efficiently. We describe DE NIRO's hardware and software, including an extensive vision sensor suite of 2D and 3D LIDARs, a depth camera, and a 360-degree camera rig; two types of custom grippers; and a custom-built exoskeleton called DE VITO. We demonstrate DE NIRO's manipulation capabilities in three illustrative challenges: First, we have DE NIRO perform a fetch-an-object challenge. Next, we add more cognition to DE NIRO's object recognition and grasping abilities, confronting it with small objects of unknown shape. Finally, we extend DE NIRO's capabilities into dual-arm manipulation of larger objects. We put particular emphasis on the features that enable DE NIRO to interact safely and naturally with humans. Our contribution is in sharing how a humanoid robot with complex capabilities can be designed and built quickly with off-the-shelf hardware and open-source software. Supplementary Material, including our code, documentation, videos, and the CAD models of several hardware parts, is openly available at https://www.imperial.ac.uk/robot-intelligence/software/.

Recent work suggests that collective computation of social structure can minimize uncertainty about the social and physical environment, facilitating adaptation. We explore these ideas by studying how fission-fusion social structure arises in spider monkey (Ateles geoffroyi) groups, asking whether monkeys use social knowledge to collectively compute subgroup size distributions adaptive for foraging in variable environments. We assess whether individual decisions to stay in or leave subgroups are conditioned on strategies based on the presence or absence of others, and we search for evidence of such strategies in a time series of subgroup membership. We find that individuals have multiple strategies, suggesting that the social knowledge of different individuals is important. These stay-leave strategies provide microscopic inputs to a stochastic model of collective computation encoded in a family of circuits. Each circuit represents a hypothesis for how collectives combine strategies to make decisions, and for how these produce various subgroup size distributions. By running these circuits forward in simulation we generate new subgroup size distributions and measure how well they match food abundance in the environment using transfer entropies. We find that spider monkeys decide to stay or go using information from multiple individuals, and that they can collectively compute a distribution of subgroup sizes that makes efficient use of ephemeral sources of nutrition. We are able to artificially tune circuits to produce subgroup size distributions that fit the environment better than the observed one, suggesting that a combination of measurement error, constraint, and adaptive lag diminishes the power of collective computation in this system. These results are relevant for a more general understanding of the emergence of ordered states in multi-scale social systems with adaptive properties, both natural and engineered.
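Transfer entropy, the measure used above to compare generated subgroup size distributions against environmental food abundance, can be computed for short symbol sequences directly from empirical counts. The sketch below is a generic plug-in estimator on toy binary series, illustrating the quantity itself rather than the study's estimator.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X->Y) for two aligned symbol sequences:
    how much knowing x_t reduces uncertainty about y_{t+1} beyond y_t.
    TE = sum over (y_{t+1}, y_t, x_t) of
         p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]              # p(y+ | y, x)
        p_cond_self = pairs_yy[(yn, yp)] / singles_y[yp]  # p(y+ | y)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Toy coupling: y copies x with one step of delay, so x strongly predicts
# y's next state and TE(X->Y) is large
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
```

With such a perfectly driven pair the estimate reduces to the conditional entropy of y's next state given its current one; for a series measured against itself the redundant conditioning makes the estimate exactly zero.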

Modeling of soft robots is typically performed at the static level or at a second-order, fully dynamic level. Controllers developed upon these models have several advantages and disadvantages. Static controllers, based on kinematic relations, tend to be the easiest to develop, but sacrifice accuracy, efficiency, and the natural dynamics. Controllers developed using second-order dynamic models tend to be computationally expensive, but allow optimal control. Here we propose that the dynamic model of a soft robot can be reduced to a first-order dynamical equation, owing to the high damping and low inertia typically observed in nature, with minimal loss in accuracy. This paper investigates the validity of this assumption and the advantages it provides to the modeling and control of soft robots. Our results demonstrate that this model approximation is a powerful tool for developing closed-loop task-space dynamic controllers for soft robots, simplifying planning and sensory feedback with minimal effect on controller accuracy.
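The proposed reduction can be illustrated on a scalar toy model: when the inertial coefficient m is much smaller than the damping c, integrating the full model m·x'' + c·x' + k·x = u and its first-order reduction c·x' + k·x = u gives nearly identical trajectories. All parameter values below are invented for illustration; this is not the paper's robot model.

```python
# Hypothetical scalar soft-robot segment: mass m, damping c, stiffness k,
# driven by a constant step input u. With m << c (high damping, low inertia)
# the second-order model is well approximated by the first-order reduction.
m, c, k, u = 1e-3, 1.0, 1.0, 1.0
dt, steps = 1e-4, 50_000            # simulate 5 s

x2, v2 = 0.0, 0.0                   # second-order state (position, velocity)
x1 = 0.0                            # first-order state (position only)
err = 0.0
for _ in range(steps):
    a = (u - c * v2 - k * x2) / m   # acceleration from the full model
    v2 += dt * a                    # semi-implicit Euler keeps the fast mode stable
    x2 += dt * v2
    x1 += dt * (u - k * x1) / c     # reduced first-order model
    err = max(err, abs(x2 - x1))
```

Both trajectories settle toward the static equilibrium u/k = 1, and the worst-case gap between them stays on the order of m/c, which is the intuition behind dropping the inertial term.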

Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near optimal: sensory information is combined in a fashion that minimizes the average error in the perceptual representation of stimuli. However, the optimization sometimes carries costs, manifesting as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we suggest that more studies are needed to detect and mitigate these illusions, which can appear as artifacts in artificial cognitive systems. We provide cautionary considerations for designing artificial cognitive systems with a view to avoiding such artifacts. Finally, we suggest avenues of research toward solutions to potential pitfalls in system design. We conclude that a detailed understanding of multisensory integration and of the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.
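The "statistically optimal" combination mentioned above is commonly modeled as inverse-variance (maximum-likelihood) weighting of Gaussian cues. The sketch below illustrates that standard textbook rule with made-up auditory and visual numbers: the fused estimate is pulled toward the more reliable cue, which is also the mechanism behind ventriloquism-like capture illusions.

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of two Gaussian cues, e.g., auditory and
    visual location estimates: weight each cue by its reliability
    (inverse variance). Returns the fused mean and variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)   # fused variance beats either cue alone
    return mu, var

# Vision is 4x more reliable than audition here, so the fused location
# (2.0 degrees) sits much closer to the visual cue at 0 than the sound at 10
mu, var = fuse_cues(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
```

The same computation that reduces average error produces the "cost" the abstract describes: when the two cues actually come from different sources, the fused percept is systematically displaced from both.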

Plants are movers, but the nature of their movement differs dramatically from that of creatures that move their whole body from point A to point B. Plants grow to where they are going. Bio-inspired robotics sometimes emulates plants' growth-based movement; but growing is part of a broader system of movement guidance and control. We argue that ecological psychology's conception of “information” and “control” can simultaneously make sense of what it means for a plant to navigate its environment and provide a control scheme for the design of ecological plant-inspired robotics. In this effort, we will outline several control laws and give special consideration to the class of control laws identified by tau theory, such as time to contact.

Complex maritime missions, both above and below the surface, have traditionally been carried out by manned surface ships and submarines equipped with advanced sensor systems. Unmanned Maritime Vehicles (UMVs) are increasingly demonstrating their potential for improving existing naval capabilities due to their rapid deployability, easy scalability, and high reconfigurability, offering a reduction in both operational time and cost. In addition, they mitigate the risk to personnel by keeping the human far from the risk but in the loop of decision making. In the long term, a clear interoperability framework between unmanned systems, human operators, and legacy platforms will be crucial for effective joint operations planning and execution. However, present multi-vendor, multi-protocol solutions in multi-domain UMV activities are hard to make interoperable without common mission control interfaces and communication protocol schemes. Furthermore, the underwater domain presents significant challenges that cannot be met by solutions developed for terrestrial networks. In this paper, the interoperability topic is discussed by blending a review of technological growth from 2000 onwards with the authors' recent in-field experience; finally, important research directions for the future are given. Within the broad framework of interoperability in general, the paper focuses on interoperability among UMVs without neglecting the role of the human operator in the loop. The picture emerging from the review demonstrates that interoperability is currently receiving a high level of attention, with a great and diverse body of effort.
In addition, the manuscript describes the experience from a sea trial exercise, where interoperability was demonstrated by integrating heterogeneous autonomous UMVs into the NATO Centre for Maritime Research and Experimentation (CMRE) network, using different robotic middlewares and acoustic modem technologies to implement a multistatic active sonar system. A perspective on interoperability in marine robotics missions emerges in the paper, through a discussion of current capabilities, in-field experience, and future advanced technologies unique to UMVs. Nonetheless, the spread of their applications is slowed by a lack of human confidence. In fact, an interoperable system-of-systems of autonomous UMVs will require operators to be involved only at a supervisory level. As trust develops, underpinned by stable and mature interoperability, human monitoring can be reduced to exploit the tremendous potential of fully autonomous UMVs.

In our study, we tested a combination of virtual reality (VR) and robotics in an original adjuvant method for post-stroke lower-limb walking restoration in the acute phase, using a simulation with visual and tactile biofeedback based on VR immersion and physical stimulation of the soles of patients. The duration of adjuvant therapy was 10 daily sessions of 15 min each. The study showed the following significant rehabilitation progress in the Control (N = 27) vs. Experimental (N = 35) groups, respectively: 1.56 ± 0.29 (mean ± SD) vs. 2.51 ± 0.31 points on the Rivermead Mobility Index (p = 0.0286); 2.15 ± 0.84 vs. 6.29 ± 1.20 points on the Fugl-Meyer Assessment Lower Extremities scale (p = 0.0127); and 6.19 ± 1.36 vs. 13.49 ± 2.26 points on the Berg Balance scale (p = 0.0163). P-values were obtained by the Mann–Whitney U test. The simple and intuitive rehabilitation mechanism, including its sensory and semantic components, allows therapy for patients with diaschisis and with afferent or motor aphasia. Its safety allows the proposed therapy to be applied at the earliest stage of a stroke. We consider the main finding of this study to be that rehabilitation involving implicit interaction with a VR environment, produced by the robotic action, has a measurable, significant influence on the restoration of affected lower-limb motor function compared with standard rehabilitation therapy.

Group interactions are widely observed in nature to optimize a set of critical collective behaviors, most notably sensing and decision making in uncertain environments. Nevertheless, these interactions are commonly modeled using local (proximity) networks, in which individuals interact within a certain spatial range. Recently, other interaction topologies have been revealed to support the emergence of higher levels of scalability and rapid information exchange. One prominent example is scale-free networks. In this study, we aim to examine the impact of scale-free communication when implemented for a swarm foraging task in dynamic environments. We model dynamic (uncertain) environments in terms of changes in food density and analyze the collective response of a simulated swarm with communication topology given by either proximity or scale-free networks. Our results suggest that scale-free networks accelerate the process of building up a rapid collective response to cope with the environment changes. However, this comes at the cost of lower coherence of the collective decision. Moreover, our findings suggest that the use of scale-free networks can improve swarm performance due to two side-effects introduced by using long-range interactions and frequent network regeneration. The former is a topological consequence, while the latter is a necessity due to robot motion. These two effects lead to reduced spatial correlations of a robot's behavior with its neighborhood and to an enhanced opinion mixing, i.e., more diversified information sampling. These insights were obtained by comparing the swarm performance in the presence of scale-free networks to scenarios with alternative network topologies, and proximity networks with and without packet loss.

Bioinspired and biomimetic soft machines rely on functions and working principles abstracted from biology, where they have evolved over 3.5 billion years. So far, few examples from the huge pool of natural models have been examined and transferred to technical applications. Like living organisms, subsequent generations of soft machines will autonomously sense, respond, and adapt to the environment. Plants as concept generators remain relatively unexplored in biomimetic approaches to robotics and related technologies, despite being able to grow and continuously adapt in response to environmental stimuli. In this research review, we highlight recent developments in plant-inspired soft machine systems based on movement principles. We focus on inspirations taken from the fast active movements of the carnivorous Venus flytrap (Dionaea muscipula) and compare current developments in artificial Venus flytraps with their biological role model. The advantages and disadvantages of current systems are also analyzed and discussed, and a new state-of-the-art autonomous system is derived. Incorporating the basic structural and functional principles of the Venus flytrap into novel autonomous applications in the field of robotics will not only inspire further plant-inspired biomimetic developments but might also advance contemporary plant-inspired robots, leading to fully autonomous systems utilizing bioinspired working concepts.

Extracting information from noisy signals is of fundamental importance for both biological and artificial perceptual systems. To provide tractable solutions to this challenge, the fields of human perception and machine signal processing (SP) have developed powerful computational models, including Bayesian probabilistic models. However, little true integration between these fields exists in their applications of the probabilistic models for solving analogous problems, such as noise reduction, signal enhancement, and source separation. In this mini review, we briefly introduce and compare selective applications of probabilistic models in machine SP and human psychophysics. We focus on audio and audio-visual processing, using examples of speech enhancement, automatic speech recognition, audio-visual cue integration, source separation, and causal inference to illustrate the basic principles of the probabilistic approach. Our goal is to identify commonalities between probabilistic models addressing brain processes and those aiming at building intelligent machines. These commonalities could constitute the closest points for interdisciplinary convergence.

Robot swarms are groups of robots that each act autonomously based on only local perception and coordination with neighboring robots. While current swarm implementations can be large in size (e.g., 1,000 robots), they are typically constrained to working in highly controlled indoor environments. Moreover, a common property of swarms is the underlying assumption that the robots act in close proximity of each other (e.g., 10 body lengths apart), and typically employ uninterrupted, situated, close-range communication for coordination. Many real-world applications, including environmental monitoring and precision agriculture, however, require scalable groups of robots to act jointly over large distances (e.g., 1,000 body lengths), rendering the use of dense swarms impractical. Using a dense swarm for such applications would be invasive to the environment and unrealistic in terms of mission deployment, maintenance and post-mission recovery. To address this problem, we propose the sparse swarm concept, and illustrate its use in the context of four application scenarios. For one scenario, which requires a group of rovers to traverse, and monitor, a forest environment, we identify the challenges involved at all levels in developing a sparse swarm—from the hardware platform to communication-constrained coordination algorithms—and discuss potential solutions. We outline open questions of theoretical and practical nature, which we hope will bring the concept of sparse swarms to fruition.

This paper presents an approach to control the position of a gecko-inspired soft robot in Cartesian space. By formulating constraints under the assumption of constant curvature, the joint space of the robot is reduced in its dimension from nine to two. The remaining two generalized coordinates describe respectively the walking speed and the rotational speed of the robot and define the so-called velocity space. By means of simulations and experimental validation, the direct kinematics of the entire velocity space (mapping in Cartesian task space) is approximated by a bivariate polynomial. Based on this, an optimization problem is formulated that recursively generates the optimal references to reach a given target position in task space. Finally, we show in simulation and experiment that the robot can master arbitrary obstacle courses by making use of this gait pattern generator.
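Approximating a map over a two-dimensional velocity space with a bivariate polynomial, as described above, amounts to a linear least-squares fit over monomial features. The sketch below uses a synthetic surface in place of the robot's measured kinematics; the sample data and degree are illustrative assumptions.

```python
import numpy as np

def fit_bivariate_poly(v1, v2, z, deg=2):
    """Least-squares fit of a bivariate polynomial z ~ sum c_ij v1^i v2^j
    with i + j <= deg, standing in for the approximation of the map from
    the two generalized velocity coordinates to a task-space coordinate."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack([v1**i * v2**j for i, j in terms], axis=1)  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def eval_poly(terms, coeffs, v1, v2):
    return sum(c * v1**i * v2**j for (i, j), c in zip(terms, coeffs))

# Hypothetical samples from a smooth velocity-space -> task-space map
rng = np.random.default_rng(1)
v1, v2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 0.5 + 1.2 * v1 - 0.7 * v2 + 0.3 * v1 * v2   # ground-truth surface
terms, coeffs = fit_bivariate_poly(v1, v2, z)
```

Once fitted, the polynomial is cheap to evaluate inside an optimization loop, which is what makes this kind of surrogate attractive for recursively generating reference velocities toward a task-space target.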
