Frontiers in Robotics and AI


Soft pneumatic artificial muscles are a well-established actuation scheme in soft robotics owing to their key features for robotic machines: they are safe, lightweight, and conformable. In this work, we present a versatile vacuum-powered artificial muscle (VPAM) with manually tunable output motion. We developed an artificial muscle that consists of a stack of air chambers fitted with replaceable external reinforcements. Different modes of operation are achieved by assembling different reinforcements that constrain the output motion of the actuator during actuation. We designed replaceable external reinforcements to produce single motions such as twisting, bending, shearing, and rotation. We then characterized the deformation and lifting force for these motions. We demonstrated sophisticated motions and the reusability of the artificial muscle in two soft machines with different modes of locomotion. Our results show that the VPAM is reusable and versatile, producing a variety of sophisticated output motions as needed. This key feature especially benefits unpredictable workspaces that require a soft actuator that can be adjusted for other tasks. Our scheme has the potential to offer new strategies for locomotion in machines for underwater or terrestrial operation, and for wearable devices with different modes of operation.

In control theory, reactive methods have been widely celebrated owing to their success in providing robust, provably convergent solutions to control problems. Even though such methods have long been formulated for motion planning, optimality has largely been left untreated by reactive means, with the community focusing on discrete/graph-based solutions. Although the latter exhibit certain advantages (completeness, handling of complicated state spaces), the recent rise of Reinforcement Learning (RL) provides novel ways to address the limitations of reactive methods. The goal of this paper is to treat the reactive optimal motion planning problem through an RL framework. A policy iteration RL scheme is formulated in a manner consistent with the control-theoretic results, thus utilizing the advantages of each approach in a complementary way: RL is employed to construct the optimal input without necessitating the solution of a hard, non-linear partial differential equation. Conversely, safety, convergence, and policy improvement are guaranteed through control-theoretic arguments. The proposed method is validated in simulated synthetic workspaces and compared against reactive methods as well as a PRM and an RRT⋆ approach. The proposed method outperforms or closely matches the latter methods, indicating its near-global optimality, while providing a solution for planning from anywhere within the workspace to the goal position.
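To make the policy iteration idea mentioned in the abstract concrete, here is a minimal sketch on a toy one-dimensional gridworld. This is an illustrative stand-in, not the authors' actual scheme: the state space, actions, rewards, and discount factor are all assumptions chosen for brevity, whereas the paper operates on continuous robot workspaces.

```python
# Minimal policy iteration on a toy 1-D gridworld (illustrative only;
# the states, rewards, and gamma below are assumptions, not the paper's).

N = 8                 # states 0..7; state 7 is an absorbing goal
GOAL = N - 1
ACTIONS = (-1, +1)    # move left or right
GAMMA = 0.9           # discount factor

def step(s, a):
    """Deterministic transition: -1 cost per move, 0 once at the goal."""
    if s == GOAL:
        return s, 0.0
    s2 = min(max(s + a, 0), GOAL)
    return s2, -1.0

def policy_iteration():
    policy = [ACTIONS[0]] * N   # start with "always move left"
    V = [0.0] * N
    while True:
        # Policy evaluation: sweep Bellman backups until values settle.
        for _ in range(1000):
            delta = 0.0
            for s in range(N):
                s2, r = step(s, policy[s])
                v = r + GAMMA * V[s2]
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < 1e-9:
                break
        # Policy improvement: act greedily with respect to V.
        stable = True
        for s in range(N):
            best = max(ACTIONS,
                       key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:          # no change => policy is optimal
            return policy, V

policy, V = policy_iteration()
```

The evaluation/improvement loop terminates when the greedy policy stops changing; here it converges to "always move right" toward the goal. The paper's contribution is to carry out this style of iteration with control-theoretic guarantees (safety, convergence, monotone improvement) instead of tabular sweeps.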
