Feed aggregator

Swarm behaviors offer scalability and robustness to failure through a decentralized and distributed design. When designing coherent group motion, as in swarm flocking, virtual potential functions are a widely used mechanism to ensure these properties. However, arbitrating among different virtual potential sources in real time has proven difficult. Such arbitration typically depends on fine-tuning of the control parameters used to select among the different sources and on manually set cut-offs used to strike a balance between stability and velocity. This reliance on parameter tuning makes these methods ill-suited to field operations of aerial drones, which are characterized by fast non-linear dynamics that hinder the stability of potential functions designed for slower dynamics. The situation is further exacerbated by the fact that parameters fine-tuned in the lab are often not appropriate for achieving satisfactory performance in the field. In this work, we investigate the problem of dynamic tuning of local interactions in a swarm of aerial vehicles with the objective of tackling the stability–velocity trade-off. We let the focal agent autonomously and adaptively decide which source of local information to prioritize and to what degree (for example, which neighbor interaction or goal direction). The main novelty of the proposed method lies in a Gaussian kernel used to regulate the importance of each element in the swarm scheme. Each agent in the swarm relies on this mechanism at every algorithmic iteration and uses it to tune the final output velocities. We show that the presented approach can achieve cohesive flocking while navigating through a set of way-points at speed. In addition, the proposed method achieves other desired field properties, such as automatic group splitting and joining over long distances. These properties have been empirically validated in an extensive set of simulated and field experiments, in both communication-full and communication-less scenarios. Moreover, the presented approach has proven robust to failures, intermittent communication, and noisy perception.
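To make the weighting mechanism concrete, here is a minimal sketch of how a Gaussian kernel can regulate how much each local-information source (neighbor interactions, goal direction) contributes to an agent's commanded velocity. All names and values below (gaussian_weight, mu_neighbor, sigma, v_max) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_weight(distance, mu, sigma):
    """Gaussian kernel: peaks when `distance` matches the preferred value `mu`."""
    return np.exp(-((distance - mu) ** 2) / (2.0 * sigma ** 2))

def flocking_velocity(own_pos, neighbor_pos, goal_pos,
                      mu_neighbor=2.0, sigma=1.0, v_max=1.5):
    """Blend neighbor interactions and the goal direction with Gaussian weights.

    own_pos: (2,) array, neighbor_pos: (N, 2) array, goal_pos: (2,) array.
    """
    velocity = np.zeros(2)

    # Neighbor term: attract/repel toward a preferred spacing, weighted by the kernel.
    for p in neighbor_pos:
        offset = p - own_pos
        dist = np.linalg.norm(offset) + 1e-9
        w = gaussian_weight(dist, mu_neighbor, sigma)
        # Positive when too far (attraction), negative when too close (separation).
        velocity += w * (dist - mu_neighbor) * offset / dist

    # Goal term: fades out as the agent approaches the goal.
    to_goal = goal_pos - own_pos
    goal_dist = np.linalg.norm(to_goal) + 1e-9
    w_goal = 1.0 - gaussian_weight(goal_dist, 0.0, sigma)
    velocity += w_goal * to_goal / goal_dist

    # Clamp to the platform's speed limit.
    speed = np.linalg.norm(velocity)
    if speed > v_max:
        velocity *= v_max / speed
    return velocity
```

Each agent would evaluate something like this at every iteration, using only locally available positions, which is what keeps the scheme decentralized.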

This article presents a perspective on the research challenge of understanding and synthesizing anthropomorphic whole-body contact motions through a platform called the “interactive cyber-physical human” (iCPH) for data collection and augmentation. The iCPH platform combines humanoid robots, as “physical twins” of humans, with “digital twins” that simulate humans and robots in cyberspace. Several critical research topics are introduced to address this challenge by combining advanced model-based analysis with data-driven learning to exploit the data collected from the integrated iCPH platform. The first topic is the definition of a general description that serves as a common basis for contact motions compatible with both humans and humanoids. The second is continual learning of a feasible contact-motion network, benefiting from model-based approaches and machine learning bridged by the efficient analytical gradient computation developed by the author and his collaborators. The final target is to establish a high-level symbolic system that allows the automatic understanding and generation of contact motions in previously unexperienced environments. The proposed approaches are still under investigation, and the author expects this article to trigger discussions and further collaborations across different research communities, including robotics, artificial intelligence, neuroscience, and biomechanics.

The smart factory is at the heart of Industry 4.0 and is the new paradigm for establishing advanced manufacturing systems and realizing modern manufacturing objectives such as mass customization, automation, efficiency, and self-organization all at once. Such manufacturing systems, however, are characterized by dynamic and complex environments in which a large number of decisions must be made for smart components, such as production machines and the material handling system, in a real-time and optimal manner. AI offers key intelligent control approaches for realizing efficiency, agility, and automation all at once. One of the most challenging problems in this regard is uncertainty: because of the dynamic nature of smart manufacturing environments, sudden foreseen or unforeseen events occur that must be handled in real time. Due to the complexity and high dimensionality of smart factories, it is not possible to predict all possible events or to prepare appropriate response scenarios in advance. Reinforcement learning is an AI technique that provides the intelligent control needed to deal with such uncertainties. Given the distributed nature of smart factories and the presence of multiple decision-making components, multi-agent reinforcement learning (MARL) should be adopted rather than single-agent reinforcement learning (SARL); MARL, however, has attracted less attention because of the complexities involved in its development. In this research, we review the literature on applications of MARL to tasks within a smart factory and then present a mapping that connects smart factory attributes to the equivalent MARL features, on the basis of which we suggest that MARL is one of the most effective approaches for implementing the control mechanism of smart factories.

Road infrastructure is one of the most vital assets of any country. Keeping the road infrastructure clean and unpolluted is important for ensuring road safety and reducing environmental risk. However, roadside litter picking is an extremely laborious, expensive, monotonous, and hazardous task. Automating the process would save taxpayers money and reduce the risk for road users and the maintenance crew. This work presents LitterBot, an autonomous robotic system capable of detecting, localizing, and classifying common roadside litter. We use a learning-based object detection and segmentation algorithm trained on the TACO dataset to identify and classify garbage. We develop a robust modular manipulation framework using soft robotic grippers and a real-time visual-servoing strategy, which enables the manipulator to pick up objects of variable sizes and shapes even in dynamic environments. The robot achieves classified picking and binning success rates of over 80% in all experiments, validated on a wide variety of test litter objects in static single and cluttered configurations and with dynamically moving test objects. Our results showcase how a deep model trained on an online dataset can be deployed in real-world applications with high accuracy through the appropriate design of a control framework around it.
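As a rough illustration of how a segmentation-driven visual-servoing pick loop can be structured, here is a short sketch; the camera interface, detector, gains, and helper names are assumptions for illustration, not LitterBot's actual code.

```python
import numpy as np

def image_based_servo_step(target_px, image_center, gain=0.002, deadband_px=5):
    """One proportional visual-servoing step.

    target_px: (u, v) pixel centroid of the detected litter mask.
    image_center: (u0, v0) principal point of the camera.
    Returns a small (dx, dy) end-effector correction in metres,
    or None when the target is centred well enough to close the gripper.
    """
    error = np.asarray(target_px, float) - np.asarray(image_center, float)
    if np.linalg.norm(error) < deadband_px:
        return None  # aligned: trigger the grasp
    # Map pixel error to a Cartesian nudge (sign depends on camera mounting).
    return -gain * error

# Hypothetical usage inside a pick loop:
# while True:
#     mask = detector.segment(camera.read())          # e.g. a TACO-trained model
#     step = image_based_servo_step(mask.centroid, (320, 240))
#     if step is None:
#         gripper.close()
#         break
#     arm.nudge_xy(*step)
```

Because the correction is recomputed from the latest detection every cycle, a loop of this shape can track objects that move while the grasp is in progress, which is the property the abstract highlights.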

Communication therapies based on conversations with caregivers, such as reminiscence therapy and music therapy, have been proposed to delay the progression of dementia. Although these therapies have been reported to improve the cognitive and behavioral functions of elderly people suffering from dementia, caregivers do not have enough time to spend administering such communication therapies, especially in Japan, where the caregiving workforce is inadequate. Consequently, the progression of dementia in the elderly and the accompanying increased burden on caregivers have become a social problem. While the automation of communication therapy using robots and virtual agents has been proposed, the accuracy of both speech recognition and dialogue control is still insufficient to improve the cognitive and behavioral functions of elderly people with dementia. In this study, we examine the effect of a Japanese word-chain game (Shiritori game) played with an interactive robot, and that of music listening, on the maintenance and improvement of cognitive and behavioral scales [Mini-Mental State Examination (MMSE) and Dementia Behavior Disturbance scale (DBD)] in elderly people with dementia. These activities provide linguistic and phonetic stimuli, and they are simpler to implement than conventional daily conversation. The results of our Wizard-of-Oz-based experiments show that the cognitive and behavioral function scores of elderly participants who periodically played the Shiritori game with an interactive robot improved significantly compared with those of a control group. No such effect was observed with the music listening stimuli. Our further experiments showed that, in the Shiritori intervention group, there was a ceiling on the increase in MMSE: the lower the MMSE before participating in the experiment, the greater the increase. Furthermore, greater improvement in DBD was observed when participants played the Shiritori game actively. Since the Shiritori game is relatively easy to automate, our findings show the potential benefits of automating dementia therapies to maintain cognitive and behavioral functions.
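Since the authors note that Shiritori is relatively easy to automate, here is a minimal sketch of the core rule check using romanized words. Real systems operate on kana rather than Latin letters, so this is a simplification for illustration, not the study's implementation.

```python
def shiritori_valid(previous_word, next_word, used_words):
    """Check one Shiritori turn (simplified, romanized words).

    Standard rules sketched here: the next word must start with the last
    letter of the previous word, must not have been used before, and a word
    ending in 'n' (the kana ん) ends the game.
    """
    next_word = next_word.lower().strip()
    if next_word in used_words:
        return False, "word already used"
    if previous_word and next_word[0] != previous_word[-1].lower():
        return False, "does not chain from the previous word"
    if next_word.endswith("n"):
        return False, "ends in 'n', which ends the game"
    return True, "ok"

# Example turn: "sushi" ends in 'i', "ichigo" starts with 'i'.
# shiritori_valid("sushi", "ichigo", set())  -> (True, "ok")
```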



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Humanoid robots are a lot more capable than they used to be, but for most of them, falling over is still borderline catastrophic. Understandably, the focus has been on getting humanoid robots to succeed at things as opposed to getting robots to tolerate (or recover from) failing at things, but sometimes, failure is inevitable because stuff happens that’s outside your control. Earthquakes, accidentally clumsy grad students, tornadoes, deliberately malicious grad students—the list goes on.

When humans lose their balance, the go-to strategy is a highly effective one: use whatever happens to be nearby to keep from falling over. While for humans this approach is instinctive, it’s a hard problem for robots, involving perception, semantic understanding, motion planning, and careful force control, all executed under aggressive time constraints. In a paper published earlier this year in IEEE Robotics and Automation Letters, researchers at Inria in France show some early work getting a TALOS humanoid robot to use a nearby wall to successfully keep itself from taking a tumble.

The tricky thing about this technique is how little time a robot has to understand that it’s going to fall, sense its surroundings, make a plan to save itself, and execute that plan in time to avoid falling. In this paper, the researchers address most of these things—the biggest caveat is probably that they’re assuming that the location of the nearby wall is known, but that’s a relatively straightforward problem to solve if your robot has the right sensors on it.

Once the robot detects that something in its leg has given out, its Damage Reflex (“D-Reflex”) kicks in. D-Reflex is based on a neural network that was trained in simulation (taking a mere 882,000 simulated trials), and with the posture of the robot and the location of the wall as inputs, the network outputs how likely a potential wall contact is to stabilize the robot, taking just a few milliseconds. The system doesn’t actually need to know anything specific about the robot’s injury, and will work whether the actuator is locked up, moving freely but not controllably, or completely absent, the “amputation” case. Of course, reality rarely matches simulation, and it turns out that a damaged, tipping-over robot doesn’t reliably make contact with the wall exactly where it should, so the researchers had to tweak things to make sure that the robot stops its hand as soon as it touches the wall, whether it’s in the right spot or not. This method worked pretty well—using D-Reflex, the TALOS robot was able to avoid falling in three out of four trials where it would otherwise have fallen. Considering how expensive robots like TALOS are, this is a pretty great result, if you ask me.
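Conceptually, the reflex boils down to scoring candidate wall contacts with the trained network and committing to the best one only if it clears a confidence threshold. Here is a rough sketch of that decision step; the network interface, feature layout, and threshold are stand-ins for illustration, not the published D-Reflex code.

```python
import numpy as np

def d_reflex_step(posture, wall_pose, candidate_points, contact_net,
                  success_threshold=0.8):
    """Pick the wall contact point the network rates most likely to stabilize
    the robot, or return None if no candidate clears the threshold.

    posture: joint-state vector, wall_pose: wall position/orientation,
    candidate_points: (N, 3) array of hand targets on the wall,
    contact_net: callable mapping a feature vector to a success probability.
    """
    features = [np.concatenate([posture, wall_pose, p]) for p in candidate_points]
    probs = np.array([contact_net(f) for f in features])
    best = int(np.argmax(probs))
    if probs[best] < success_threshold:
        return None  # no contact is predicted to help; brace for the fall instead
    return candidate_points[best]

# In the control loop (pseudocode): once leg damage is detected, call
# d_reflex_step(...) and command the hand toward the returned point, then
# switch to stopping and pushing as soon as contact is sensed, even if the
# hand lands away from the predicted spot.
```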

The obvious question at this point is, “okay, now what?” Well, that’s beyond the scope of this research, but generally “now what” consists of one of two things. Either the robot falls anyway, which can definitely happen even with this method because some configurations of robot and wall are simply not avoidable, or the robot doesn’t fall and you end up with a slightly busted robot leaning precariously against a wall. In either case, though, there are options. We’ve seen a bunch of complementary work on surviving falls with humanoid robots in one way or another. And in fact one of the authors of this paper, Jean-Baptiste Mouret, has already published some very cool research on injury adaptation for legged robots.

In the future, the plan is to extend this idea to robots that are moving dynamically, which is definitely going to be a lot more challenging, but potentially a lot more useful.

First do not fall: learning to exploit a wall with a damaged humanoid robot, by Timothée Anne, Eloïse Dalin, Ivan Bergonzani, Serena Ivaldi, and Jean-Baptiste Mouret from Inria, is published in IEEE Robotics and Automation Letters.




Complex and bulky driving systems are among the main issues for soft robots driven by pneumatic actuators. Self-excited oscillation, in which oscillatory actuation is generated from a non-oscillatory input, is a promising approach to this problem. However, the small variety of self-excited pneumatic actuators currently available limits their applications. We present a simple, self-excited pneumatic valve that uses a flat ring tube (FRT), a device originally developed as a self-excited pneumatic actuator. First, we explore the driving principle of the self-excited valve and investigate the effect of the flow rate and FRT length on its driving frequency. Then, a locomotive robot containing the valve is demonstrated. The prototype walked at 5.2 mm/s when the oscillation frequency of the valve was 1.5 Hz, showing the applicability of the proposed valve to soft robotics.

Human-in-the-loop approaches can greatly enhance human–robot interaction by making the user an active part of the control loop, able to provide feedback to the robot in order to augment its capabilities. Such feedback becomes even more important in situations where safety is of utmost concern, such as in assistive robotics. This study aims to realize a human-in-the-loop approach in which the human provides feedback to a specific robot, namely a smart wheelchair, to augment its artificial sensory set, extending and improving its ability to detect and avoid obstacles. The feedback is provided by both a keyboard and a brain–computer interface; to this end, the work also included a protocol design phase to elicit and evoke human brain event-related potentials. The whole architecture has been validated in a simulated robotic environment, with electroencephalography signals acquired from different test subjects.
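One way to picture the feedback fusion described above is as a simple belief update in which the human confirms or vetoes the wheelchair's sensor-based obstacle hypothesis. The sketch below is illustrative only; the weighting scheme and event labels are assumptions, not the study's architecture.

```python
def fuse_obstacle_belief(sensor_prob, human_event, prior_weight=0.7):
    """Combine the wheelchair's sensor-based obstacle probability with a
    discrete human feedback event ('obstacle', 'clear', or None).

    sensor_prob: probability of an obstacle from the artificial sensors.
    human_event: feedback decoded from the keyboard or from an ERP-based
    brain-computer interface classifier.
    Returns the fused probability used by the obstacle-avoidance layer.
    """
    if human_event == "obstacle":
        # Human flags something the sensors may have missed: raise the belief.
        return prior_weight * sensor_prob + (1.0 - prior_weight) * 1.0
    if human_event == "clear":
        return prior_weight * sensor_prob  # human vetoes a false positive
    return sensor_prob  # no feedback this cycle

# e.g. fuse_obstacle_belief(0.2, "obstacle") -> 0.44 with the default weight
```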

While the potential of helical microrobots for biomedical applications, such as cargo transport, drug delivery, and micromanipulation, has been demonstrated, their viability for practical applications is hindered by the cost, speed, and repeatability of current fabrication techniques. Hence, this paper introduces a simple, low-cost, high-throughput manufacturing process for single-nickel-layer helical microrobots with consistent dimensions. Photolithography and electron-beam (e-beam) evaporation were used to fabricate 2D parallelogram patterns that were sequentially rolled up into helical microstructures through the swelling effect of a photoresist sacrificial layer. Helical parameters were controlled by adjusting the geometric parameters of the parallelogram patterns. To validate the fabrication process and characterize the microrobots’ mobility, we characterized the structures and surface morphology of the microrobots using a scanning electron microscope and tested their steerability using feedback control. Finally, we conducted a benchmark comparison to demonstrate that the fabrication method can produce helical microrobots with swimming properties comparable to previously reported microrobots.

Damage detection is one of the critical challenges in operating soft robots in an industrial setting. In repetitive tasks, even a small cut or fatigue can propagate into extensive damage that halts the entire operation. Although research has shown that damage detection can be performed through an embedded sensor network, this approach leads to complicated sensorized systems with additional wiring and equipment, made using complex fabrication processes and often compromising the flexibility of the soft robotic body. Alternatively, in this paper we propose a non-invasive approach for damage detection and localization on soft grippers. The essential idea is to track changes in the non-linear dynamics of a gripper due to possible damage, where minor changes in material and morphology lead to large differences in the force and torque feedback over time. To test this concept, we developed a classification model based on a bidirectional long short-term memory (biLSTM) network that discovers patterns of dynamics changes in force and torque signals measured at the mounting point. To evaluate this model, we employed a two-fingered Fin Ray gripper and collected data for 43 damage configurations. The experimental results show nearly perfect damage detection accuracy and 97% accuracy in localizing the damage. We also tested the effect of the gripper orientation and the length of the time-series data. By shaking the gripper at an optimal roll angle, the localization accuracy exceeds 95% and increases further with additional gripper orientations. The results also show that two periods of the gripper oscillation, roughly 50 data points, are enough to achieve a reasonable level of damage localization.
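For readers who want a concrete picture of such a classifier, here is a minimal biLSTM sketch in PyTorch operating on windows of 6-axis force/torque samples. The layer sizes and class count are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DamageLocalizer(nn.Module):
    """Classify a damage configuration from a window of 6-axis force/torque data."""

    def __init__(self, input_dim=6, hidden_dim=64, num_classes=44):
        # num_classes is illustrative, e.g. 43 damage configurations plus "undamaged".
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, 6) force/torque samples from the mounting point
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the final time step

# Example: a batch of 8 windows of ~50 samples (about two oscillation periods)
model = DamageLocalizer()
logits = model(torch.randn(8, 50, 6))   # -> shape (8, 44)
```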



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.

[ Sanctuary ]

Ayato Kanada, an assistant professor at Kyushu University in Japan, wrote in to share "the world's simplest omnidirectional mobile robot."

We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other, and is driven by a piezoelectric actuator (stator) that can generate 2-degree-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.

[ Paper ]

Thanks, Ayato!

This work, entitled "Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics," proposes a novel hybrid system using a virtually worn robotic arm in augmented reality and a real robotic manipulator servoed on that virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully deported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions (such as supernumerary robotic limbs (SRLs), prostheses, or handheld tools), and open new horizons for the development of wearable robotics.

[ Paper ]

Thanks, Nathanaël!

Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jump, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-sized jumping robots.

[ Georgia Tech ]

Thanks, Jason!

The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the Moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test-bed simulating the Moon's shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resource Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800 m² 'lunar' terrain. The winning team will have the opportunity to have their technology implemented on the Moon.

[ ESA ]

Thanks, Arne!

If only cobots were as easy to use as this video from Kuka makes it seem.

The Kuka website doesn't say how much this thing costs, which means it's almost certainly not something that you impulse buy.

[ Kuka ]

We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and re-orientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously-unknown forest.

[ HiPeR Lab ]

The robotics research group Brubotics and the polymer science and physical chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured, or completely cut through and then heal themselves back together, with the required heat or even at room temperature.

[ Brubotics ]

Apparently, the World Cup needs more drone footage, because this is kinda neat.

[ DJI ]

Researchers at MIT's Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

[ MIT ]

Researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose swimming resembles the human butterfly stroke. It can achieve a high average swimming speed of 3.74 body lengths per second, close to five times faster than the fastest similar soft swimmers, as well as high power efficiency with a low cost of energy.

[ NC State ]

To facilitate sensing and physical interaction in remote and/or constrained environments, high-extension, lightweight robot manipulators are attractive: they are easier to transport and reach substantially farther than traditional serial-chain manipulators. We propose a novel planar 3-degree-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.

[ Charm Lab ]

SLURP!

[ River Lab ]

This video may encourage you to buy a drone. Or a snowmobile.

[ Skydio ]

Moxie is getting an update for the holidays!

[ Embodied ]

Robotics professor Henny Admoni answers the internet's burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk's goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.

[ CMU ]

This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”

The ability for robots, be it a single robot, multiple robots or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multi-dimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (i.e., cognitive, speech, auditory, visual and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (i.e., supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real-time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human’s workload is over- or underloaded. Recently, the algorithm was used to post-hoc analyze the resulting workload for a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts are focusing on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.

[ UPenn ]
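As a very loose sketch of what the multi-dimensional workload estimate described in the talk abstract above can look like in code, the snippet below combines normalized component metrics into an overall score. The component names come from the abstract; the equal weighting is a placeholder rather than the published algorithm.

```python
def workload_estimate(metrics, weights=None):
    """Blend normalized workload components into an overall score.

    metrics: dict mapping component name -> value normalized to [0, 1].
    The equal default weights are placeholders, not the published algorithm.
    """
    weights = weights or {name: 1.0 / len(metrics) for name in metrics}
    return sum(weights[name] * value for name, value in metrics.items())

overall = workload_estimate({"cognitive": 0.6, "speech": 0.2, "auditory": 0.4,
                             "visual": 0.7, "physical": 0.3})
# A supervisory layer might reassign tasks when `overall` drifts outside a
# target band (over- or underload).
```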




Dynamic hopping maneuvers using mechanical actuation are proposed as a method of locomotion for free-flyer vehicles near or on large space structures. Such maneuvers are of interest for applications related to proximity maneuvers, observation, cargo carrying, fabrication, and sensor data collection. This study describes a set of dynamic hopping maneuver experiments performed using two Astrobees. Both vehicles were made to initially grasp onto a common free-floating handrail. From this initial condition, the active Astrobee launched itself using mechanical actuation of its robotic arm manipulator. The results are presented from the ground and flight experimental sessions completed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School, the Intelligent Robotics Group facility at NASA Ames Research Center, and hopping maneuvers aboard the International Space Station. Overall, this study demonstrates that locomotion through mechanical actuation could successfully launch a free-flyer vehicle in an initial desired trajectory from another object of similar size and mass.

Tele-manipulation is indispensable for the nuclear industry, since teleoperated robots remove the radiation hazard for the operator. The majority of the teleoperated solutions used in the nuclear industry rely on bilateral teleoperation, utilizing a variation of the 4-channel architecture in which the motion and force signals of the local and remote robots are exchanged over the communication channel. However, the performance limitation of teleoperated robots for nuclear decommissioning tasks is not clearly answered in the literature. In this study, we assess task performance in bilateral tele-manipulation for radiation surveying in gloveboxes and compare it to radiation surveying by a glovebox operator. To analyze the performance, an experimental setup suitable for both human operation (manual operation) and tele-manipulation was designed. Our results showed that a current commercial off-the-shelf (COTS) teleoperated robotic manipulation solution is flexible yet insufficient, as its task performance is significantly lower than that of manual operation and potentially hazardous for the equipment inside the glovebox. Finally, we propose a set of potential solutions, derived from both our observations and expert interviews, that could improve the performance of teleoperation systems in glovebox environments in future work.



While being able to drive the ball 300 yards might get the fans excited, a solid putting game is often what separates a golf champion from the journeymen. A robot built by German researchers is quickly becoming a master of this short game using a clever combination of classical control engineering and machine learning.

In golf tournaments, players often scout out the greens the day beforehand to think through how they are going to play their shots, says Annika Junker, a doctoral student at Paderborn University in Germany. So she and her colleagues decided to see if giving a robot similar capabilities could help it to sink a putt from anywhere on the green, without assistance from a human.

Golfi, as the team has dubbed their creation, uses a 3D camera to take a snapshot of the green, which it then feeds into a physics-based model to simulate thousands of random shots from different positions. These are used to train a neural network that can then predict exactly how hard and in what direction to hit a ball to get it in the hole, from anywhere on the green.

On the green, Golfi was successful six or seven times out of ten.

Like even the best pros, it doesn’t get a hole in one every time. The goal isn’t really to build a tournament winning golf robot though, says Junker, but to demonstrate the power of hybrid approaches to robotic control. “We try to combine data-driven and physics based methods and we searched for a nice example, which everyone can easily understand,” she says. “It's only a toy for us, but we hope to see some advantages of our approach for industrial applications.”

So far, the researchers have only tested their approach on a small mock-up green inside their lab. The robot, which is described in a paper due to be presented at the IEEE International Conference on Robotic Computing in Italy next month, navigates its way around the two-meter-square space on four wheels, two of which are powered. Once in position, it uses a belt-driven gear shaft with a putter attached to the end to strike the ball toward the hole.

First though, it needs to work out what shot to play given the position of the ball. The researchers begin by using a Microsoft Kinect 3D camera mounted on the ceiling to capture a depth map of the green. This data is then fed into a physics-based model, alongside other parameters like the rolling resistance of the turf, the weight of the ball and its starting velocity, to simulate three thousand random shots from various starting points.

[ Golfi demo video: youtu.be ]

This data is used to train a neural network that can predict how hard and in what direction to hit the ball to get it in the hole from anywhere on the green. While it’s possible to solve this problem by combining the physics based model with classical optimization, says Junker, it’s far more computationally expensive. And training the robot on simulated golf shots takes just five minutes, compared to around 30 to 40 hours if they collected data on real-world strokes, she adds.
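To see how such a simulate-then-learn pipeline fits together, here is a toy version: a flat green with a constant-deceleration rolling model stands in for the depth-map-based physics simulation, and a small scikit-learn regressor stands in for the neural network. Everything here (friction value, green size, network size) is an assumption for illustration, not the Golfi code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
MU_ROLL = 0.35                       # rolling deceleration in m/s^2 (illustrative)
HOLE = np.array([1.0, 1.5])          # hole position on a 2 m x 2 m toy green

def simulate_putt(start, v0):
    """Constant-deceleration roll on a flat green: return the resting position."""
    speed = np.linalg.norm(v0)
    if speed < 1e-9:
        return start.copy()
    travel = speed ** 2 / (2.0 * MU_ROLL)     # from v^2 = 2 * a * d
    return start + v0 / speed * travel

# Simulate a few thousand random shots; each one becomes a training pair
# (start position, resting position) -> launch velocity.
starts = rng.uniform(0.0, 2.0, size=(3000, 2))
v0s = rng.uniform(-1.5, 1.5, size=(3000, 2))
stops = np.array([simulate_putt(s, v) for s, v in zip(starts, v0s)])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(np.hstack([starts, stops]), v0s)

# Ask the network how to hit a ball at (0.3 m, 0.4 m) so it stops in the hole.
print(model.predict([np.concatenate([[0.3, 0.4], HOLE])]))
```

The real system conditions on the measured depth map rather than assuming a flat, uniform surface, which is exactly why learning from simulated shots pays off there.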

Before it can make its shot, though, the robot first has to line its putter up with the ball just right, which requires it to work out where on the green both it and the ball are. To do so, it uses a neural network that has been trained to spot golf balls and a hard-coded object detection algorithm that picks out colored dots on top of the robot to work out its orientation. This positioning data is then combined with a physical model of the robot and fed into an optimization algorithm that works out how to control its wheel motors to navigate to the ball.
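The last step of that chain, turning a pose error into commands for a base with two powered wheels, can be sketched with a simple proportional drive-to-point controller. The gains and wheel geometry below are placeholders, not Golfi's parameters.

```python
import numpy as np

def wheel_speeds_to_target(pose, target, wheel_base=0.3,
                           k_lin=0.6, k_ang=2.0, v_max=0.3):
    """Proportional drive-to-point controller for a differential-drive base.

    pose: (x, y, heading) of the robot; target: (x, y) of the ball approach point.
    Returns (left, right) wheel speeds in m/s.
    """
    x, y, theta = pose
    dx, dy = target[0] - x, target[1] - y
    distance = np.hypot(dx, dy)
    heading_error = np.arctan2(dy, dx) - theta
    heading_error = np.arctan2(np.sin(heading_error), np.cos(heading_error))

    v = np.clip(k_lin * distance, -v_max, v_max)     # forward speed
    w = k_ang * heading_error                        # turn rate
    return v - w * wheel_base / 2.0, v + w * wheel_base / 2.0

# e.g. wheel_speeds_to_target((0.0, 0.0, 0.0), (1.0, 0.5))
```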

Junker admits that the approach isn’t flawless. The current set-up relies on a bird’s eye view, which would be hard to replicate on a real golf course, and switching to cameras on the robot would present major challenges, she says. The researchers also didn’t report how often Golfi successfully sinks the putt in their paper, because the figures were thrown off by the fact that it occasionally drove over the ball, knocking it out of position. When that didn’t happen though, Junker says it was successful six or seven times out of ten, and since they submitted the paper a colleague has reworked the navigation system to avoid the ball.

Golfi isn’t the first machine to try its hand at the sport. In 2016, a robot called LDRIC hit a hole-in-one at Arizona's TPC Scottsdale course, and several devices have been built to test out golf clubs. But Noel Rousseau, a golf coach with a PhD in motor learning, says that typically they require an operator painstakingly setting them up for each shot, and any adjustments take considerable time. “The most impressive part to me is that the golf robot is able to find the ball, sight the hole and move itself into position for an accurate stroke,” he says.

Beyond mastering putting, the hope is that the underlying techniques the researchers have developed could translate to other robotics problems, says Niklas Fittkau, a doctoral student at Paderborn University and co-lead author of the paper. “You can also transfer that to other problems, where you have some knowledge about the system and could model parts of it to obtain some data, but you can’t model everything,” he says.






All things considered, we humans are kind of big, which is very limiting to how we can comfortably interact with the world. The practical effect of this is that we tend to prioritize things that we can see and touch and otherwise directly experience, even if those things are only a small part of the world in which we live. A recent study conservatively estimates that there are 2.5 million ants for every one human on Earth. And that’s just ants. There are probably something like 7 million different species of terrestrial insects, and humans have only even noticed like 10 percent of them. The result of this disconnect is that when (for example) insect populations around the world start to crater, it takes us much longer to first notice, care, and act.

To give the small scale the attention that it deserves, we need a way of interacting with it. In a paper recently published in Scientific Reports, roboticists from Ritsumeikan University in Japan demonstrate a haptic teleoperation system that connects a human hand on one end with microfingers on the other, letting the user feel what it’s like to give a pill bug a tummy rub.

At top, a microfinger showing the pneumatic balloon actuator (PBA) and liquid metal strain gauge. At bottom left, when the PBA is deflated, the microfinger is straight. At bottom right, inflating the PBA causes the finger to bend downwards.

These microfingers are just 12 millimeters long, 3 mm wide, and 490 microns (μm) thick. Inside of each microfinger is a pneumatic balloon actuator, which is just a hollow channel that can be pressurized with air. Since the channel is on the top of the microfinger, when the channel is inflated, it bulges upward, causing the microfinger to bend down. When pressure is reduced, the microfinger returns to its original position. Separate channels in the microfinger are filled with liquid metal, and as the microfinger bends, the channels elongate, thinning out the metal. By measuring the resistance of the metal, you can tell how much the finger is being bent. This combination of actuation and force sensing means that a human-size haptic system can be used as a force feedback interface: As you move your fingers, the microfingers will move, and forces can be transmitted back to you, allowing you to feel what the microfingers feel.
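In code, the sensing side of that loop amounts to mapping a resistance change to a bend angle and then to a force that gets scaled up for the human-side haptic interface. The sketch below is purely illustrative; the calibration constants are invented placeholders that would in practice be fit against known bend angles and a reference force sensor.

```python
def microfinger_feedback(resistance_ohm, r_rest_ohm=2.0,
                         ohm_per_degree=0.01, newton_per_degree=2e-6):
    """Convert a liquid-metal gauge reading into an estimated bend angle and a
    force to render on the human-side haptic interface.

    All calibration constants here are placeholders, not measured values.
    """
    delta_r = resistance_ohm - r_rest_ohm          # bending elongates and thins
    bend_deg = delta_r / ohm_per_degree            # the metal, raising resistance
    contact_force_n = bend_deg * newton_per_degree # micro-Newton-scale forces
    # Scale up so a human fingertip can feel it (e.g. 10 uN -> ~1 N).
    rendered_force_n = contact_force_n * 1e5
    return bend_deg, contact_force_n, rendered_force_n

# e.g. microfinger_feedback(2.05) -> (5.0 deg, 1e-05 N, 1.0 N)
```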

The microfingers (left) can be connected to a haptic feedback and control system for use by a human.

Fans of the golden age of science fiction will recognize this system as a version of Waldo F. Jones' Synchronous Reduplicating Pantograph, although the concept has even deeper roots in sci-fi:

The thought suddenly struck me: I can make micro hands for my little hands. I can make the same gloves for them as I did for my living hands, use the same system to connect them to the handles ten times smaller than my micro arms, and then ... I will have real micro arms, they will chop my movements two hundred times. With these hands I will burst into such a smallness of life that they have only seen, but where no one else has disposed of their own hands. And I got to work.

With their very real and not science fiction system, the researchers were able to successfully determine that pill bugs can exert about 10 micro-Newtons of force through their legs, which is about the same as what has been estimated using other techniques. This is just a proof of concept study, but I’m excited about the potential here, because there is still so much of the world that humans haven’t yet been able to really touch. And besides just insect-scale tickling, there’s a broader practical context here around the development of insect-scale robots. Insects have had insect-scale sensing and mobility and whatnot pretty well figured out for a long time now, and if we’re going to make robots that can do insect-like things, we’re going to do it by learning as much as we can directly from insects themselves.

“With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug—something that has been impossible to achieve previously. We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales.”
—Satoshi Konishi, Ritsumeikan University

I should also be clear that despite the headline, I don’t know if it’s actually possible to tickle a bug. A Google search for “are insects ticklish” turns up one single result, from someone asking this question on the "StonerThoughts" subreddit. There is some suggestion that tickling, or more specifically the kind of tickling that is surprising and can lead to laughter called gargalesis, has evolved in social mammals to promote bonding. The other kind of tickling is called knismesis, which is more of an unpleasant sensation that causes irritation or distress. You know, like the feeling of a bug crawling on you. It seems plausible (to me, anyway) that bugs may experience some kind of knismesis—but I think that someone needs to get in there and do some science, especially now that we have the tools to make it happen.
