Feed aggregator



When Xiaomi announced its CyberOne humanoid robot a couple of months back, it wasn’t entirely clear what the company was actually going to do with it. Our guess was that rather than pretending that CyberOne had some sort of practical purpose, Xiaomi would use it to explore technology that might have useful applications elsewhere, but there was no explicit suggestion that any actual research would come out of it. In a nice surprise, Xiaomi roboticists have taught the robot to do something that is, if not exactly useful, at least loud: to play the drums.

The input for this performance is a MIDI file, which the robot parses into drum beats. It then generates song-length sequences of coordinated whole-body trajectories synchronized to the music, which is tricky because the end effectors have to strike the drums exactly on the beat. CyberOne does a pretty decent job even when it’s going back and forth across the drum kit. This is perhaps not super cutting-edge humanoid research, but it’s still interesting to see what a company like Xiaomi has been up to. And to that end, we asked Zeyu Ren, a senior hardware engineer at the Xiaomi Robotics Lab, to answer a couple of questions for us.
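
For readers curious about the first stage of that pipeline, here’s a minimal sketch of parsing a MIDI file into time-stamped drum hits, using the mido library and the General MIDI convention that percussion lives on channel 10 (index 9). The drum mapping and file name are illustrative assumptions, not Xiaomi’s actual code.

```python
# Minimal sketch: turn a MIDI file into a list of (time, drum) hits.
# Assumes the `mido` package; GM_DRUMS is a small subset of the
# General MIDI percussion map, chosen for illustration.
import mido

GM_DRUMS = {36: "kick", 38: "snare", 42: "hihat", 49: "crash"}

def extract_drum_beats(path):
    """Return a list of (time_in_seconds, drum_name) events."""
    beats = []
    now = 0.0
    for msg in mido.MidiFile(path):  # iteration yields delta times in seconds
        now += msg.time
        if msg.type == "note_on" and msg.channel == 9 and msg.velocity > 0:
            drum = GM_DRUMS.get(msg.note)
            if drum is not None:
                beats.append((now, drum))
    return beats

if __name__ == "__main__":
    for t, drum in extract_drum_beats("song.mid"):  # hypothetical file
        print(f"{t:7.3f}s  {drum}")
```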

IEEE Spectrum: So why is Xiaomi working on a humanoid robot, anyway?

Zeyu Ren: There are three reasons why Xiaomi is working on humanoid robots. The first reason is that we are seeing a huge decline in the labor force in China, and the world. We are working on replacing the human labor force with humanoid robots even though there is a long way to go. The second reason is that we believe humanoid robots are the most technically challenging of all robot forms. By working on humanoid robots, we can also use this technology to solve problems on other robot forms, such as quadruped robots, robotic arms, and even wheeled robots. The third reason is that Xiaomi wants to be the most technically advanced company in China, and humanoid robots are sexy.

Why did you choose drumming to demonstrate your research?

Ren: After the official release of Xiaomi CyberOne on August 11, we got a lot of feedback from members of the public who didn’t have a background in robotics. They were more interested in seeing humanoid robots doing things that humans cannot easily do. Honestly speaking, it’s pretty difficult to find such scenarios, since we know that the first prototype of CyberOne is far behind humans.

But one day, one of our engineers who had just begun to play drums suggested that drumming might be an exception. She thought that compared to rookie drummers, humanoid robots have an advantage in hand-foot coordination and rhythmic control. We all thought it was a good idea, and drumming itself is super cool and interesting. So we chose drumming to demonstrate our research.

What was the most challenging part of this research?

Ren: The most challenging part of this research was that, given long sequences of drum beats, CyberOne needs to assign the sequences to each arm and leg and generate continuous, collision-free whole-body trajectories within its hardware constraints. So, we extract the basic beats and build our drum-beat motion trajectory library offline by optimization. Then, CyberOne can generate continuous trajectories consistent with any drum score. This approach gives CyberOne more freedom in playing the drums, limited only by the robot’s capabilities.
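
To make that two-stage approach concrete, here’s a hedged Python sketch: an offline library of per-hit trajectories (reduced here to just their execution times) and a greedy online assignment of incoming beats to limbs. The limb names, timings, and assignment rule are illustrative guesses; Xiaomi’s actual optimization is not public.

```python
# Offline: one precomputed (optimized) trajectory per (limb, drum) pair,
# reduced here to its execution time in seconds. Values are invented.
TRAJECTORY_TIME = {
    ("left_arm", "snare"): 0.15, ("left_arm", "hihat"): 0.20,
    ("right_arm", "snare"): 0.18, ("right_arm", "hihat"): 0.14,
    ("right_leg", "kick"): 0.12,
}

def assign_beats(beats):
    """Greedily assign each (time, drum) beat to a limb that can reach it
    in time, preferring the limb that frees up earliest."""
    free_at = {limb: 0.0 for limb, _ in TRAJECTORY_TIME}
    plan = []
    for t, drum in sorted(beats):
        candidates = [
            limb for (limb, d), dur in TRAJECTORY_TIME.items()
            if d == drum and free_at[limb] + dur <= t
        ]
        if not candidates:
            continue  # beat dropped; a real planner would re-optimize
        limb = min(candidates, key=lambda l: free_at[l])
        free_at[limb] = t  # limb is committed until it strikes on the beat
        plan.append((t, drum, limb))
    return plan

print(assign_beats([(0.5, "kick"), (0.5, "hihat"), (1.0, "snare"), (1.0, "hihat")]))
```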

What different things do you hope that this research will help your robot do in the future?

Ren: Drumming requires CyberOne to coordinate whole-body motions to achieve a fast, accurate, and large range of movement. We first want to find the limit of our robot in terms of hardware and software to provide a reference for the next-generation design. Also, through this research, we have formed a complete set of automatic drumming methods for robots to perform different songs, and this experience also helps us to more quickly realize the development of other musical instruments to be played by robots.

What are you working on next?

Ren: We are working on the second generation of CyberOne, and hope to further improve its locomotion and manipulation ability. On the hardware level, we plan to add more degrees of freedom, integrate self-developed dexterous hands, and add more sensors. On the software level, more robust control algorithms for locomotion and vision will be developed.

Flapping wing micro aerial vehicles (FWMAVs) are known for their flight agility and maneuverability. These bio-inspired, lightweight flying robots are still limited in their ability to fly in direct wind and gusts, as their stability is severely compromised in contrast with their biological counterparts. To this end, this work aims to make in-gust flight of flapping wing drones possible using an embodied airflow-sensing approach combined with an adaptive control framework at the velocity and position control loops. First, an extensive experimental campaign is conducted on a real FWMAV to generate a reliable and accurate model of the in-gust flight dynamics, which informs the design of the adaptive position and velocity controllers. In an extended experimental validation, this embodied airflow-sensing approach integrated with the adaptive controller reduces the root-mean-square errors along the wind direction by 25.15% when the drone is subject to frontal wind gusts of alternating speeds up to 2.4 m/s, compared with a standard cascaded PID controller. The proposed sensing and control framework improves flight performance reliably and serves as a basis for future progress in in-gust flight of lightweight FWMAVs.
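
As a rough illustration of the control idea (not the paper’s actual implementation), here is a sketch of a velocity-loop controller whose gains and feedforward term adapt to an onboard airflow estimate. The gain schedule and constants are invented for illustration.

```python
# Hedged sketch: an outer velocity loop that stiffens its gains and adds a
# feedforward term as the sensed wind speed grows. All numbers are assumptions.
class AdaptiveVelocityController:
    def __init__(self, kp=1.0, ki=0.2, dt=0.01):
        self.kp0, self.ki0, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, v_ref, v_meas, airflow):
        scale = 1.0 + 0.5 * abs(airflow)  # assumed linear gain schedule
        err = v_ref - v_meas
        self.integral += err * self.dt
        feedforward = 0.3 * airflow       # assumed drag-compensation term
        return scale * (self.kp0 * err + self.ki0 * self.integral) + feedforward

ctrl = AdaptiveVelocityController()
print(ctrl.update(v_ref=1.0, v_meas=0.8, airflow=2.4))  # command under a 2.4 m/s gust
```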

The use of manipulators in space missions has become popular, as their applications extend to various missions such as on-orbit servicing, assembly, and debris removal. Because human access to space is limited, such robots must accomplish their tasks autonomously and under severe operating conditions, such as the occurrence of faults or uncertainties. For robots and manipulators used in space missions, this paper presents a robust control technique based on Model Predictive Path Integral Control (MPPI). The proposed algorithm, named Planner-Estimator MPPI (PE-MPPI), comprises a planner and an estimator. The planner controls the system, while the estimator corrects the system parameters in the case of parameter uncertainties. The performance of the proposed controller is investigated under parameter uncertainties and system component failure in the pre-capture phase of a debris-removal mission. Simulation results confirm the superior performance of PE-MPPI over vanilla MPPI.
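
For readers unfamiliar with MPPI, here is a compact sketch of the vanilla algorithm on a toy double integrator. The paper’s planner-estimator structure, where a second loop refines uncertain model parameters, is omitted, and all dynamics, costs, and constants here are placeholders.

```python
# Vanilla MPPI: sample control perturbations, roll out the model, and
# average the perturbations weighted by exponentiated negative cost.
import numpy as np

def mppi_step(x0, U, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0):
    """One MPPI update: returns the improved control sequence."""
    H, m = U.shape
    noise = sigma * np.random.randn(n_samples, H, m)
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(H):
            x = dynamics(x, U[t] + noise[k, t])
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return U + np.einsum("k,khm->hm", w, noise)

# Toy double integrator: drive position and velocity to zero.
dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u[0]])
cst = lambda x: x[0] ** 2 + 0.1 * x[1] ** 2
U = np.zeros((20, 1))
x = np.array([1.0, 0.0])
for _ in range(50):
    U = mppi_step(x, U, dyn, cst)
    x = dyn(x, U[0])
    U = np.roll(U, -1, axis=0)  # receding-horizon shift
    U[-1] = 0.0
print("final state:", x)
```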

Due to the COVID-19 pandemic, people have had to work and study over the Internet, and the metaverse has become a part of the lives of people worldwide. The advent of technology linking the real and virtual worlds has facilitated the transmission of spatial audio and haptics, allowing the metaverse to offer multisensory experiences in diverse fields, especially in teaching. The main idea of the proposed project is the development of a simple intelligent system for meta-learning. The suggested system should be self-configurable according to the different users of the metaverse. We aimed to design and create a virtual learning environment using Open Simulator, based on a 3D virtual environment and a simulation of the real-world environment. We then connected this environment to a learning management system (Moodle) through a technology for 3D virtual environments (Sloodle) to allow the management of students, especially those with different abilities, and to follow up on their activities, tests, and exams. This environment also has the advantage of storing educational content. We evaluated the performance of Open Simulator in both standalone and grid modes based on login times. The results showed login times of 12 s for standalone mode and 16 s for grid mode, which demonstrated the robustness of the proposed platform. We also tested the system on 50 learners with disabilities using an independent-samples t-test. A test was conducted in a mathematics course, in which the students were divided into two equal groups (n = 25 each) to take the test either traditionally or using the chair-test tool, one of the most important tools of the Sloodle technology. According to the results, the null hypothesis was rejected, and we accepted the alternative hypothesis that there was a difference in achievement between the two groups.
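
The group comparison described above is a standard independent-samples t-test; a minimal sketch with SciPy looks like the following, where the scores are simulated placeholders rather than the study’s data.

```python
# Independent-samples t-test between two groups of 25, as in the study design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
traditional = rng.normal(70, 10, size=25)  # hypothetical achievement scores
chair_test  = rng.normal(78, 10, size=25)

t, p = stats.ttest_ind(traditional, chair_test)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: group achievement differs.")
```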

Interest in using telepresence robots in a variety of educational contexts is growing, as they have great potential to enhance the educational experience of remote learners and provide support for teachers. This paper describes a study examining the perception of Georgian university personnel about the use of telepresence robots in education. This exploratory research aimed to obtain evidence-based information on how personnel (16 persons) from eight Georgian universities perceived the role of telepresence robots in enhancing learning and teaching, and what challenges, benefits, opportunities, weaknesses and threats would characterise these robots. The results of the study revealed that the university personnel perceived telepresence robots to have great potential to enhance educational activities. In addition, the participants indicated the major challenges, benefits, opportunities, weaknesses and threats regarding integrating telepresence robots into teaching and learning in Georgia. Recommendations for future research are also presented.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND
ICRA 2023: 29 May–2 June 2023, LONDON

Enjoy today’s videos!

The videos show scenes from the RoboCup 2022 Humanoid AdultSize competition in Bangkok, Thailand. The robots of Team NimbRo of the University of Bonn, Germany, won the main soccer tournament, the Drop-In tournament, and the Technical Challenges. Consequently, NimbRo came in first in the overall Best-Humanoid ranking.

[ NimbRo ]

Have you ever seen a robot dancing? One of the highlights of the 20th anniversary event of Robotnik was the choreography between the professional ballet dancer Sherezade Soriano and the mobile manipulator robot RB-KAIROS+.

[ Robotnik ]

This video celebrates the 10-year anniversary of the University of Zurich’s Robotics and Perception Group, led by Prof. Davide Scaramuzza. The lab was founded in 2012. More than 300 people worked in our lab as BSc, MSc, and Ph.D. students, postdocs, and visiting researchers. We thank all of them for contributing to our research. The lab made important contributions to autonomous, agile, vision-based navigation of micro aerial vehicles and event cameras for mobile robotics and computer vision.

Ten years, so much accomplished, and they’re just getting started!

[ UZH RPG ]

Printed fiducial markers are inexpensive, easy to deploy, robust, and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences.

[ CMU FIG ]

CNN’s “Tech for Good” hears from Marko Bjelonic, from ETH Zürich’s Robotic Systems Lab and founder of the Swiss Mile robot, who believes automated machines are the key to automating our cities. His four-legged-and-wheeled robot is able to change shape within seconds, overcome steps, and navigate between indoor and outdoor environments. It’s hoped that the bot, which can travel up to 20 kilometers an hour and carry 50 kilograms, has the potential to serve as a member of search-and-rescue teams in the future.

[ Swiss-Mile ]

Thanks, Marko!

Be the tiny DIY robot cat you’ve always wanted to be!

All of this is open source, and you can get it running on your own Nybble (which makes a great holiday gift!) at the link below.

[ Petoi ]

Thanks, Rz!

In his dissertation “Autonomous Operation of a Reconfigurable Multi-Robot System for Planetary Space Missions,” Thomas Röhr deals with heterogeneous robot teams whose individual agents can also join to form more capable agents due to their modular structure. This video highlights an experiment that shows the feasibility and the potential of the autonomous use of reconfigurable systems for planetary-exploration missions. The experiments feature the autonomous execution of an action sequence for multirobot cooperation for soil sampling and handover of a payload containing the soil sample.

[ DFKI ]

Thanks, Thomas!

Haru has had a busy year!

[ Haru Fest ]

Thanks, Randy!

This is really pretty impressive for remote operation, but it’s hard to tell how much of what we see is the capability of the system, and how much is the skill and experience of the operator.

[ Sanctuary AI ]

Cargo drones are designed to carry payloads with a predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and masses.

[ Paper ]

Building tiny giant robots requires lots of little fixtures, and I’m here for it.

[ Gundam Factory ]

The load-bearing assessment that’s part of this research is particularly cool.

[ DFKI ]

The Utah Bionic Leg, developed by University of Utah mechanical-engineering associate professor Tommaso Lenzi and his team in the HGN Lab, is a motorized prosthetic for lower-limb amputees. The leg uses motors, processors, and advanced artificial intelligence that all work together to give amputees more power to walk, stand up, sit down, and ascend and descend stairs and ramps.

[ Utah Engineering ]

PLEN is all ready for the World Cup.

[ PLEN ]

The Misty platform supports multiple programming languages, including Blockly and Python, making it the perfect programming and robotics learning tool for students of all ages.

[ Misty ]

Sarcos Technology and Robotics Corp. designs, develops, and manufactures a broad range of advanced mobile robotic systems that redefine human possibilities and are designed to enable the safest, most productive workforce in the world. Sarcos robotic systems operate in challenging, unstructured, industrial environments and include teleoperated robotic systems, a powered robotic exoskeleton, and software solutions that enable task autonomy.

[ Sarcos ]

Teaser for the NCCR Robotics documentary coming in late 2022.

[ NCCR Robotics ]

A robotic feeding system must be able to acquire a variety of foods. We propose a general bimanual scooping primitive and an adaptive stabilization strategy that enables successful acquisition of a diverse set of food geometries and physical properties. Our approach, CARBS: Coordinated Acquisition with Reactive Bimanual Scooping, learns to stabilize without impeding task progress by identifying high-risk foods and robustly scooping them using closed-loop visual feedback.

[ Paper ]

Join Jonathan Gammell and our guest speaker Larry Matthies, NASA JPL, discussing “In situ mobility for planetary exploration” in the third seminar of our Anniversary Series.

[ ORI ]



Preoperative planning and intraoperative system setup are crucial steps in successfully integrating robotically assisted surgical systems (RASS) into the operating room. Efficient setup planning directly affects overall procedural costs and increases the acceptance of RASS by surgeons and clinical personnel. Due to the kinematic limitations of RASS, selecting an optimal robot base location and surgical access point for the patient is essential to avoid potentially critical complications due to reachability issues. To this end, this work proposes a novel, versatile method for RASS setup and planning based on robot capability maps (CMAPs). CMAPs are a common tool for workspace analysis in robotics, as they are in general applicable to any robot kinematics; however, they have not yet been fully exploited for RASS setup and planning. By adapting global CMAPs to procedure-specific tasks and constraints, a novel RASS capability map (RASSCMAP) is generated. RASSCMAPs can also be derived to comply with kinematic access constraints, such as access points in laparoscopy. RASSCMAPs are versatile and applicable to any kind of surgical procedure: on the one hand, they can aid intraoperative, experience-based system setup by visualizing online the robot’s capability to perform a task; on the other hand, they can be used to find the optimal setup preoperatively by applying a multi-objective optimization based on a genetic algorithm, which is then transferred to the operating room during system setup. To illustrate these applications, the method is evaluated in two different use cases, namely pedicle screw placement in vertebral fixation procedures and general laparoscopy. The proposed RASSCMAPs help increase the overall clinical value of RASS by reducing system setup time and guaranteeing proper robot reachability to successfully perform the intended surgeries.
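
To give a feel for what a capability map is, here is a toy sketch: a voxel grid of reachability scores around the robot base, queried to rank candidate base positions against a set of task points. The grid model and geometry are invented; a real CMAP is built from dense inverse-kinematics sampling of the actual robot.

```python
# Toy capability map: reachability peaks ~0.6 m from the base and falls off.
import numpy as np

res = 0.1                                 # voxel edge, meters
grid = np.zeros((20, 20, 20))             # 2 m cube centered on the base
center = np.array(grid.shape) / 2.0
for idx in np.ndindex(grid.shape):
    r = np.linalg.norm((np.array(idx) - center) * res)
    grid[idx] = max(0.0, 1.0 - abs(r - 0.6) / 0.4)

def reachability(base_xyz, task_points):
    """Mean capability score of task points expressed in the base frame."""
    score = 0.0
    for p in task_points:
        idx = tuple(((p - base_xyz) / res + center).astype(int))
        if all(0 <= i < n for i, n in zip(idx, grid.shape)):
            score += grid[idx]
    return score / len(task_points)

tasks = [np.array([0.5, 0.2, 0.3]), np.array([0.6, -0.1, 0.4])]  # e.g. screw poses
candidates = [np.array([x, 0.0, 0.0]) for x in np.arange(-0.4, 0.5, 0.1)]
best = max(candidates, key=lambda b: reachability(b, tasks))
print("best base position:", best, "score:", round(reachability(best, tasks), 3))
```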

Living systems ensure their fitness by self-regulating. The optimal matching of their behavior to the opportunities and demands of the ever-changing natural environment is crucial for satisfying physiological and cognitive needs. Although homeostasis explains how organisms maintain their internal states within a desirable range, the problem of orchestrating different homeostatic systems has not been fully explained. In the present paper, we argue that attractor dynamics emerge from the competitive relation of internal drives, resulting in the effective regulation of adaptive behaviors. To test this hypothesis, we develop a biologically grounded attractor model of allostatic orchestration that is embedded in a synthetic agent. Results show that the resulting neural mass model allows the agent to reproduce the navigational patterns of a rodent in an open field. Moreover, when we explore the robustness of our model in a dynamically changing environment, the synthetic agent pursues the stability of the self, with its internal states dependent on environmental opportunities to satisfy its needs. Finally, we elaborate on the benefits of resetting the model’s dynamics after drive-completion behaviors. Altogether, our studies suggest that the neural mass allostatic model adequately reproduces self-regulatory dynamics while overcoming the limitations of previous models.
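
As a loose illustration of drive competition producing attractor-like behavior selection (not the authors’ neural mass model), here is a minimal simulation of two homeostatic drives in mutual inhibition; all parameters are assumptions.

```python
# Two drives (e.g., hunger and thirst) excited by their need and inhibited by
# their competitor; the winner's behavior gradually reduces its own need.
import numpy as np

def simulate(steps=2000, dt=0.01, tau=0.1, inhibition=2.0):
    need = np.array([0.8, 0.4])   # initial deficits
    act = np.zeros(2)             # drive activations
    for _ in range(steps):
        inp = need - inhibition * act[::-1]       # competitor suppresses
        act += dt / tau * (-act + np.maximum(inp, 0.0))
        winner = act.argmax()
        need[winner] = max(0.0, need[winner] - 0.0005)  # acting satisfies the need
    return act, need

act, need = simulate()
print("activations:", act.round(3), "remaining needs:", need.round(3))
```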

Swarm behaviors offer scalability and robustness to failure through a decentralized and distributed design. When designing coherent group motion, as in swarm flocking, virtual potential functions are a widely used mechanism to ensure these properties. However, arbitrating among different virtual potential sources in real time has proven difficult. Such arbitration is often sensitive to the fine tuning of the control parameters used to select among the sources and to manually set cut-offs used to balance stability against velocity. This reliance on parameter tuning makes these methods ill-suited to field operations of aerial drones, whose fast nonlinear dynamics undermine the stability of potential functions designed for slower dynamics; the situation is further exacerbated because parameters fine-tuned in the lab are often not appropriate for achieving satisfactory performance in the field. In this work, we investigate the dynamic tuning of local interactions in a swarm of aerial vehicles with the objective of tackling the stability-velocity trade-off. We let the focal agent autonomously and adaptively decide which source of local information to prioritize and to what degree—for example, which neighbor interaction or goal direction. The main novelty of the proposed method lies in a Gaussian kernel used to regulate the importance of each element in the swarm scheme. Each agent in the swarm relies on this mechanism at every algorithmic iteration and uses it to tune the final output velocities. We show that the presented approach can achieve cohesive flocking while navigating through a set of waypoints at speed. In addition, the proposed method enables other desired field properties, such as automatic group splitting and joining over long distances. These properties have been empirically demonstrated in an extensive set of simulated and field experiments, in communication-full and communication-less scenarios. Moreover, the presented approach has proven robust to failures, intermittent communication, and noisy perceptions.
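
A hedged sketch of the kernel idea: each interaction term contributes to the commanded velocity with an importance weight that varies smoothly, via a Gaussian, with a relevant distance. The kernel widths and blending rule below are illustrative assumptions, not the paper’s parameters.

```python
# Gaussian-kernel weighting of flocking terms: cohesion matters most near the
# reference spacing, separation up close, and the goal when neighbors are settled.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def flocking_velocity(pos, neighbors, goal, d_ref=2.0):
    d = np.array([np.linalg.norm(n - pos) for n in neighbors])
    w_coh = gaussian(d, d_ref, 1.0)   # attract toward reference spacing
    w_sep = gaussian(d, 0.0, 0.8)     # repel when too close
    v = np.zeros(2)
    for n, wc, ws in zip(neighbors, w_coh, w_sep):
        to_n = n - pos
        v += wc * to_n - ws * to_n
    w_goal = gaussian(d.min(), d_ref, 1.5) if len(d) else 1.0
    v += w_goal * (goal - pos)
    return v

pos = np.array([0.0, 0.0])
neighbors = [np.array([1.5, 0.0]), np.array([0.3, 0.2])]
print(flocking_velocity(pos, neighbors, np.array([5.0, 5.0])))
```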

This article presents a perspective on the research challenge of understanding and synthesizing anthropomorphic whole-body contact motions through a platform called the “interactive cyber-physical human (iCPH)” for data collection and augmentation. The iCPH platform combines humanoid robots as “physical twins” of humans with “digital twins” that simulate humans and robots in cyberspace. Several critical research topics are introduced to address this challenge by leveraging advanced model-based analysis together with data-driven learning to exploit the data collected from the integrated iCPH platform. The first topic is the definition of a general description as a common basis for contact motions compatible with both humans and humanoids. The second challenge is the continual learning of a feasible contact motion network, benefiting from a model-based approach and machine learning bridged by the efficient analytical gradient computation developed by the author and his collaborators. The final target is to establish a high-level symbolic system allowing automatic understanding and generation of contact motions in unexperienced environments. The proposed approaches are still under investigation, and the author expects that this article will trigger discussions and further collaborations among different research communities, including robotics, artificial intelligence, neuroscience, and biomechanics.

The smart factory is at the heart of Industry 4.0 and is the new paradigm for establishing advanced manufacturing systems and realizing modern manufacturing objectives such as mass customization, automation, efficiency, and self-organization all at once. Such manufacturing systems, however, are characterized by dynamic and complex environments where a large number of decisions must be made for smart components such as production machines and the material handling system in a real-time and optimal manner. AI offers key intelligent control approaches for realizing efficiency, agility, and automation simultaneously. One of the most challenging problems in this regard is uncertainty: due to the dynamic nature of smart manufacturing environments, sudden foreseen or unforeseen events occur that must be handled in real time. Due to the complexity and high dimensionality of smart factories, it is not possible to predict all possible events or to prepare appropriate response scenarios in advance. Reinforcement learning is an AI technique that provides the intelligent control processes needed to deal with such uncertainties. Given the distributed nature of smart factories and the presence of multiple decision-making components, multi-agent reinforcement learning (MARL) should be incorporated instead of single-agent reinforcement learning (SARL); owing to the complexities involved in its development, however, MARL has attracted less attention. In this research, we review the literature on the applications of MARL to tasks within a smart factory and then present a mapping connecting smart factory attributes to the equivalent MARL features, based on which we suggest MARL as one of the most effective approaches for implementing the control mechanism of smart factories.

Road infrastructure is one of the most vital assets of any country. Keeping road infrastructure clean and unpolluted is important for ensuring road safety and reducing environmental risk. However, roadside litter picking is an extremely laborious, expensive, monotonous and hazardous task. Automating the process would save taxpayers money and reduce the risk for road users and the maintenance crew. This work presents LitterBot, an autonomous robotic system capable of detecting, localizing and classifying common roadside litter. We use a learning-based object detection and segmentation algorithm trained on the TACO dataset to identify and classify garbage. We develop a robust modular manipulation framework using soft robotic grippers and a real-time visual-servoing strategy, which enables the manipulator to pick up objects of variable sizes and shapes even in dynamic environments. The robot achieves classified picking and binning success rates above 80% across all experiments, validated on a wide variety of test litter objects in static single-object and cluttered configurations and with dynamically moving test objects. Our results showcase how a deep model trained on an online dataset can be deployed in real-world applications with high accuracy through the appropriate design of a control framework around it.
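
As an illustration of the visual-servoing loop (not LitterBot’s actual code), here is a minimal proportional image-based servoing step that drives the pixel error between the detected litter centroid and the image center toward zero; the gain and image size are assumptions.

```python
# Proportional image-based visual servoing: command an end-effector velocity
# that moves the detected target toward the image center.
import numpy as np

K_P = 0.002                  # pixels -> m/s, assumed gain
IMAGE_CENTER = np.array([320.0, 240.0])  # assumed 640x480 camera

def servo_step(target_px):
    """Return an (x, y) end-effector velocity from a detected target centroid."""
    error = target_px - IMAGE_CENTER
    return -K_P * error      # move so the target drifts toward the image center

# e.g. a detection at pixel (400, 180):
print(servo_step(np.array([400.0, 180.0])))  # -> [-0.16  0.12]
```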

Communication therapies based on conversations with caregivers, such as reminiscence therapy and music therapy, have been proposed to delay the progression of dementia. Although these therapies have been reported to improve the cognitive and behavioral functions of elderly people suffering from dementia, caregivers do not have enough time to administer them, especially in Japan, where the caregiving workforce is inadequate. Consequently, the progression of dementia in the elderly and the accompanying increased burden on caregivers have become a social problem. While the automation of communication therapy using robots and virtual agents has been proposed, the accuracy of both speech recognition and dialogue control is still insufficient to improve the cognitive and behavioral functions of elderly people with dementia. In this study, we examine the effects of a Japanese word-chain game (Shiritori game) with an interactive robot, and of music listening, on the maintenance and improvement of cognitive and behavioral scales [Mini-Mental State Examination (MMSE) and Dementia Behavior Disturbance scale (DBD)] in elderly people with dementia. These activities provide linguistic and phonetic stimuli, and they are simpler to implement than conventional daily conversation. The results of our Wizard-of-Oz-based experiments show that the cognitive and behavioral function scores of elderly participants who periodically played the Shiritori game with an interactive robot improved significantly over those of a control group. No such effect was observed with the music-listening stimuli. Our further experiments showed that, in the Shiritori intervention group, there was a ceiling on the increase in MMSE: the lower the MMSE before participating in the experiment, the greater the increase. Furthermore, greater improvement in DBD was observed when participants played the Shiritori game actively. Since the Shiritori game is relatively easy to automate, our findings show the potential benefits of automating dementia therapies to maintain cognitive and behavioral functions.



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Humanoid robots are a lot more capable than they used to be, but for most of them, falling over is still borderline catastrophic. Understandably, the focus has been on getting humanoid robots to succeed at things as opposed to getting robots to tolerate (or recover from) failing at things, but sometimes, failure is inevitable because stuff happens that’s outside your control. Earthquakes, accidentally clumsy grad students, tornadoes, deliberately malicious grad students—the list goes on.

When humans lose their balance, the go-to strategy is a highly effective one: use whatever happens to be nearby to keep from falling over. While for humans this approach is instinctive, it’s a hard problem for robots, involving perception, semantic understanding, motion planning, and careful force control, all executed under aggressive time constraints. In a paper published earlier this year in IEEE Robotics and Automation Letters, researchers at Inria in France show some early work getting a TALOS humanoid robot to use a nearby wall to successfully keep itself from taking a tumble.

The tricky thing about this technique is how little time a robot has to understand that it’s going to fall, sense its surroundings, make a plan to save itself, and execute that plan in time to avoid falling. In this paper, the researchers address most of these things—the biggest caveat is probably that they’re assuming that the location of the nearby wall is known, but that’s a relatively straightforward problem to solve if your robot has the right sensors on it.

Once the robot detects that something in its leg has given out, its Damage Reflex (“D-Reflex”) kicks in. D-Reflex is built around a neural network that was trained in simulation (taking a mere 882,000 simulated trials); with the posture of the robot and the location of the wall as inputs, the network outputs how likely a potential wall contact is to stabilize the robot, taking just a few milliseconds. The system doesn’t actually need to know anything specific about the robot’s injury, and will work whether the actuator is locked up, moving freely but not controllably, or completely absent, the “amputation” case. Of course, reality rarely matches simulation, and it turns out that a damaged robot that’s tipping over doesn’t reliably make contact with the wall exactly where it should, so the researchers had to tweak things to make sure that the robot stops its hand as soon as it touches the wall, whether it’s in the right spot or not. This method worked pretty well—using D-Reflex, the TALOS robot was able to avoid falling in three out of four trials where it would otherwise have fallen. Considering how expensive robots like TALOS are, this is a pretty great result, if you ask me.
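
Here’s a hedged sketch of how such a reflex could be queried at run time: score a grid of candidate hand positions on the known wall plane with the trained network, and reach for the most promising one. The network below is an untrained stand-in, and the input encoding, grid, and sizes are assumptions rather than the paper’s architecture.

```python
# Stand-in for a D-Reflex-style query: rank candidate wall contacts by the
# predicted probability that touching there stabilizes the robot.
import numpy as np
import torch
import torch.nn as nn

net = nn.Sequential(                 # untrained stand-in for the reflex network
    nn.Linear(3 + 2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),  # P(contact stabilizes the robot)
)

def choose_contact(posture, n=25):
    """posture: assumed 3-vector summary (e.g., torso roll, pitch, lean rate)."""
    ys, zs = np.meshgrid(np.linspace(-0.5, 0.5, n), np.linspace(0.8, 1.6, n))
    candidates = np.stack([ys.ravel(), zs.ravel()], axis=1)  # wall-plane (y, z)
    x = torch.tensor(
        np.hstack([np.tile(posture, (len(candidates), 1)), candidates]),
        dtype=torch.float32,
    )
    with torch.no_grad():
        p = net(x).squeeze(1).numpy()
    best = p.argmax()
    return candidates[best], p[best]

target, prob = choose_contact(np.array([0.1, 0.25, 0.4]))
print("reach toward wall point", target, "stabilization prob", round(float(prob), 3))
```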

The obvious question at this point is, “okay, now what?” Well, that’s beyond the scope of this research, but generally “now what” consists of one of two things. Either the robot falls anyway, which can definitely happen even with this method because some configurations of robot and wall are simply not avoidable, or the robot doesn’t fall and you end up with a slightly busted robot leaning precariously against a wall. In either case, though, there are options. We’ve seen a bunch of complementary work on surviving falls with humanoid robots in one way or another. And in fact one of the authors of this paper, Jean-Baptiste Mouret, has already published some very cool research on injury adaptation for legged robots.

In the future, the researchers hope to extend this approach to robots that are moving dynamically, which is definitely going to be a lot more challenging, but potentially a lot more useful.

“First do not fall: learning to exploit a wall with a damaged humanoid robot,” by Timothee Anne, Eloïse Dalin, Ivan Bergonzani, Serena Ivaldi, and Jean-Baptiste Mouret from Inria, is published in IEEE Robotics and Automation Letters.



Complex and bulky driving systems are among the main issues for soft robots driven by pneumatic actuators. Self-excited oscillation, in which oscillatory actuation is generated from a non-oscillatory input, is a promising approach to this problem. However, the small variety of self-excited pneumatic actuators currently available limits their applications. We present a simple, self-excited pneumatic valve that uses a flat ring tube (FRT), a device originally developed as a self-excited pneumatic actuator. First, we explore the driving principle of the self-excited valve and investigate the effect of flow rate and FRT length on its driving frequency. Then, a locomotive robot containing the valve is demonstrated. The prototype succeeded in walking at 5.2 mm/s with a valve oscillation frequency of 1.5 Hz, showing the applicability of the proposed valve to soft robotics.
