Feed aggregator

Over the past few years, we’ve seen 3D printers used in increasingly creative ways. There’s been a realization that fundamentally, a 3D printer is a full-fledged, multi-axis robotic manipulation system—which is an extraordinarily versatile thing to have in your home. Rather than just printing static objects, folks are now using 3D printers as pick-and-place systems to manufacture drones, and as custom filament printers to make objects out of programmable materials, to highlight just two examples.

In an update to some research first presented at the end of 2019, researchers from Meiji University in Japan have developed one of the cleverest 3D printer enhancements that we’ve yet seen. Called Functgraph, it turns a conventional 3D printer into a “personal factory automation” system by printing and manipulating the tools required to do complex tasks entirely on the print bed. A paper on Functgraph, by Yuto Kuroki and Keita Watanabe, was presented at the Conference on 4D and Functional Fabrication 2020 in October.

Far as I can tell, this is a bone-stock 3D printer with the exception of two modifications, both of which it presumably printed itself. The first is a tool holder on the print head, and the second is a tool release mechanism that sits off to the side. These two things, taken together, give Functgraph access to custom tools limited only by what it can print; and when used in combination with 3D printed objects designed to interact with these tools (support structures with tool interfaces to snap them off, for example), it really is possible to print, assemble, manipulate, and actuate entire small-scale factories.
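For a sense of how simple the choreography could be at the G-code level, here is a rough sketch of what a scripted tool pickup might look like. To be clear, this is an illustrative guess rather than anything from the Functgraph paper: the coordinates, feed rates, and latching motion are made up, and the real system generates its motions from the geometry of the printed tools.

```python
# Hypothetical sketch of a Functgraph-style tool pickup scripted in G-code.
# Coordinates, feed rates, and the latching motion are illustrative assumptions,
# not taken from the Functgraph paper; real moves depend on the printer and on
# where each tool was printed on the bed.

def pickup_tool(tool_x, tool_y, tool_z, clearance=10.0, feed=1800):
    """Generate G-code that lowers the head onto a printed tool's
    coupling interface and lifts it clear of its support structure."""
    return "\n".join([
        "G90",                                    # absolute positioning
        f"G0 Z{tool_z + clearance:.2f}",          # hop above the tool
        f"G0 X{tool_x:.2f} Y{tool_y:.2f}",        # move over the coupling
        f"G1 Z{tool_z:.2f} F{feed}",              # lower onto the interface
        f"G1 X{tool_x + 5.0:.2f} F{feed}",        # slide sideways to latch
        f"G1 Z{tool_z + clearance:.2f} F{feed}",  # lift, snapping the tool
                                                  # off its printed support
    ])

if __name__ == "__main__":
    print(pickup_tool(120.0, 80.0, 12.5))
```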

Yuto Kuroki, first author on the paper describing Functgraph, describes his inspiration for some of the particular tasks shown in the demo video:

The future that Functgraph aims for is as a new platform that downloads apps like a smartphone and provides physical support in the real world—the realization of personal factory automation.

When it comes to the sandwich app, there are many ways to look up recipes, but in the end, humans have to make them. I made a prototype based on the idea of how easy it would be if I could wake up in the morning and say, "OK Google, make a breakfast sandwich."

Regarding the rabbit factory, it’s an application that mass-produces and packs rabbit figures. The box on the right is an interior box that keeps the product from slipping, and the box on the left is an exterior box that is placed in the store and catches the eye of customers. The idea is that the manufactured figure is packed as-is, ready for shipment. In this video two are packed in a row, so in principle it would be possible to make hundreds or thousands of them in a row.

The reason for making a prototype of a car-making app is a strange story, but the idea is that if you send a 3D printer to a remote place like space, it will be able to generate what you need on the spot. Even if you’re exploring the Moon and your car breaks, you could procure a new one on the spot with a 3D printer, without specialized knowledge, dedicated machines, or human hands. This research shows that 3D printers can fulfill individual desires and purposes unattended and automatically. I think that with Functgraph, 3D printers can truly evolve into ‘machines that can do anything.’

The field of musical robotics presents an interesting case study of the intersection between creativity and robotics. While the potential for machines to express creativity represents an important issue in the field of robotics and AI, this subject is especially relevant in the case of machines that replicate human activities that are traditionally associated with creativity, such as music making. There are several different approaches that fall under the broad category of musical robotics, and creativity is expressed differently based on the design and goals of each approach. By exploring elements of anthropomorphic form, capacity for sonic nuance, control, and musical output, this article evaluates the locus of creativity in six of the most prominent approaches to musical robots, including: 1) nonspecialized anthropomorphic robots that can play musical instruments, 2) specialized anthropomorphic robots that model the physical actions of human musicians, 3) semi-anthropomorphic robotic musicians, 4) non-anthropomorphic robotic instruments, 5) cooperative musical robots, and 6) individual actuators used for their own sound production capabilities.

The assessment of rehabilitation robot safety is a vital aspect of the development process, but it is often experienced as difficult: there are gaps in best practices and in the knowledge needed to ensure the safe use of rehabilitation robots. Currently, safety is commonly assessed by monitoring the occurrence of adverse events. The aim of this article is to explore how the safety of rehabilitation robots can be assessed early in the development phase, before they are used with patients. We suggest a uniform approach for the safety validation of robots closely interacting with humans, based on safety skills and validation protocols. Safety skills are an abstract representation of a robot’s ability to reduce a specific risk or deal with a specific hazard. They can be implemented in various ways, depending on the application requirements, which enables the use of a single safety skill across a wide range of applications and domains. Safety validation protocols have been developed that correspond to these skills and take domain-specific conditions into account. This gives robot users and developers concise testing procedures to prove the mechanical safety of their robotic system, even in domains that lack standards and best practices, such as healthcare. Based on knowledge about adverse events occurring in rehabilitation robot use, we identified multi-directional excessive forces at the soft-tissue and musculoskeletal levels as the most relevant hazards for rehabilitation robots and related them to four safety skills, providing a concrete starting point for the safety assessment of rehabilitation robots. We further identified a number of gaps that need to be addressed in the future to pave the way for more comprehensive guidelines for rehabilitation robot safety assessments. Predominantly, besides new developments of safety-by-design features, there is a strong need for reliable measurement methods as well as acceptable limit values for human-robot interaction forces at both the skin and joint levels.

Tracking the 6D pose and velocity of objects is a fundamental requirement for modern robotic manipulation tasks. This paper proposes a 6D object pose tracking algorithm, called MaskUKF, that combines deep object segmentation networks and depth information with a serial Unscented Kalman Filter to track the pose and velocity of an object in real time. MaskUKF achieves, and in most cases surpasses, state-of-the-art performance on the YCB-Video pose estimation benchmark without the need for expensive ground truth pose annotations at training time. Closed-loop control experiments on the iCub humanoid platform in simulation show that joint pose and velocity tracking helps achieve higher precision and reliability than one-shot deep pose estimation networks. A video of the experiments is available as Supplementary Material.
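For readers who would like a concrete picture of the filtering machinery MaskUKF builds on, here is a minimal Unscented Kalman Filter predict/update step in NumPy. This is a generic textbook sketch under assumed models, not the authors' implementation: MaskUKF's state is a full 6D pose plus velocity and its measurements come from segmented depth points, which this toy constant-velocity example does not attempt to reproduce.

```python
# Minimal generic UKF sketch (assumed models, not the MaskUKF implementation).
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, f, h, Q, R, kappa=0.0):
    """One predict + update cycle: f is the motion model, h the measurement model."""
    # Predict: push sigma points through the motion model.
    X, w = sigma_points(x, P, kappa)
    Xp = np.array([f(s) for s in X])
    x_pred = w @ Xp
    P_pred = Q + sum(wi * np.outer(s - x_pred, s - x_pred) for wi, s in zip(w, Xp))
    # Update: push new sigma points through the measurement model.
    X2, w2 = sigma_points(x_pred, P_pred, kappa)
    Zp = np.array([h(s) for s in X2])
    z_pred = w2 @ Zp
    S = R + sum(wi * np.outer(zi - z_pred, zi - z_pred) for wi, zi in zip(w2, Zp))
    C = sum(wi * np.outer(si - x_pred, zi - z_pred) for wi, si, zi in zip(w2, X2, Zp))
    K = C @ np.linalg.inv(S)
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

# Toy usage: constant-velocity model, position-only measurements (e.g. from depth).
dt = 0.033
f = lambda s: np.concatenate([s[:3] + dt * s[3:], s[3:]])
h = lambda s: s[:3]
x, P = np.zeros(6), np.eye(6)
x, P = ukf_step(x, P, np.array([0.1, 0.0, 0.5]), f, h, 1e-3 * np.eye(6), 1e-2 * np.eye(3))
```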

In recent years, communication robots aiming to offer mental support to the elderly have attracted increasing attention. Dialogue systems consisting of two robots could provide the elderly with opportunities to hold longer conversations in care homes. In this study, we conducted an experiment comparing two types of scenario-based dialogue systems with different types of bodies—physical and virtual robots—to investigate the effects of embodying such dialogue systems. Forty elderly people aged 65 to 84 interacted with either an embodied desktop-sized humanoid robot or a computer graphics agent displayed on a monitor. The elderly participants were divided into groups depending on the success of the interactions. The results revealed that (i) in the group where the robots responded more successfully with the expected conversation flow, the elderly were more engaged in conversation with the physical robots than with the virtual robots, and (ii) the elderly in the group in which the robots responded successfully were more engaged in conversation with the physical robots than those in the group in which the robots gave ambiguous responses owing to unexpected utterances from the elderly. These results suggest that having a physical body is advantageous in promoting high engagement, and that this potential advantage depends on whether the system can handle the conversation flow. These findings provide new insight into the development of dialogue systems that assist the elderly in maintaining better mental health.

The behavior of an android robot face is difficult to predict because of the complicated interactions between the many and varied attributes (size, weight, and shape) of its system components. Therefore, system behavior should be analyzed after these components are assembled in order to improve performance. In this study, the three-dimensional displacement distributions of the facial surfaces of two android robots were measured for this analysis. The faces of three adult males were also analyzed for comparison. The visualized displacement distributions indicated that the androids lacked two main deformation features observed in the human upper face: curved flow lines and surface undulation, in which the upstream areas of the flow lines elevate. These features potentially characterize human-likeness. The findings suggest that innovative composite motion mechanisms that control both the flow lines and surface undulations are required to develop advanced androids capable of exhibiting more realistic facial expressions. Our comparative approach between androids and humans should improve the impression androids make in future real-life applications, e.g., as receptionists in hotels and banks, or as clerks in shops.

During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the distance between sonographer and patient and decreasing the duration of contact between them. The method was developed as a quick response to the COVID-19 pandemic. It considers the preferences of sonographers in terms of how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe, and it assesses the quality of the acquired US images in real time. This image feedback is used to automatically adjust the US probe contact force based on the quality of the current image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are fed to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to maintain a distance from the patient, because the sonographer no longer has to hold the probe and press it against the patient's body for prolonged periods. The SVM was trained using bovine and porcine biological tissue, and the system was then tested experimentally on plastisol phantom tissue. The experimental results show that the proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
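The control pattern described above (image-quality features in, a classifier verdict out, a small force correction applied) can be sketched in a few lines. The feature computations, dummy training data, and step size below are stand-ins of my own; the paper's specific correlation, compression, and noise measures are not reproduced, so treat this as an assumption-laden illustration rather than the authors' pipeline.

```python
# Sketch of SVM-gated force adjustment for US scanning (assumed features/thresholds).
import numpy as np
from sklearn.svm import SVC

def image_features(frame: np.ndarray) -> np.ndarray:
    """Placeholder features: neighbour correlation, dynamic range, noise estimate."""
    corr = np.corrcoef(frame[:, :-1].ravel(), frame[:, 1:].ravel())[0, 1]
    dynamic_range = frame.max() - frame.min()
    noise = np.std(np.diff(frame, axis=0))
    return np.array([corr, dynamic_range, noise])

# Train on labelled frames (1 = acceptable quality, 0 = poor); dummy data here,
# in place of features extracted from bovine/porcine tissue scans.
X_train = np.random.rand(200, 3)
y_train = np.random.randint(0, 2, 200)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def adjust_force(frame: np.ndarray, force: float, step: float = 0.5) -> float:
    """Increase contact force when quality is poor, ease off when it is good."""
    good = clf.predict(image_features(frame).reshape(1, -1))[0]
    return force - step if good else force + step
```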

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.

From the look of things, the next generation will be able to move around. Whoa.

[ MMSE ]

This method of loading and unloading AMRs without having them ever stop moving is so obvious that there must be some equally obvious reason why I've never seen it done in practice.

The LoadRunner is able to transport and sort parcels weighing up to 30 kilograms. This makes it the perfect luggage carrier for airports. These AI-driven go-carts can also work in concert as larger collectives to carry large, heavy and bulky objects. Every LoadRunner can also haul up to four passive trailers. Powered by four electric motors, the LoadRunner sharply brakes at just the right moment right in front of its destination and the payload slides from the robot onto the delivery platform.

[ Fraunhofer ] via [ Gizmodo ]

Ayato Kanada at Kyushu University wrote in to share this clever “dislocatable joint,” a way of combining continuum and rigid robots.

[ Paper ]

Thanks Ayato!

The DodgeDrone challenge revisits the popular dodgeball game in the context of autonomous drones. Specifically, participants will have to code navigation policies to fly drones between waypoints while avoiding dynamic obstacles. Drones are fast but fragile systems: as soon as something hits them, they will crash! Since objects will move towards the drone with different speeds and acceleration, smart algorithms are required to avoid them!

This could totally happen in real life, and we need to be prepared for it!

[ DodgeDrone Challenge ]

In addition to winning the Best Student Design Competition CREATIVITY Award at HRI 2021, this paper would also have won the Best Paper Title award, if that award existed.

[ Paper ]

Robots are traditionally bound by a fixed morphology during their operational lifetime, limiting them to adapting only their control strategies. Here we present the first quadrupedal robot that can morphologically adapt to different environmental conditions in outdoor, unstructured environments.

We show that the robot exploits its training to effectively transition between different morphological configurations, exhibiting substantial performance improvements over a non-adaptive approach. These demonstrated benefits of real-world morphological adaptation point to the potential for a new embodied way of incorporating adaptation into future robotic designs.

[ Nature ]

A drone video shot in a Minneapolis bowling alley was hailed as an instant classic. One Hollywood veteran said it “adds to the language and vocabulary of cinema.” One IEEE Spectrum editor said “hey that's pretty cool.”

[ Bryant Lake Bowl ]

It doesn't take a robot to convince me to buy candy, but I think if I buy candy from Relay it's a business expense, right?

[ RIS ]

DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.

[ DARPA ACE ]

Unitree Robotics has realized that the Empire needs to be overthrown!

[ Unitree ]

Windhover Labs, an emerging leader in open and reliable flight software and hardware, announces the upcoming availability of its first hardware product, a low cost modular flight computer for commercial drones and small satellites.

[ Windhover ]

As robots and autonomous systems are poised to become part of our everyday lives, the University of Michigan and Ford are opening a one-of-a-kind facility where they’ll develop robots and roboticists that help make lives better, keep people safer and build a more equitable society.

[ U Michigan ]

The adaptive robot Rizon combined with a new hybrid electrostatic and gecko-inspired gripping pad developed by Stanford BDML can manipulate bulky, non-smooth items in the most effort-saving way, which broadens the applications in retail and household environments.

[ Flexiv ]

Thanks Yunfan!

I don't know why anyone would want things to get MORE icy, but if you do for some reason, you can make it happen with a Husky.

Is winter over yet?

[ Clearpath ]

Skip ahead to about 1:20 to see a pair of Gita robots following a Spot following a human like a chain of lil’ robot ducklings.

[ PFF ]

Here are a couple of retro robotics videos: one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976 (!).

[ Tachi Lab ]

Thanks Fan!

If you missed Chad Jenkins' talk “That Ain’t Right: AI Mistakes and Black Lives” last time, here's another opportunity to watch from Robotics Today, and it includes a top notch panel discussion at the end.

[ Robotics Today ]

Since its founding in 1979, the Robotics Institute (RI) at Carnegie Mellon University has been leading the world in robotics research and education. In the mid 1990s, RI created NREC as the applied R&D center within the Institute with a specific mission to apply robotics technology in an impactful way on real-world applications. In this talk, I will go over numerous R&D programs that I have led at NREC in the past 25 years.

[ CMU ]

Ocean ecosystems have spatiotemporal variability and dynamic complexity that require the long-term deployment of autonomous underwater vehicles for data collection. A new generation of long-range autonomous underwater vehicles (LRAUVs), such as the Slocum glider and the Tethys-class AUV, has emerged with high endurance, long range, and energy-aware capabilities. These vehicles provide an effective solution for studying oceanic phenomena across multiple spatial and temporal scales. For these vehicles, the ocean environment exerts forces and moments from changing water currents that are generally on the same order of magnitude as the vehicle's operating velocity. It is therefore not practical to generate a simple trajectory from an initial location to a goal location in an uncertain ocean, as the vehicle can deviate significantly from the prescribed trajectory due to disturbances resulting from water currents. Since state estimation remains challenging in underwater conditions, feedback planning must incorporate state uncertainty, which can be framed as a stochastic energy-aware path planning problem. This article presents an energy-aware feedback planning method for an LRAUV that uses its kinematic model in an underwater environment under motion and sensor uncertainties. Our method uses ocean dynamics from a predictive ocean model to understand the water flow pattern and introduces a goal-constrained belief space to make the feedback plan synthesis computationally tractable. Energy-aware feedback plans for different water current layers are synthesized through sampling and ocean dynamics. The synthesized feedback plans provide strategies that drive the vehicle from an initial location in the environment toward the goal location. We validate our method through extensive simulations involving the Tethys vehicle’s kinematic model and incorporating actual ocean model prediction data.
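As a very rough illustration of the underlying idea, and not the authors' belief-space method, the sketch below scores candidate headings by propagating a simple kinematic model through a predicted current field and keeping the lowest energy-per-progress action. The flow model, cost proxy, and all numbers are invented for the example.

```python
# Crude sketch of energy-aware action selection under a current field (assumptions only).
import numpy as np

def current_at(p):
    """Stand-in for a predictive ocean model query (returns a 2D flow vector)."""
    return np.array([0.1 * np.sin(0.01 * p[1]), 0.05])

def best_heading(p, goal, speed=0.5, dt=10.0, n_headings=16):
    """Pick the heading with the lowest energy spent per metre of progress to the goal."""
    best, best_cost = None, np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, n_headings, endpoint=False):
        thrust = speed * np.array([np.cos(theta), np.sin(theta)])
        p_next = p + dt * (thrust + current_at(p))        # kinematics plus flow
        progress = np.linalg.norm(goal - p) - np.linalg.norm(goal - p_next)
        if progress <= 0:
            continue                                      # heading loses ground
        energy = dt * speed ** 2                          # crude effort proxy
        cost = energy / progress                          # energy per metre gained
        if cost < best_cost:
            best, best_cost = theta, cost
    return best

print(best_heading(np.array([0.0, 0.0]), np.array([500.0, 200.0])))
```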

Most of what we cover in the Human Robot Interaction (HRI) space involves collaboration, because collaborative interactions tend to be productive, positive, and happy. Yay! But sometimes, collaboration is not what you want. Sometimes, you want competition.

Competition between humans and robots doesn’t have to be a bad thing, in the same way that competition between humans and humans doesn’t have to be a bad thing. There are all kinds of scenarios in which humans respond favorably to competition, and exercise is an obvious example.

Studies have shown that humans can perform significantly better when they’re exercising competitively as opposed to when they’re exercising individually. And while researchers have looked at whether robots can be effective exercise coaches (they can be), there hasn’t been a lot of exploration of physical robots actually competing directly with humans. Roboticists from the University of Washington decided to put adversarial exercise robots to the test, and they did it by giving a PR2 a giant foam sword. Awesome.

This exercise game matches a PR2 with a human in a zero-sum competitive fencing game with foam swords. Expecting the PR2 to actually be a competitive fencer isn’t realistic because, like, it’s a PR2. Instead, the objective of the game is for the human to keep their foam sword within a target area near the PR2 while also avoiding the PR2’s low-key sword-waving. A VR system allows the user to see the target area, while also giving the system a way to track the user’s location and pose.

Looks like fun, right? It’s also exercise, at least in the sense that the user’s heart rate nearly doubled over their resting heart rate during the highest scoring game. This is super preliminary research, though, and there’s still a lot of work to do. It’ll be important to figure out how skilled a competitive robot should be in order to keep providing a reasonable challenge to a human who gradually improves over time, while also being careful to avoid generating any negative reactions. For example, the robot should probably not beat you over the head with its foam sword, even if that’s a highly effective strategy for getting your heart rate up.

Competitive Physical Human-Robot Game Play, by Boling Yang, Xiangyu Xie, Golnaz Habibi, and Joshua R. Smith from the University of Washington and MIT, was presented as a late-breaking report at the ACM/IEEE International Conference on Human-Robot Interaction.

COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real-time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates overall enthusiastic and engaging reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to support older individuals and PwD at the Schizophrenia Research Foundation (SCARF) in Chennai, India. A procedure for deployment addressing challenges in cultural acceptance, end-user acclimatization and resource allocation is further introduced. Results indicate strong promise to stimulate human-robot psychosocial interaction through the hybrid-face robotic system. Future work is targeting deployment for telemedicine to mitigate the mental health impact of COVID-19 on older adults and PwD in both LMICs and higher income regions.

Representations of gender in new technologies like the Siri, Pepper, and Sophia robotic assistants, as well as the commodification of features associated with gender on platforms like Instagram, inspire questions about how and whether robotic tools can have gender and what it means to people if they do. One possible response to this is through the artistic creation of dance performance. This paper reports on one such project where, along the route to this inquiry, the creation of machine augmentation of both the performer and audience member was necessary to communicate the artistic ideas grappled with therein. Thus, this article describes the presentation of Babyface, a machine-augmented, participatory contemporary dance performance. This work is a reaction to feminized tropes in popular media and modern technology, and establishes a parallel between the ways that women and machines are talked about, treated, and, in the case of machines, designed to look and behave. This paper extends prior reports on the creation of this piece and its accompanying devices to describe extensions with audience member participation, and reflects on the responses of these audience members. These fabricated elements, alongside the actions of the performer and a soundscape that quotes statements made by real “female” robots, create an otherworldly, sad cyborg character that causes viewers to question their assumptions about and pressures on the feminine ideal.

This paper introduces and validates a real-time dynamic predictive model based on a neural network approach for soft continuum manipulators. The presented model provides a real-time prediction framework using neural-network-based strategies and continuum mechanics principles. A time-space integration scheme is employed to discretize the continuous dynamics and decouple the dynamic equations for translation and rotation for each node of a soft continuum manipulator. The resulting architecture is then used to develop distributed prediction algorithms based on recurrent neural networks. The proposed RNN-based parallel predictive scheme does not rely on computationally intensive algorithms; therefore, it is useful in real-time applications. Furthermore, simulations illustrate the approach's performance on soft continuum elastica, and the approach is also validated through an experiment on a magnetically actuated soft continuum manipulator. The results demonstrate that the presented model can outperform classical modeling approaches such as the Cosserat rod model, while also showing promise for use in practice.
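To make the general shape of such a predictor concrete, here is a minimal recurrent-network sketch in PyTorch that maps an actuation history to per-node translations and rotations. The layer sizes, input choice, and output layout are assumptions for illustration; the paper's time-space discretization and parallel, distributed prediction structure are not reproduced.

```python
# Minimal RNN sketch for per-node state prediction (assumed shapes, not the paper's model).
import torch
import torch.nn as nn

class NodeStatePredictor(nn.Module):
    def __init__(self, n_nodes=10, input_dim=3, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden, batch_first=True)
        # 3 translation + 3 rotation components per node (illustrative choice).
        self.head = nn.Linear(hidden, n_nodes * 6)

    def forward(self, actuation_seq):
        # actuation_seq: (batch, time, input_dim), e.g. magnetic actuation commands.
        out, _ = self.rnn(actuation_seq)
        return self.head(out[:, -1])   # predicted node states at the next step

model = NodeStatePredictor()
pred = model(torch.randn(1, 50, 3))    # one 50-step actuation history -> (1, 60)
```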

This paper presents the design, fabrication, and operation of a soft robotic compression device that is remotely powered by laser illumination. We combined the rapid and wireless response of hybrid nanomaterials with state-of-the-art microengineering techniques to develop machinery that can apply physiologically relevant mechanical loading. The passive hydrogel structures that constitute the compliant skeleton of the machines were fabricated using a single-step in situ polymerization process and directly incorporated around the actuators without further assembly steps. Experimentally validated computational models guided the design of the compression mechanism. We incorporated a cantilever beam into the prototype for lifetime monitoring of the mechanical properties of cell clusters on optical microscopes. The mechanical and biochemical compatibility of the chosen materials with living cells, together with the on-site manufacturing process, enables seamless interfacing of soft robotic devices with biological specimens.

Robots are well known to be specialists, doing best when they’re designed for one very specific task without much of an expectation that they’ll do anything else. This is fine, as long as you’re OK with getting a new specialist robot every time you want something different done robotically. Making generalist robots is hard, but what’s less hard is enabling a generalist to easily adapt into different kinds of specialists, which we humans do all the time when we use tools.

While we’ve written about tool using robots in the past, roboticists at the MIT Media Lab have taken inspiration from the proud and noble hermit crab to design a robot that’s able to effortlessly transition from a total generalist to highly specialized and back again, simply by switching in and out of clever, custom made mechanical shells.

Image: Ken Nakagaki. MIT’s HERMITS combine small robotic cubes with mechanical shells.

HERMITS, which almost certainly does not stand for Highly Extendable Robotic Modular Interactive Toio Shells, even though I’m going to pretend that it does (#backronym), are based around Sony’s little Toio robots. We wrote about Toio a few years ago—they’re two-wheeled robotic cubes that can localize themselves based on infrared patterns in a special mat that they zip around on, allowing them to interact with each other and with other objects through a centralized controller. Toios are designed to be modified, but mostly just as toys, which apparently doesn’t take them anywhere close to their full potential.

Ken Nakagaki, a roboticist at the MIT Media Lab, made a minor modification to the Toio robots by adding a little servo motor that can poke a pin up out of the robot’s top. It’s just a small change, but it enables all kinds of new things, since it allows the robots to drive inside of custom shells and dock with them, just like a hermit crab. But unlike any hermit crab I’ve ever seen, these shells can be endowed with clever mechanical transmission systems that leverage the robots’ motors to give them highly specialized capabilities on-demand.
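Just to illustrate how simple the docking choreography could be in software, here is a hypothetical sketch. The robot object and its drive_to / raise_pin / lower_pin helpers are placeholders of my own, not Sony's Toio API or the HERMITS codebase, and the poses are made up.

```python
# Hypothetical HERMITS-style docking routine; the control helpers are placeholders.
import time

def dock_with_shell(robot, shell_pose, approach_offset=30):
    """Drive under a passive shell, then raise the pin to latch into it."""
    x, y, heading = shell_pose
    robot.drive_to(x - approach_offset, y, heading)  # line up with the shell opening
    robot.drive_to(x, y, heading)                    # slide inside the shell
    robot.raise_pin()                                # engage the shell's coupling
    time.sleep(0.2)                                  # allow the latch to seat

def swap_shell(robot, old_shell_park_pose, new_shell_pose):
    """Park the current shell, release it, then pick up a different one."""
    robot.drive_to(*old_shell_park_pose)
    robot.lower_pin()                                # release the old shell
    dock_with_shell(robot, new_shell_pose)
```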

This concept is really cool—with just a few generalist mobile bases, you can make as many specialist shells as you want, most of which are passive without any kind of electronics inside, making them relatively easy to produce using a 3D printer. The HERMITS can then swap in and out of shells whenever they need to. You can scale up the system by adding more HERMITS if you want, but the important thing is that you’re investing in additional generalist capability, which is far more efficient than specialists that will just sit around not doing anything most of the time.

Image: MIT. Future research directions for MIT’s HERMITS.

The researchers have been able to control up to 70 robots at once, using 14 Raspberry Pis, which is perhaps not the most streamlined approach but definitely reinforces how fundamentally low cost and accessible the HERMITS system is. At the same time, there’s a massive amount of future potential, as shown in the figure above, from new form factors to fabrication and assembly to shells with more sophisticated embedded mechanisms. There’s way more detail on the HERMITS website, and if you want a Toio of your own, you can find a kit online for about $270.

HERMITS: Dynamically Reconfiguring the Interactivity of Self-Propelled TUIs with Mechanical Shell Add-ons, by Ken Nakagaki, Joanne Leong, Jordan L Tappa, João Wilbert, and Hiroshi Ishii from MIT, was presented at UIST 2020.
