
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

The Shadow Robot team couldn't resist! Our operator, Joanna, is using the Shadow Teleoperation System, which, fun and games aside, can help those in difficult, dangerous, and distant jobs.

Shadow could challenge this MIT Jenga-playing robot, but I bet they wouldn't win:

[ Shadow Robot ]

Digit is gradually stomping the Agility Robotics logo into a big grassy field fully autonomously.

[ Agility Robotics ]

This is a pretty great and very short robotic magic show.

[ Mario the Magician ]

A research team at the Georgia Institute of Technology has developed a modular solution for drone delivery of larger packages without the need for a complex fleet of drones of varying sizes. By allowing teams of small drones to collaboratively lift objects using an adaptive control algorithm, the strategy could allow a wide range of packages to be delivered using a combination of several standard-sized vehicles.
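Out of curiosity about what adaptive collaborative lifting might look like in its simplest form, here is a toy 1D sketch: several drones share the thrust for a payload of unknown mass, and an adaptive estimate of the combined mass is updated from the altitude error. The model, gains, and equal-sharing rule are all illustrative assumptions on my part, not the Georgia Tech algorithm.

```python
import numpy as np

# Toy adaptive load sharing: N identical drones jointly hover an unknown
# payload. Each commands 1/N of the total thrust; the mass estimate m_hat
# is adapted from the altitude error (all values are hypothetical).

N, g, dt = 4, 9.81, 0.01
m_true = 2.0 + N * 0.5          # payload (unknown to controller) + drones, kg
m_hat, gamma = 1.0, 1.0         # initial mass estimate, adaptation gain
kp, kd = 10.0, 6.0              # PD gains on altitude error

z, zd, z_ref = 0.0, 0.0, 1.0    # altitude, vertical speed, setpoint (m)
for _ in range(3000):
    e, ed = z_ref - z, -zd
    total_thrust = m_hat * g + kp * e + kd * ed   # feedforward + PD
    u_per_drone = total_thrust / N                # equal sharing across drones
    m_hat += gamma * e * dt                       # integrate error into estimate
    zdd = (N * u_per_drone - m_true * g) / m_true # 1D vertical dynamics
    zd += zdd * dt
    z += zd * dt
print(f"altitude: {z:.3f} m, mass estimate: {m_hat:.2f} kg (true {m_true} kg)")
```

At steady state the altitude error drives the mass estimate until the feedforward term alone balances gravity, which is the basic mechanism that lets a team lift payloads of unknown weight.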

[ GA Tech ]

I've seen this done using vision before, but Flexiv's Rizon 4s can keep a ball moving along a specific trajectory using only force sensing and control.

[ Flexiv ]

Thanks Yunfan!

This combination of a 3D aerial projection system and a sensing interface can be used as an interactive and intuitive control system for things like robot arms, but in this case, it's being used to make simulated pottery. Much less messy than the traditional way of doing it.

More details on Takafumi Matsumaru's work at the Bio-Robotics & Human-Mechatronics Laboratory at Waseda University are available at the link below.

[ BLHM ]

U.S. Vice President Kamala Harris called astronauts Shannon Walker and Kate Rubins on the ISS, and they brought up Astrobee, at which point Shannon reached over and ripped Honey right off of her charging dock to get her on camera.

[ NASA ]

Here's a quick three-minute update on Perseverance and Ingenuity from JPL.

[ Mars 2020 ]

Rigid grippers used in existing aerial manipulators require precise positioning to achieve successful grasps and transmit large contact forces that may destabilize the drone. This limits the grasping speed and prevents “dynamic grasping,” where the drone attempts to grasp an object while moving. Biological systems (e.g., birds), on the other hand, rely on compliant and soft parts to dampen contact forces and compensate for grasping inaccuracy, enabling impressive feats. This paper presents the first prototype of a soft drone—a quadrotor whose traditional (i.e., rigid) landing gear is replaced with a soft tendon-actuated gripper to enable aggressive grasping.

[ MIT ]

In this video we present results from a field deployment inside the Løkken underground pyrite mine in Norway. The mine was operational from 1654 to 1987 and contains narrow but long corridors, alongside vast rooms and challenging vertical stopes. In this field study we evaluated selected autonomous exploration and visual search capabilities of a subset of the aerial robots of Team CERBERUS towards the goal of complete subterranean autonomy.

[ Team CERBERUS ]

Here's what you can do with a 1,000 FPS projector and a high-speed tracking system.

[ Ishikawa Group ]

ANYbotics’ collaboration with BASF, one of the largest global chemical manufacturers, displays the efficiency, quality, and scalability of robotic inspection and data-collection capabilities in complex industrial environments.

[ ANYbotics ]

Does your robot arm need a stylish jacket?

[ Fraunhofer ]

Trossen Robotics unboxes a Unitree A1, and it's actually an unboxing where they have to figure out everything from scratch.

[ Trossen ]

Robots have learned to drive cars, assist in surgeries, and vacuum our floors. But can they navigate the unwritten rules of a busy sidewalk? Until they can, robotics experts Leila Takayama and Chris Nicholson believe, robots won’t be able to fulfill their immense potential. In this conversation, Chris and Leila explore the future of robotics and the role open source will play in it.

[ Red Hat ]

Christoph Bartneck's keynote at the 6th Joint UAE Symposium on Social Robotics, focusing on what roles robots can play during the COVID crisis and why so many social robots fail in the market.

[ HIT Lab ]

Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and not in others, such as criminal sentencing. As algorithms replace human decision-makers, HAI-EIS fellow Kathleen Creel argues that arbitrariness at scale is morally and legally problematic. In this HAI seminar, she explains how the heart of this moral issue relates to domination and a lack of sufficient opportunity for autonomy. It relates in interesting ways to the moral wrong of discrimination. She proposes technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm identified.

[ Stanford HAI ]

Sawyer B. Fuller speaks on Autonomous Insect-Sized Robots at the UC Berkeley EECS Colloquium series.

Sub-gram (insect-sized) robots have enormous potential that is largely untapped. From a research perspective, their extreme size, weight, and power (SWaP) constraints also force us to reimagine everything from how they compute their control laws to how they are fabricated. These questions are the focus of the Autonomous Insect Robotics Laboratory at the University of Washington. I will discuss potential applications for insect robots and recent advances from our group. These include the first wireless flights of a sub-gram flapping-wing robot that weighs barely more than a toothpick. I will describe efforts to expand its capabilities, including the first multimodal ground-flight locomotion, the first demonstration of steering control, and how to find chemical plume sources by integrating the smelling apparatus of a live moth. I will also describe a backpack for live beetles with a steerable camera and conceptual design of robots that could scale all the way down to the “gnat robots” first envisioned by Flynn & Brooks in the ‘80s.

[ UC Berkeley ]

Thanks Fan!

Joshua Vander Hook, Computer Scientist, NIAC Fellow, and Technical Group Supervisor at NASA JPL, presents an overview of the AI Group(s) at JPL, and recent work on single and multi-agent autonomous systems supporting space exploration, Earth science, NASA technology development, and national defense programs.

[ UMD ]

Food products are usually difficult for robots to handle because of their large variations in shape, size, softness, and surface conditions. Ideally, a single robotic gripper would handle as many food products as possible. In this study, a scooping-binding robotic gripper is proposed to achieve this goal. The gripper was constructed using a pneumatic parallel actuator and two identical scooping-binding mechanisms. Each mechanism consists of a thin scooping plate and multiple rubber strings for binding. When grasping an object, the mechanisms actively make contact with the environment for scooping, and the object weight is mainly supported by the scooping plate. The binding strings stabilize the grasp by wrapping around the object. Therefore, the gripper can perform high-speed pick-and-place operations. Contact analysis was conducted using a simple beam model and a finite element model that were experimentally validated. The tension property of the binding string was characterized, and an analytical model was established to predict the binding force based on object geometry and binding displacement. Finally, handling tests on 20 food items, including products with thin profiles and slippery surfaces, were performed. The scooping-binding gripper succeeded in handling all items with a takt time of approximately 4 s, showing potential for real applications in the food industry.

AI is endowing robots, autonomous vehicles, and countless other forms of tech with new abilities and levels of self-sufficiency. Yet these models faithfully “make decisions” based on whatever data is fed into them, which could have dangerous consequences. For instance, if an autonomous car is driving down a highway and a sensor picks up a confusing signal (e.g., a paint smudge that is incorrectly interpreted as a lane marking), this could cause the car to swerve into another lane unnecessarily.

But in the ever-evolving world of AI, researchers are developing new ways to address challenges like this. One group of researchers has devised a new algorithm that allows the AI model to account for uncertain data, which they describe in a study published February 15 in IEEE Transactions on Neural Networks and Learning Systems.

“While we would like robots to work seamlessly in the real world, the real world is full of uncertainty,” says Michael Everett, a post-doctoral associate at MIT who helped develop the new approach. “It's important for a system to be aware of what it knows and what it is unsure about, which has been a major challenge for modern AI.”

His team focused on a type of AI called reinforcement learning (RL), whereby the model tries to learn the “value” of taking each action in a given scenario through trial and error. They developed a secondary algorithm, called Certified Adversarial Robustness for deep RL (CARRL), that can be built on top of an existing RL model.

“Our key innovation is that rather than blindly trusting the measurements, as is done today [by AI models], our algorithm CARRL thinks through all possible measurements that could have been made, and makes a decision that considers the worst-case outcome,” explains Everett.

In their study, the researchers tested CARRL across several different tasks, including collision avoidance simulations and Atari Pong. For younger readers who may not be familiar with it, Atari Pong is a classic computer game in which an electronic paddle is used to direct a ping-pong ball on the screen. In the test scenario, CARRL helped move the paddle slightly higher or lower to compensate for the possibility that the ball could approach at a slightly different point than what the input data indicated. All the while, CARRL would try to ensure that the ball would make contact with at least some part of the paddle.
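To make the worst-case idea concrete, here is a minimal sketch of robust action selection in the spirit of CARRL. Everything in it is an illustrative assumption: the Q-function is a toy linear model, and the worst case is approximated by sampling perturbed observations, whereas CARRL itself computes certified bounds on the Q-values rather than sampling.

```python
import numpy as np

def q_values(obs, weights):
    """Stand-in for a trained deep Q-network: a toy linear Q-function."""
    return weights @ obs  # shape: (n_actions,)

def robust_action(obs, weights, eps, n_samples=256, seed=0):
    """Pick the action with the best worst-case Q-value over an eps-ball
    of plausible observations (sampled here; CARRL uses certified bounds)."""
    rng = np.random.default_rng(seed)
    worst_q = np.full(weights.shape[0], np.inf)
    for _ in range(n_samples):
        perturbed = obs + rng.uniform(-eps, eps, size=obs.shape)
        worst_q = np.minimum(worst_q, q_values(perturbed, weights))
    return int(np.argmax(worst_q))

obs = np.array([0.2, -0.1, 0.05])    # e.g., noisy ball position/velocity
W = np.array([[ 1.0,  0.5, -0.2],    # hypothetical Q-weights for three
              [-0.3,  0.8,  0.1],    # paddle actions: up / stay / down
              [ 0.4, -0.6,  0.9]])
print(robust_action(obs, W, eps=0.1))  # index of the most robust action
```

The point of the max-min structure is exactly what Everett describes: rather than trusting the single measurement it received, the agent commits only to actions that remain acceptable under every measurement it plausibly could have received.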

Gif: MIT Aerospace Controls Laboratory
In a perfect world, the information that an AI model is fed would be accurate all the time, and the AI model would perform well (left). But in some cases, the AI may be given inaccurate data, causing it to miss its targets (middle). The new algorithm CARRL helps AIs account for uncertainty in their data inputs, yielding better performance when relying on poor data (right).

Across all test scenarios, the RL model was better at compensating for potentially inaccurate or “noisy” data with CARRL than without it.

But the results also show that, as with humans, too much self-doubt and uncertainty can be unhelpful. In the collision avoidance scenario, for example, indulging in too much uncertainty caused the main moving object in the simulation to avoid both the obstacle and its goal. “There is definitely a limit to how ‘skeptical’ the algorithm can be without becoming overly conservative,” Everett says.

This research was funded by Ford Motor Company, but Everett notes that it could be applicable to many other commercial applications requiring safety-aware AI, including the aerospace, healthcare, and manufacturing domains.

“This work is a step toward my vision of creating ‘certifiable learning machines’—systems that can discover how to explore and perform in the real world on their own, while still having safety and robustness guarantees,” says Everett. “We'd like to bring CARRL into robotic hardware while continuing to explore the theoretical challenges at the interface of robotics and AI.”

Wrist disability caused by a range of diseases or injuries hinders a patient’s capability to perform activities of daily living (ADL). Rehabilitation devices for wrist motor function have gained popularity among clinics and researchers due to the convenience of self-rehabilitation. The inherent compliance and light weight of soft robots enable safe human-robot interaction, providing new possibilities for developing wearable devices. Compared with conventional apparatus, soft robotic wearable rehabilitation devices show advantages in flexibility, cost, and comfort. In this work, a compact and low-profile soft robotic wrist brace was proposed by directly integrating eight soft origami-patterned actuators onto a commercially available wrist brace. The linear motion of the actuators was defined by their origami pattern. The extensions of the actuators were constrained by the brace fabrics, driving the motions of the wrist joint, i.e., extension/flexion and ulnar/radial deviation. The soft actuators were made of ethylene-vinyl acetate by blow molding, achieving mass-production capability, low cost, and high repeatability. The design and fabrication of the soft robotic wrist brace are presented in this work. Experiments on range of motion, output force, wearing-position adaptivity, and performance under disturbance were carried out and the results analyzed. The modular soft actuator approach to the design and fabrication of the soft robotic wrist brace has wide application potential in wearable devices.

Two of the major revolutions of this century are artificial intelligence and robotics. These technologies are penetrating all disciplines and faculties at a very rapid pace. The application of these technologies in medicine, specifically in the context of COVID-19, is paramount. This article briefly reviews the commonly applied protocols in the health care system and provides a perspective on improving the efficiency and effectiveness of the current system. This article is not meant to provide a literature review of the current technology but rather offers the author's personal perspective on what could happen in the ideal situation.

The exponentially increasing advances in robotics and machine learning are facilitating the transition of robots from being confined to controlled industrial spaces to performing novel everyday tasks in domestic and urban environments. In order to make the presence of robots safe as well as comfortable for humans, and to facilitate their acceptance in public environments, they are often equipped with social abilities for navigation and interaction. Socially compliant robot navigation is increasingly being learned from human observations or demonstrations. We argue that these techniques, which typically aim to mimic human behavior, do not guarantee fair behavior. As a consequence, social navigation models can replicate, promote, and amplify societal unfairness, such as discrimination and segregation. In this work, we investigate a framework for diminishing bias in social robot navigation models so that robots are equipped with the capability to plan as well as adapt their paths based on both physical and social demands. Our proposed framework consists of two components: learning, which incorporates social context into the learning process to account for safety and comfort; and relearning, to detect and correct potentially harmful outcomes before they occur. We provide both technological and societal analysis using three diverse case studies in different social scenarios of interaction. Moreover, we present ethical implications of deploying robots in social environments and propose potential solutions. Through this study, we highlight the importance of and advocate for fairness in human-robot interactions in order to promote more equitable social relationships, roles, and dynamics and consequently positively influence our society.

Over the past few years, we’ve seen 3D printers used in increasingly creative ways. There’s been a realization that fundamentally, a 3D printer is a full-fledged, multi-axis robotic manipulation system—which is an extraordinarily versatile thing to have in your home. Rather than just printing static objects, folks are now using 3D printers as pick-and-place systems to manufacture drones, and as custom filament printers to make objects out of programmable materials, to highlight just two examples.

In an update to some research first presented at the end of 2019, researchers from Meiji University in Japan have developed one of the cleverest 3D printer enhancements that we’ve yet seen. Called Functgraph, it turns a conventional 3D printer into a “personal factory automation” system by printing and manipulating the tools required to do complex tasks entirely on the print bed. A paper on Functgraph, by Yuto Kuroki and Keita Watanabe, was presented at the Conference on 4D and Functional Fabrication 2020 in October.

As far as I can tell, this is a bone-stock 3D printer with the exception of two modifications, both of which it presumably printed itself. The first is a tool holder on the print head, and the second is a tool release mechanism that sits off to the side. These two things, taken together, give Functgraph access to custom tools limited only by what it can print; and when used in combination with 3D-printed objects designed to interact with these tools (support structures with tool interfaces to snap them off, for example), it really is possible to print, assemble, manipulate, and actuate entire small-scale factories.

Yuto Kuroki, first author on the paper describing Functgraph, describes his inspiration for some of the particular tasks shown in the demo video:

The future that Functgraph aims for is a new platform that downloads apps the way a smartphone does and provides physical support in the real world: the realization of personal factory automation.

When it comes to sandwich apps, there are many ways to look at recipes, but in the end, humans have to make them. I made a prototype based on the idea of how easy it would be if I could wake up in the morning saying "OK Google, make a breakfast sandwich."

Regarding the rabbit factory, it’s an application that mass-produces and packs rabbit figures. The box on the right is an interior box to prevent the product from slipping, and the box on the left is an exterior box that is placed in the store and catches the eyes of customers. The manufactured figure is packed as-is, ready for shipment. In this video, two are packed in a row, so in principle it is possible to make hundreds or thousands of them in a row.

The reason for making a prototype of an app to make a car is a strange story, but the idea is that if you send a 3D printer to a remote place like space, it will be able to generate what you need on the spot. Even if you’re exploring the Moon and your car breaks, I think that you can procure it on the spot again if you have a 3D printer, even without specialized knowledge, dedicated machines, and human hands. This research shows that 3D printers can realize individual desires and purposes unattended and automatically. I think that 3D printers can truly evolve into ‘machines that can do anything’ with Functgraph.

The field of musical robotics presents an interesting case study of the intersection between creativity and robotics. While the potential for machines to express creativity represents an important issue in the field of robotics and AI, this subject is especially relevant in the case of machines that replicate human activities that are traditionally associated with creativity, such as music making. There are several different approaches that fall under the broad category of musical robotics, and creativity is expressed differently based on the design and goals of each approach. By exploring elements of anthropomorphic form, capacity for sonic nuance, control, and musical output, this article evaluates the locus of creativity in six of the most prominent approaches to musical robots, including: 1) nonspecialized anthropomorphic robots that can play musical instruments, 2) specialized anthropomorphic robots that model the physical actions of human musicians, 3) semi-anthropomorphic robotic musicians, 4) non-anthropomorphic robotic instruments, 5) cooperative musical robots, and 6) individual actuators used for their own sound production capabilities.

The assessment of rehabilitation robot safety is a vital aspect of the development process, which is often experienced as difficult. There are gaps in best practices and knowledge to ensure safe usage of rehabilitation robots. Currently, safety is commonly assessed by monitoring the occurrence of adverse events. The aim of this article is to explore how the safety of rehabilitation robots can be assessed early in the development phase, before they are used with patients. We suggest a uniform approach for safety validation of robots closely interacting with humans, based on safety skills and validation protocols. Safety skills are an abstract representation of the ability of a robot to reduce a specific risk or deal with a specific hazard. They can be implemented in various ways, depending on the application requirements, which enables the use of a single safety skill across a wide range of applications and domains. Safety validation protocols have been developed that correspond to these skills and consider domain-specific conditions. This gives robot users and developers concise testing procedures to prove the mechanical safety of their robotic system, even when the applications are in domains with a lack of standards and best practices, such as the healthcare domain. Based on knowledge about adverse events occurring in rehabilitation robot use, we identified multi-directional excessive forces at the soft-tissue and musculoskeletal levels as the most relevant hazards for rehabilitation robots and related them to four safety skills, providing a concrete starting point for safety assessment of rehabilitation robots. We further identified a number of gaps which need to be addressed in the future to pave the way for more comprehensive guidelines for rehabilitation robot safety assessments. Predominantly, besides new developments of safety-by-design features, there is a strong need for reliable measurement methods as well as acceptable limit values for human-robot interaction forces at both the skin and joint levels.

Tracking the 6D pose and velocity of objects is a fundamental requirement for modern robotic manipulation tasks. This paper proposes a 6D object pose tracking algorithm, called MaskUKF, that combines deep object segmentation networks and depth information with a serial Unscented Kalman Filter to track the pose and velocity of an object in real time. MaskUKF achieves, and in most cases surpasses, state-of-the-art performance on the YCB-Video pose estimation benchmark without the need for expensive ground-truth pose annotations at training time. Closed-loop control experiments on the iCub humanoid platform in simulation show that joint pose and velocity tracking helps achieve higher precision and reliability than one-shot deep pose estimation networks. A video of the experiments is available as Supplementary Material.
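For readers unfamiliar with the filtering machinery, here is a minimal unscented Kalman filter sketch for a deliberately simplified problem: tracking a 1D position and velocity from noisy position measurements. The state, motion model, and noise values are illustrative assumptions, not MaskUKF's (which fuses segmentation masks and depth into a 6D pose filter), but the sigma-point predict/update cycle is the same basic idea.

```python
import numpy as np

def sigma_points(mu, P, kappa=1.0):
    """Standard symmetric sigma-point set with weights summing to one."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(mu, P, z, dt=0.1, q=1e-3, r=1e-2):
    f = lambda x: np.array([x[0] + dt * x[1], x[1]])  # constant-velocity motion
    h = lambda x: np.array([x[0]])                    # only position is measured
    X, w = sigma_points(mu, P)
    Xp = np.array([f(x) for x in X])                  # propagate sigma points
    mu_p = w @ Xp
    P_p = (Xp - mu_p).T @ np.diag(w) @ (Xp - mu_p) + q * np.eye(2)
    Zp = np.array([h(x) for x in Xp])                 # predicted measurements
    z_hat = w @ Zp
    S = (Zp - z_hat).T @ np.diag(w) @ (Zp - z_hat) + r * np.eye(1)
    C = (Xp - mu_p).T @ np.diag(w) @ (Zp - z_hat)
    K = C @ np.linalg.inv(S)                          # Kalman gain
    return mu_p + K @ (z - z_hat), P_p - K @ S @ K.T

rng = np.random.default_rng(1)
mu, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(50):                                   # object moves at 0.5 units/s
    z = np.array([0.05 * t + rng.normal(0, 0.1)])
    mu, P = ukf_step(mu, P, z)
print("estimated position/velocity:", mu)
```

Note that the velocity estimate emerges even though only position is measured, which is the same property that lets MaskUKF report object velocity for closed-loop control.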

In recent years, communication robots aiming to offer mental support to the elderly have attracted increasing attention. Dialogue systems consisting of two robots could provide the elderly with opportunities to hold longer conversations in care homes. In this study, we conducted an experiment to compare two types of scenario-based dialogue systems with different types of bodies—physical and virtual robots—to investigate the effects of embodying such dialogue systems. Forty elderly people aged 65 to 84 interacted with either an embodied desktop-sized humanoid robot or a computer-graphics agent displayed on a monitor. The elderly participants were divided into groups depending on the success of the interactions. The results revealed that (i) in the group where the robots responded more successfully with the expected conversation flow, the elderly were more engaged in the conversation with the physical robots than with the virtual robots, and (ii) the elderly in the group in which the robots responded successfully were more engaged in the conversation with the physical robots than those in the group in which the robots responded with ambiguous responses owing to unexpected utterances from the elderly. These results suggest that having a physical body is advantageous in promoting high engagement, and that this potential advantage depends on whether the system can handle the conversation flow. These findings provide new insight into the development of dialogue systems for assisting the elderly in maintaining better mental health.

The behavior of an android robot face is difficult to predict because of the complicated interactions among the many and varied attributes (size, weight, and shape) of its system components. Therefore, the system behavior should be analyzed after these components are assembled in order to improve their performance. In this study, the three-dimensional displacement distributions of the facial surfaces of two android robots were measured for analysis. The faces of three adult males were also analyzed for comparison. The visualized displacement distributions indicated that the androids lacked two main deformation features observed in the human upper face: curved flow lines and surface undulation, where the upstream areas of the flow lines elevate. These features potentially characterize human-likeness. These findings suggest that innovative composite motion mechanisms that control both the flow lines and surface undulations are required to develop advanced androids capable of exhibiting more realistic facial expressions. Our comparative approach between androids and humans will improve the impressions androids make in future real-life application scenarios, e.g., receptionists in hotels and banks, and clerks in shops.

During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the sonographer/patient distance and decreasing the contact duration between them. This method was developed as a quick response to the COVID-19 pandemic. It considers the preferences of sonographers in terms of how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe. The system assesses the quality of the acquired US images in real time, and this image feedback is used to automatically adjust the US probe contact force based on the quality of the image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are input to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to maintain a distance from the patient because the sonographer does not have to hold the probe and press it against the patient's body for any prolonged time. The SVM was trained using bovine and porcine biological tissue; the system was then tested experimentally on plastisol phantom tissue. The results of the experiments show that our proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
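Here is a hedged sketch of the force-adjustment loop described above: three image-quality features (correlation, compression, noise) feed an SVM, and the contact-force setpoint is nudged based on its verdict. The feature extraction, training labels, force gains, and limits are all illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Fake training set: rows = [correlation, compression, noise], label 1 = good image.
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)  # toy rule standing in for real labels

clf = SVC(kernel="rbf").fit(X_train, y_train)

def adjust_force(force_n, features, step_n=0.5, f_min=2.0, f_max=15.0):
    """Increase probe force when the image is classified as poor, else hold.
    The step size and force limits are hypothetical safety bounds."""
    good = clf.predict(np.atleast_2d(features))[0] == 1
    if not good:
        force_n = min(force_n + step_n, f_max)   # press slightly harder
    return max(force_n, f_min)

force = 5.0
for frame_features in rng.uniform(0, 1, size=(10, 3)):  # simulated image frames
    force = adjust_force(force, frame_features)
print(f"commanded force: {force:.1f} N")
```

Because the classifier only gates a small force increment per frame, the loop stays fast enough to run inside a robot controller, which matches the paper's emphasis on real-time quality assessment.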

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.

From the look of things, the next generation will be able to move around. Whoa.

[ MMSE ]

This method of loading and unloading AMRs without having them ever stop moving is so obvious that there must be some equally obvious reason why I've never seen it done in practice.

The LoadRunner is able to transport and sort parcels weighing up to 30 kilograms. This makes it the perfect luggage carrier for airports. These AI-driven go-carts can also work in concert as larger collectives to carry large, heavy and bulky objects. Every LoadRunner can also haul up to four passive trailers. Powered by four electric motors, the LoadRunner sharply brakes at just the right moment right in front of its destination and the payload slides from the robot onto the delivery platform.

[ Fraunhofer ] via [ Gizmodo ]

Ayato Kanada at Kyushu University wrote in to share this clever “dislocatable joint,” a way of combining continuum and rigid robots.

[ Paper ]

Thanks Ayato!

The DodgeDrone challenge revisits the popular dodgeball game in the context of autonomous drones. Specifically, participants will have to code navigation policies to fly drones between waypoints while avoiding dynamic obstacles. Drones are fast but fragile systems: as soon as something hits them, they will crash! Since objects will move towards the drone with different speeds and accelerations, smart algorithms are required to avoid them!

This could totally happen in real life, and we need to be prepared for it!

[ DodgeDrone Challenge ]

In addition to winning the Best Student Design Competition CREATIVITY Award at HRI 2021, this paper would also have won the Best Paper Title award, if that award existed.

[ Paper ]

Robots are traditionally bound by a fixed morphology during their operational lifetime, which is limited to adapting only their control strategies. Here we present the first quadrupedal robot that can morphologically adapt to different environmental conditions in outdoor, unstructured environments.

We show that the robot exploits its training to effectively transition between different morphological configurations, exhibiting substantial performance improvements over a non-adaptive approach. The demonstrated benefits of real-world morphological adaptation point to the potential for a new embodied way of incorporating adaptation into future robotic designs.

[ Nature ]

A drone video shot in a Minneapolis bowling alley was hailed as an instant classic. One Hollywood veteran said it “adds to the language and vocabulary of cinema.” One IEEE Spectrum editor said “hey that's pretty cool.”

[ Bryant Lake Bowl ]

It doesn't take a robot to convince me to buy candy, but I think if I buy candy from Relay it's a business expense, right?

[ RIS ]

DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.

[ DARPA ACE ]

Unitree Robotics has realized that the Empire needs to be overthrown!

[ Unitree ]

Windhover Labs, an emerging leader in open and reliable flight software and hardware, announces the upcoming availability of its first hardware product, a low-cost, modular flight computer for commercial drones and small satellites.

[ Windhover ]

As robots and autonomous systems are poised to become part of our everyday lives, the University of Michigan and Ford are opening a one-of-a-kind facility where they’ll develop robots and roboticists that help make lives better, keep people safer and build a more equitable society.

[ U Michigan ]

The adaptive robot Rizon, combined with a new hybrid electrostatic and gecko-inspired gripping pad developed by Stanford BDML, can manipulate bulky, non-smooth items in an effort-saving way, broadening its applications in retail and household environments.

[ Flexiv ]

Thanks Yunfan!

I don't know why anyone would want things to get MORE icy, but if you do for some reason, you can make it happen with a Husky.

Is winter over yet?

[ Clearpath ]

Skip ahead to about 1:20 to see a pair of Gita robots following a Spot following a human like a chain of lil’ robot ducklings.

[ PFF ]

Here are a couple of retro robotics videos: one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976 (!).

[ Tachi Lab ]

Thanks Fan!

If you missed Chad Jenkins' talk “That Ain’t Right: AI Mistakes and Black Lives” last time, here's another opportunity to watch it from Robotics Today, and it includes a top-notch panel discussion at the end.

[ Robotics Today ]

Since its founding in 1979, the Robotics Institute (RI) at Carnegie Mellon University has been leading the world in robotics research and education. In the mid-1990s, RI created NREC as the applied R&D center within the Institute, with a specific mission to apply robotics technology in an impactful way to real-world applications. In this talk, I will go over numerous R&D programs that I have led at NREC in the past 25 years.

[ CMU ]

Ocean ecosystems have spatiotemporal variability and dynamic complexity that require long-term deployment of autonomous underwater vehicles for data collection. A new generation of long-range autonomous underwater vehicles (LRAUVs), such as the Slocum glider and the Tethys-class AUV, has emerged with high endurance, long range, and energy-aware capabilities. These new vehicles provide an effective solution for studying different oceanic phenomena across multiple spatial and temporal scales. For these vehicles, the ocean environment exerts forces and moments from changing water currents that are generally of the same order of magnitude as the vehicle's operational velocity. It is therefore not practical to generate a simple trajectory from an initial location to a goal location in an uncertain ocean, as the vehicle can deviate significantly from the prescribed trajectory due to disturbances resulting from water currents. Since state estimation remains challenging in underwater conditions, feedback planning must incorporate state uncertainty, which can be framed as a stochastic energy-aware path planning problem. This article presents an energy-aware feedback planning method for an LRAUV utilizing its kinematic model in an underwater environment under motion and sensor uncertainties. Our method uses ocean dynamics from a predictive ocean model to understand the water flow pattern and introduces a goal-constrained belief space to make the feedback plan synthesis computationally tractable. Energy-aware feedback plans for different water current layers are synthesized through sampling and ocean dynamics. The synthesized feedback plans provide strategies that drive the vehicle from an initial location in the environment toward the goal location. We validate our method through extensive simulations involving the Tethys vehicle's kinematic model and incorporating actual ocean model prediction data.
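To illustrate why feedback (rather than a fixed trajectory) matters here, below is a toy sketch: a kinematic AUV drifts in a water-current field, and at each step a simple feedback law picks the heading that counters the predicted current while pointing at the goal. The current field, speeds, and noise are illustrative assumptions; the paper's method synthesizes plans in a goal-constrained belief space rather than using this greedy law.

```python
import numpy as np

def current_at(p):
    """Hypothetical stand-in for a predictive ocean-model query (m/s)."""
    return np.array([0.3 * np.sin(0.1 * p[1]), 0.2])

def feedback_heading(p, goal, v=1.0):
    """Choose thrust direction so thrust + predicted current points at the goal."""
    desired = goal - p
    desired = v * desired / np.linalg.norm(desired)
    thrust = desired - current_at(p)          # compensate the predicted drift
    return thrust / np.linalg.norm(thrust)

rng = np.random.default_rng(2)
p, goal, dt = np.array([0.0, 0.0]), np.array([50.0, 30.0]), 1.0
for _ in range(200):
    u = feedback_heading(p, goal)
    # Kinematics: commanded velocity plus true current plus motion noise.
    p = p + dt * (1.0 * u + current_at(p)) + rng.normal(0, 0.05, 2)
    if np.linalg.norm(goal - p) < 1.0:
        break
print("final position:", p)
```

Because the heading is recomputed from the current state at every step, the vehicle converges on the goal even though noise and currents push it off any pre-planned path, which is the core argument for feedback plans in this setting.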

Most of what we cover in the Human Robot Interaction (HRI) space involves collaboration, because collaborative interactions tend to be productive, positive, and happy. Yay! But sometimes, collaboration is not what you want. Sometimes, you want competition.

Competition between humans and robots doesn’t have to be a bad thing, in the same way that competition between humans and humans doesn’t have to be a bad thing. There are all kinds of scenarios in which humans respond favorably to competition, and exercise is an obvious example.

Studies have shown that humans can perform significantly better when they’re exercising competitively as opposed to when they’re exercising individually. And while researchers have looked at whether robots can be effective exercise coaches (they can be), there hasn’t been a lot of exploration of physical robots actually competing directly with humans. Roboticists from the University of Washington decided to put adversarial exercise robots to the test, and they did it by giving a PR2 a giant foam sword. Awesome.

This exercise game matches a PR2 with a human in a zero-sum competitive fencing game with foam swords. Expecting the PR2 to actually be a competitive fencer isn’t realistic because, like, it’s a PR2. Instead, the objective of the game is for the human to keep their foam sword within a target area near the PR2 while also avoiding the PR2’s low-key sword-waving. A VR system allows the user to see the target area, while also giving the system a way to track the user’s location and pose.

Looks like fun, right? It’s also exercise, at least in the sense that the user’s heart rate nearly doubled over their resting heart rate during the highest scoring game. This is super preliminary research, though, and there’s still a lot of work to do. It’ll be important to figure out how skilled a competitive robot should be in order to keep providing a reasonable challenge to a human who gradually improves over time, while also being careful to avoid generating any negative reactions. For example, the robot should probably not beat you over the head with its foam sword, even if that’s a highly effective strategy for getting your heart rate up.

“Competitive Physical Human-Robot Game Play,” by Boling Yang, Xiangyu Xie, Golnaz Habibi, and Joshua R. Smith of the University of Washington and MIT, was presented as a late-breaking report at the ACM/IEEE International Conference on Human-Robot Interaction.
