Feed aggregator

A robot swarm is a decentralized system characterized by locality of sensing and communication, self-organization, and redundancy. These characteristics allow robot swarms to achieve scalability, flexibility and fault tolerance, properties that are especially valuable in the context of simultaneous localization and mapping (SLAM), specifically in unknown environments that evolve over time. So far, research in SLAM has mainly focused on single- and centralized multi-robot systems—i.e., non-swarm systems. While these systems can produce accurate maps, they are typically not scalable, cannot easily adapt to unexpected changes in the environment, and are prone to failure in hostile environments. Swarm SLAM is a promising approach as it could leverage the decentralized nature of a robot swarm to achieve scalable, flexible and fault-tolerant exploration and mapping. However, at the time of writing, swarm SLAM is a rather novel idea and the field lacks definitions, frameworks, and results. In this work, we present the concept of swarm SLAM and its constraints, both from a technical and an economic point of view. In particular, we highlight the main challenges of swarm SLAM in gathering, sharing, and retrieving information. We also discuss the strengths and weaknesses of this approach compared with traditional multi-robot SLAM. We believe that swarm SLAM will be particularly useful for producing abstract maps, such as topological or simple semantic maps, and for operating under time or cost constraints.

This article lays out the framework for relational-performative aesthetics in human-robot interaction, comprising a theoretical lens and design approach for critical practice-based inquiries into embodied meaning-making in human-robot interaction. I explore the centrality of aesthetics as a practice of embodied meaning-making by drawing on my arts-led, performance-based approach to human-robot encounters, as well as other artistic practices. Understanding social agency and meaning as being enacted through the situated dynamics of the interaction, I bring into focus a process of bodying-thinging; entangling and transforming subjects and objects in the encounter and rendering elastic boundaries in-between. Rather than serving to make the strange look more familiar, aesthetics here is about rendering the differences between humans and robots more relational. My notion of a relational-performative design approach—designing with bodying-thinging—proposes that we engage with human-robot encounters from the earliest stages of the robot design. This is where we begin to manifest boundaries that shape meaning-making and the potential for emergence, transformation, and connections arising from intra-bodily resonances (bodying-thinging). I argue that this relational-performative approach opens up new possibilities for how we design robots and how they socially participate in the encounter.

The deep tendon reflex exam is an important part of the neurological assessment of patients, consisting of two components: reflex elicitation and reflex grading. While this exam has traditionally been performed in person, with trained clinicians both eliciting and grading the reflex, this work seeks to enable the exam to be performed by novices. The COVID-19 pandemic has motivated greater utilization of telemedicine and other remote healthcare delivery tools. A smart tendon hammer that streams acceleration measurements wirelessly can differentiate correct from incorrect tapping locations with 91.5% accuracy, providing feedback to users about the appropriateness of stimulation and enabling reflex elicitation by laypeople. Survey results demonstrate that novices are also reasonably able to grade reflex responses, with a mean error of 0.2 points on a five-point scale. This work shows that, by assisting in the reflex elicitation component of the exam via a smart hammer and feedback application, novices should be able to complete the reflex exam remotely, filling a critical gap in neurological care during the COVID-19 pandemic.

Machine learning algorithms provide a way to detect misinformation based on writing style and how articles are shared.

On topics as varied as climate change and the safety of vaccines, you will find a wave of misinformation all over social media. Trust in conventional news sources may seem lower than ever, but researchers are working on ways to give people more insight into whether they can believe what they read. Researchers have been testing artificial intelligence (AI) tools that could help identify legitimate news. But how trustworthy is AI when it comes to stopping the spread of misinformation?

Researchers at the Rensselaer Polytechnic Institute (RPI) and the University of Tennessee collaborated to study the role of AI in helping people identify whether the news they’re reading is legitimate or not.

The research paper, “Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments,” was published in Computers in Human Behavior Reports. It discussed how the crowdsourcing marketplace Amazon Mechanical Turk (AMT) can be used to identify misinformation in fresh news, and how specific heuristics (rules of thumb used to process information and assess its veracity) can support that judgment. In other words, heuristics are essentially “shortcuts for decisions,” explained Dorit Nevo, an associate professor at RPI’s Lally School of Management and a lead author of the paper.

The study found that AI would be successful in flagging false stories only if the reader did not already have an opinion on the topic, Nevo said. When study subjects were set in their beliefs, confirmation bias kept them from reassessing their views.

Nevo said the first part of the project focused on whether subjects could detect misinformation around climate change and vaccines like the one designed to prevent chicken pox. Then, beginning in April 2020, her team studied how people responded to news related to COVID-19.

“With COVID-19, there was a significant difference,” Nevo said. They found that about 72 percent of respondents could identify misinformation about the coronavirus without heuristic clues, and roughly 93 percent could be convinced by the researchers’ heuristics that the content was fake.

Examples of heuristic clues include text with too many capital letters or the use of strong language, Nevo said.

There were two types of heuristics mentioned in the team’s paper: objective heuristics and source heuristics. They put a statement at the top of each article the subjects read; it instructed them to read the article and indicate whether they believed its central thesis.

“We either put a statement that says the AI finds this article reliable and accurate based on the objective heuristics, or we said the AI finds the source reliable,” Nevo said. “So that's the source heuristic.”

In her research on heuristics, Nevo found that people’s thinking takes one of two paths: The first path is to read the article, think about it and decide if they believe it; the second is to consider the source and what others think about the news, and decide whether to believe it before reading it.

Image: Dorit Nevo/RPI/IEEE Spectrum. Researchers at RPI studied the role of heuristics and AI in detecting whether people thought news was credible.

Another research paper, “Timing Matters When Correcting Fake News,” published in the Proceedings of the National Academy of Sciences by researchers at Harvard University, differed from the RPI team in its findings. While Nevo and her collaborators found that it’s easier to convince people that a story is fake news before they read it, the Harvard researchers, led by Nadia M. Brashier, a psychologist and neuroscientist, discovered that fact-checks can correct misinformation even after people have read the headlines. When study subjects read true or false labels after reading a headline, the labels produced a 25.3 percent reduction in “subsequent misclassification” compared with headlines carrying no tag, Brashier and her team found.

In the end, fighting misinformation will require both computing and human efforts, such as policy changes, says Benjamin D. Horne, an assistant professor of Information Sciences at the University of Tennessee and one of Nevo’s co-authors. He says the RPI-Tennessee work was inspired by AI tools he designed earlier as a research assistant at RPI, where he developed machine learning (ML) algorithms that can detect partial truths as well as decontextualized truths and out-of-date information.

“Our algorithms are trained on source-level behavior, both when using the textual content of an article and the network of other news sources that it draws news from,” Horne said. “We have found that these two types of features together are quite good at distinguishing between sources labeled as reliable or unreliable by external news source ratings.”

The machine learning algorithms analyze the writing style and the content-sharing behavior of news outlets, Horne said. The researchers trained a supervised ML algorithm called Random Forest, an ensemble classification method that combines the votes of many decision trees.
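
What follows is a minimal, hypothetical sketch of that style-based approach: a Random Forest trained on per-source features. The feature names, the toy labeling rule, and the synthetic data are illustrative stand-ins, not the researchers' actual dataset or pipeline.

```python
# Minimal sketch of style-based source classification with a Random Forest.
# All features, labels, and data are synthetic stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-source features, e.g. [all-caps ratio, strong-language
# score, average sentence length, share of links to low-rated sources].
X = rng.random((200, 4))
# Toy labeling rule standing in for external source ratings
# (1 = unreliable, 0 = reliable), so the example runs end to end.
y = (0.6 * X[:, 0] + 0.4 * X[:, 3] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```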

AI for Detecting Fake News

So, what’s the potential for AI to be successful in detecting misinformation?

“The tools we have developed, and other tools developed in this area, have fairly high accuracy in lab settings,” says Horne. “For example, our most recent technical work showed around 83% accuracy in predicting when the source of a news article is reliable or unreliable.”

Despite the effectiveness of algorithms, old-fashioned fact-checking by journalists will still be required to combat fake news. AI could filter the information for fact-checkers to verify, according to Horne.

“AI tools are great at dealing with high quantities of information at fast speeds but lack the nuanced analysis that a journalist or fact-checker can provide,” Horne said. “I see a future where the two work together.”

The effective disinfection of hospitals is paramount in lowering the COVID-19 transmission risk to both patients and medical personnel. Autonomous mobile robots can perform the surface disinfection task in a timely and cost-effective manner, while preventing direct contact between disinfecting agents and humans. This paper proposes an end-to-end coverage path planning technique that generates a continuous and uninterrupted collision-free path for a mobile robot to cover an area of interest. The aim of this work is to decrease the disinfection task completion time and cost by finding an optimal coverage path using a new graph-based representation of the environment. The results are compared with other existing state-of-the-art coverage path planning approaches. It is shown that the proposed approach generates a path with a shorter total travelled distance (fewer overlaps) and fewer turns.
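
To make the coverage task concrete, here is a minimal sketch of the classic boustrophedon (lawnmower) sweep that such planners are typically compared against; it is a generic baseline, not the graph-based planner proposed in the paper.

```python
# Boustrophedon (lawnmower) coverage over an occupancy grid: a standard
# coverage-path-planning baseline, NOT the paper's graph-based planner.
def boustrophedon_path(grid):
    """grid[r][c]: 0 = free cell, 1 = obstacle.
    Returns a serpentine visit order over free cells; a complete planner
    must additionally connect segments around obstacles collision-free."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        path.extend((r, c) for c in cols if row[c] == 0)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(boustrophedon_path(grid))  # serpentine order limits turns per row
```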

One of the main limiting factors in the deployment of marine robots is energy sustainability. This is particularly challenging for traditional propeller-driven autonomous underwater vehicles, which operate using energy-intensive thrusters. One emerging technology for enabling persistent performance is autonomous recharging and retasking through underwater docking stations. This paper presents an integrated navigational algorithm to facilitate reliable underwater docking of autonomous underwater vehicles. Specifically, the algorithm dynamically re-plans Dubins paths to create an efficient trajectory from the current vehicle position through approach into terminal homing. The path is followed using integral line-of-sight control until handoff to the terminal homing method. A light-tracking algorithm drives the vehicle from the handoff location into the dock. In experimental testing using an OceanServer Iver3 and a Bluefin SandShark, the approach phase reached the target handoff within 2 m in 48 of 48 tests. The terminal homing phase was capable of handling offsets of up to 5 m with approximately 70% accuracy (12 of 17 tests). In the event of failed docking, a Dubins path is generated to efficiently drive the vehicle to re-attempt docking. The vehicle should be able to successfully dock in the majority of foreseeable scenarios when re-attempts are considered. This method, when combined with recent work on docking station design, intelligent cooperative path planning, underwater communication, and underwater power transfer, will enable true persistent undersea operation in the extremely dynamic ocean environment.
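
As background on the path primitive being re-planned, here is a sketch of one Dubins word (LSL: left arc, straight, left arc). A full Dubins planner evaluates all six words (LSL, RSR, LSR, RSL, RLR, LRL) and keeps the shortest; this is a generic textbook construction, not the authors' implementation.

```python
# One Dubins word (LSL) between two poses with a minimum turning radius.
# Generic construction for illustration, not the paper's planner.
from math import atan2, cos, sin, sqrt, pi, hypot

def mod2pi(theta):
    return theta % (2.0 * pi)

def dubins_lsl(q0, q1, rho):
    """q0, q1 = (x, y, heading); rho = minimum turning radius.
    Returns (t, p, q) segment lengths in units of rho, or None if the
    LSL word is infeasible for this pose pair."""
    dx, dy = q1[0] - q0[0], q1[1] - q0[1]
    d = hypot(dx, dy) / rho                    # normalized separation
    phi = atan2(dy, dx)
    alpha, beta = mod2pi(q0[2] - phi), mod2pi(q1[2] - phi)
    sa, sb, ca, cb = sin(alpha), sin(beta), cos(alpha), cos(beta)
    p_sq = 2 + d * d - 2 * cos(alpha - beta) + 2 * d * (sa - sb)
    if p_sq < 0:
        return None
    tmp = atan2(cb - ca, d + sa - sb)
    return (mod2pi(-alpha + tmp), sqrt(p_sq), mod2pi(beta - tmp))

# Example: plan from the vehicle pose to a hypothetical dock-approach pose.
rho = 5.0
seg = dubins_lsl((0.0, 0.0, 0.0), (20.0, 5.0, pi / 2), rho)
if seg:
    print("LSL path length:", sum(seg) * rho, "m")
```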

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

This is a pretty terrible video, I think because it was harvested from WeChat, which is where Tencent decided to premiere its new quadruped robot.

Not bad, right? Its name is Max, it has a top speed of 25 kph thanks to its elbow wheels, and we know almost nothing else about it.

[ Tencent ]

Thanks Fan!

Can't bring yourself to mask-shame others? Build a robot to do it for you instead!

[ GitHub ]

Researchers at Georgia Tech have recently developed an entirely soft, long-stroke electromagnetic actuator using liquid metal, compliant magnetic composites, and silicone polymers. The robot was inspired by the motion of the Xenia coral, which pulses its polyps to circulate oxygen under water to promote photosynthesis.

In this work, power applied to soft coils generates an electromagnetic field, which causes the internal compliant magnet to move upward. This forces the squishy silicone linkages to convert linear motion to rotational motion, with an arc length of up to 42 mm and a bandwidth of up to 30 Hz. This highly deformable, fast, and long-stroke actuator topology can be utilized for a variety of applications, from biomimicry to fully-soft grasping to wearables.

[ Paper ] via [ Georgia Tech ]

Thanks Noah!

Jueying Mini Lite may look a little like a Boston Dynamics Spot, but according to DeepRobotics, its coloring is based on Bruce Lee's Kung Fu clothes.

[ DeepRobotics ]

Henrique writes, “I would like to share with you the supplementary video of our recent work accepted to ICRA 2021. The video features a quadruped and a full-size humanoid performing dynamic jumps, after a brief animated intro of what direct transcription is. Me and my colleagues have put a lot of hard work into this, and I am very proud of the results.”

Making big robots jump is definitely something to be proud of!

[ SLMC Edinburgh ]

Thanks Henrique!

The finals of the Powered Exoskeleton Race for Cybathlon Global 2020.

[ Cybathlon ]

Thanks Fan!

It's nice that every once in a while, the world can get excited about science and robots.

[ NASA ]

Playing the Imperial March over footage of an army of black quadrupeds may not be sending quite the right message.

[ Unitree ]

Kod*lab PhD students Abriana Stewart-Height, Diego Caporale, and Wei-Hsi Chen, with former Kod*lab student Garrett Wenger, were on set in the summer of 2019 to operate RHex for the filming of Lapsis, a first feature film by director and screenwriter Noah Hutton.

[ Kod*lab ]

In class 2.008, Design and Manufacturing II, mechanical engineering students at MIT learn the fundamental principles of manufacturing at scale by designing and producing their own yo-yos. Instructors stress the importance of sustainable practices in the global supply chain.

[ MIT ]

A short history of robotics, from ABB.

[ ABB ]

In this paper, we propose a whole-body planning framework that unifies dynamic locomotion and manipulation tasks by formulating a single multi-contact optimal control problem. This is demonstrated in a set of real hardware experiments performed in free motion, such as base or end-effector pose tracking, and while pushing/pulling a heavy resistive door. Robustness against model mismatches and external disturbances is also verified during these test cases.

[ Paper ]

This paper presents PANTHER, a real-time perception-aware (PA) trajectory planner in dynamic environments. PANTHER plans trajectories that avoid dynamic obstacles while also keeping them in the sensor field of view (FOV) and minimizing the blur to aid in object tracking.

Extensive hardware experiments in unknown dynamic environments with all the computation running onboard are presented, with velocities of up to 5.8 m/s, and with relative velocities (with respect to the obstacles) of up to 6.3 m/s. The only sensors used are an IMU, a forward-facing depth camera, and a downward-facing monocular camera.

[ MIT ]

With our SaaS solution, we enable robots to inspect industrial facilities. One of the robots our software supports is the Boston Dynamics Spot robot. In this video we demonstrate how autonomous industrial inspection with the Boston Dynamics Spot robot is performed with our teach-and-repeat solution.

[ Energy Robotics ]

In this week’s episode of Tech on Deck, learn about our first technology demonstration sent to Station: The Robotic Refueling Mission. This tech demo helped us develop the tools and techniques needed to robotically refuel a satellite in space, an important capability for space exploration.

[ NASA ]

At Covariant we are committed to research and development that will bring AI Robotics to the real world. As a part of this, we believe it's important to educate individuals on how these exciting innovations will make a positive, fundamental and global impact for years to come. In this presentation, our co-founder Pieter Abbeel breaks down his thoughts on the current state of play for AI robotics.

[ Covariant ]

How do you fly a helicopter on Mars? It takes Ingenuity and Perseverance. During this technology demo, Farah Alibay and Tim Canham will get into the details of how these craft will manage this incredible task.

[ NASA ]

Complex real-world environments continue to present significant challenges for fielding robotic teams, which often face expansive spatial scales, difficult and dynamic terrain, degraded environmental conditions, and severe communication constraints. Breakthrough technologies call for integrated solutions across autonomy, perception, networking, mobility, and human teaming thrusts. As such, the DARPA OFFSET program and the DARPA Subterranean Challenge seek novel approaches and new insights for discovering and demonstrating these innovative technologies, to help close critical gaps for robotic operations in complex urban and underground environments.

[ UPenn ]

Living beings modulate the impedance of their joints to interact proficiently, robustly, and safely with the environment. These observations inspired the design of soft articulated robots with the development of Variable Impedance and Variable Stiffness Actuators. However, designing them remains a challenging task due to their mechanical complexity, encumbrance, and weight, but also due to the different specifications that the wide range of applications requires. For instance, as prostheses or parts of humanoid systems, there is currently a need for multi-degree-of-freedom joints that have abilities similar to those of human articulations. Toward this goal, we propose a new compact and configurable design for a two-degree-of-freedom variable stiffness joint that can match the passive behavior of a human wrist and ankle. Using only three motors, this joint can control its equilibrium orientation around two perpendicular axes and its overall stiffness as a one-dimensional parameter, like the co-contraction of human muscles. The kinematic architecture builds upon a state-of-the-art rigid parallel mechanism with the addition of nonlinear elastic elements to allow the control of the stiffness. The mechanical parameters of the proposed system can be optimized to match desired passive compliant behaviors and to fit various applications (e.g., prosthetic wrists or ankles, artificial wrists, etc.). After describing the joint structure, we detail the kinetostatic analysis to derive the compliant behavior as a function of the design parameters and to prove the variable stiffness ability of the system. Besides, we provide sets of design parameters to match the passive compliance of either a human wrist or ankle. Moreover, to show the versatility of the proposed joint architecture and as guidelines for the future designer, we describe the influence of the main design parameters on the system stiffness characteristic and show the potential of the design for more complex applications.
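
The co-contraction analogy can be illustrated with the textbook antagonistic-spring model below. The quadratic spring law and the gains are generic assumptions for illustration, not the paper's kinetostatic model.

```python
# Toy antagonistic joint: two opposing springs with quadratic torque laws.
# Raising the common pretension c ("co-contraction") stiffens the joint
# without shifting its equilibrium. Generic model, not the paper's design.
def net_torque(theta, c, k=1.0):
    """Net joint torque for agonist/antagonist springs pretensioned by c."""
    return k * (c + theta) ** 2 - k * (c - theta) ** 2   # = 4*k*c*theta

def stiffness(c, k=1.0):
    """d(torque)/d(theta): grows linearly with co-contraction c."""
    return 4.0 * k * c

for c in (0.1, 0.5, 1.0):
    print(f"co-contraction {c:.1f} -> stiffness {stiffness(c):.1f} Nm/rad")
```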

To further advance closed-loop control for soft robotics, suitable sensor and modeling strategies have to be investigated. Although many flexible and soft sensors are available, integrating them into the actuator and using them in a control loop remains challenging. Therefore, a state-space model for closed-loop low-level control of a fiber-reinforced actuator using pressure and orientation measurements is investigated. To do so, the integration of an inertial measurement unit and geometric modeling of the actuator are presented. The piecewise constant curvature approach is used to describe the actuator’s shape and deformation variables. For low-level control, the chambers’ lengths are reconstructed from bending angles with a geometrical model and the identified material characteristics. For parameter identification and model validation, data from a camera tracking system is analyzed. Then, closed-loop control of the pressure and chamber lengths of the actuator is investigated. It is shown that the reconstruction model is suitable for estimating the state variables of the actuator. In addition, the inertial measurement unit proves to be a cost-effective and compact sensor for soft pneumatic actuators.
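
For reference, a minimal sketch of the piecewise-constant-curvature geometry used for such reconstructions follows. The three-chamber layout, symbols, and numbers are generic PCC assumptions, not the specific model identified in the paper.

```python
# Chamber-length reconstruction under the piecewise constant curvature
# (PCC) assumption: generic three-chamber layout for illustration only.
from math import cos, pi

def chamber_lengths(L, theta, phi, d):
    """L: backbone rest length (m); theta: total bending angle (rad);
    phi: bending-plane direction (rad); d: chamber radial offset (m).
    Returns the three chamber arc lengths under the PCC assumption."""
    psis = (0.0, 2 * pi / 3, 4 * pi / 3)     # chamber angular positions
    return [L - theta * d * cos(psi - phi) for psi in psis]

# Example: a 0.12 m actuator bent 30 degrees toward chamber 1.
print(chamber_lengths(L=0.12, theta=pi / 6, phi=0.0, d=0.01))
```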

I’ll admit to having been somewhat skeptical about the strategy of dangling payloads on long tethers for drone delivery. I mean, I get why Wing does it—it keeps the drone and all of its spinny bits well away from untrained users while preserving the capability of making deliveries to very specific areas that may have nearby obstacles. But it also seems like you’re adding some risk as well, because once your payload is out on that long tether, it’s more or less out of your control in at least two axes. And you can forget about your drone doing anything while this is going on, because who the heck knows what’s going to happen to your payload if the drone starts moving around?

NYU roboticists, that’s who.

This research is by Guanrui Li, Alex Tunchez, and Giuseppe Loianno at the Agile Robotics and Perception Lab (ARPL) at NYU. As you can see from the video, the drone makes keeping rock-solid control over that suspended payload look easy, but it’s very much not, especially considering that everything you see is running onboard the drone itself at 500 Hz—all it takes is an IMU and a downward-facing monocular camera, along with the drone’s Snapdragon processor.

To get this to work, the drone has to be thinking about two things. First, there’s state estimation, which is the behavior of the drone itself along with its payload at the end of the tether. The drone figures this out by watching how the payload moves using its camera and tracking its own movement with its IMU. Second, there’s predicting what the payload is going to do next, and how that jibes (or not) with what the drone wants to do next. The researchers developed a model predictive control (MPC) system for this, with some added perception constraints to make sure that the behavior of the drone keeps the payload in view of the camera. 
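
To give a flavor of that perception constraint, here is a toy planar sketch: propagate a linearized cable-swing model under candidate drone accelerations and reject commands that would swing the payload out of the downward camera's field of view. The dynamics and all parameters are illustrative simplifications, not the PCMPC formulation.

```python
# Toy visibility check for a cable-suspended payload under a downward
# camera. Linearized planar pendulum; illustrative only, not PCMPC.
import numpy as np

G, CABLE, FOV_HALF = 9.81, 1.0, np.radians(30.0)

def swing_angles(accel_cmds, dt=0.02, phi=0.0, dphi=0.0):
    """Propagate cable swing angle phi under horizontal drone
    accelerations a: linearized pendulum CABLE*phi'' = -G*phi - a."""
    angles = []
    for a in accel_cmds:
        dphi += (-(G * phi + a) / CABLE) * dt
        phi += dphi * dt
        angles.append(phi)
    return np.array(angles)

def payload_visible(angles):
    # With the camera looking straight down from the drone, the payload's
    # bearing in the image equals the swing angle here, so visibility
    # reduces to |phi| <= half the field of view.
    return bool(np.all(np.abs(angles) <= FOV_HALF))

cmds = np.full(100, 2.0)   # candidate: 2 m/s^2 sideways for 2 seconds
print(payload_visible(swing_angles(cmds)))
```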

At the moment, the top speed of the system is 4 m/s, but it sounds like rather than increasing the speed of a single payload-swinging drone, the next steps will be to make the overall system more complicated by somehow using multiple drones to cooperatively manage tethered payloads that are too big or heavy for one drone to handle alone.

For more on this, we spoke with Giuseppe Loianno, head of the ARPL.

IEEE Spectrum: We've seen some examples of delivery drones delivering suspended loads. How will this work improve their capabilities?

Giuseppe Loianno: For the first time, we jointly design a perception-constrained model predictive control and state estimation approaches to enable the autonomy of a quadrotor with a cable suspended payload using onboard sensing and computation. The proposed control method guarantees the visibility of the payload in the robot camera as well as the respect of the system dynamics and actuator constraints. These are critical design aspects to guarantee safety and resilience for such a complex and delicate task involving transportation of objects.

The additional challenge involves the fact that we aim to solve the aforementioned problem using a minimal sensor suite for autonomous navigation made by a single camera and IMU. This is an ambitious goal since it concurrently involves estimating the load and the vehicle states. Previous approaches leverage GPS or motion capture systems for state estimation and do not consider the perception and physical constraints when solving the problem. We are confident that our solution will contribute to making a reality the autonomous delivery process in warehouses or in dense urban areas where the GPS signal is currently absent or shadowed.

Will it make a difference to delivery systems that use an actuated cable and only leave the load suspended for the delivery itself?

This is certainly an interesting question. We believe that adding an actuated cable would introduce more disadvantages than benefits. Certainly, an actuated cable can be leveraged to compensate for the cable’s swinging motion in windy conditions and/or to increase delivery precision. However, the introduction of additional actuated mechanisms and components comes at the price of increased system mass and inertia. This will reduce the overall flight time and the vehicle’s agility, as well as the system’s resilience with respect to the transportation task. Finally, active mechanisms are also more difficult to design than passive ones.

What's challenging about doing all of this on-vehicle?

There are several challenges in solving this problem on board. First, it is very difficult to concurrently run perception and action in real time on such computationally constrained platforms. Second, the first aspect becomes even more challenging if we consider, as in our case, a perception-constrained receding-horizon control problem that aims to guarantee the visibility of the payload during the motion while concurrently respecting all of the system’s physical and sensing limitations. Finally, it has been challenging to run the entire system at a high rate to fully unleash the system’s agility. We are currently able to reach rates of 500 Hz.

Can your method adapt to loads of varying shapes, sizes, and masses? What about aerodynamics or flying in wind?

Technically, our approach can easily be adapted to varying object sizes and masses. Our previous contributions have already shown the ability to estimate online changes in the vehicle/load configuration, and they can potentially be used to operate the proposed system in dynamic conditions, where the load’s characteristics are unknown and/or may vary across consecutive flights. This can be useful for both package delivery and warehouse operations, where different types of objects need to be transported or manipulated.

The aerodynamics problem is a great point. Overall, our past work has investigated the aerodynamics of wind disturbances for a single robot without a load. Formulating these problems for the proposed system is challenging and is still an open research question. We have some ideas to approach this problem combining Bayesian estimation techniques with more recent machine learning approaches and we will tackle it in the near future.

What are the limitations on the performance of the system? How fast and agile can it be with a suspended payload? 

The limits of performance are set by the actuation and sensing systems. Our approach intrinsically considers both the physical and the sensing limitations of our system. From a sensing and computation perspective, we believe we are close to the limits, with speeds of up to 4 m/s. Faster speeds can potentially introduce motion blur while decreasing the load-tracking precision. Moreover, faster motions will also increase the aerodynamic disturbances we have just mentioned. In the future, modeling these phenomena and incorporating them into the proposed solution can further push the agility.

Your paper talks about extending this approach to multiple vehicles cooperatively transporting a payload, can you tell us more about that?

We are currently working on a distributed perception and control approach for cooperative transportation. We already have some very exciting results that we will share with you very soon! Overall, we can employ a team of aerial robots to cooperatively transport a payload, increasing the payload capacity and endowing the system with additional resilience in case of vehicle failures. A cooperative cable-suspended payload transportation system also allows concurrent and independent control of the load’s position and orientation. This is not possible using rigid connections alone. We believe that our approach will have a strong impact in real-world settings for delivery and construction in warehouses and in GPS-denied environments such as dense urban areas. Moreover, in post-disaster scenarios, a team of physically interconnected aerial robots can deliver supplies and establish communication in areas where the GPS signal is intermittent or unavailable.

PCMPC: Perception-Constrained Model Predictive Control for Quadrotors with Suspended Loads using a Single Camera and IMU, by Guanrui Li, Alex Tunchez, and Giuseppe Loianno from NYU, will be presented (virtually) at ICRA 2021.

We introduce a soft robot actuator composed of a pre-stressed elastomer film embedded with shape memory alloy (SMA) and a liquid metal (LM) curvature sensor. SMA-based actuators are commonly used as electrically-powered limbs to enable walking, crawling, and swimming of soft robots. However, they are susceptible to overheating and long-term degradation if they are electrically stimulated before they have time to mechanically recover from their previous activation cycle. Here, we address this by embedding the soft actuator with a capacitive LM sensor capable of measuring bending curvature. The soft sensor is thin and elastic and can track curvature changes without significantly altering the natural mechanical properties of the soft actuator. We show that the sensor can be incorporated into a closed-loop “bang-bang” controller to ensure that the actuator fully relaxes to its natural curvature before the next activation cycle. In this way, the activation frequency of the actuator can be dynamically adapted for continuous, cyclic actuation. Moreover, in the special case of slower, low power actuation, we can use the embedded curvature sensor as feedback for achieving partial actuation and limiting the amount of curvature change.
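
The activation logic can be pictured with the short sketch below: fire the SMA only after the liquid-metal sensor reports that the actuator has relaxed back to its natural curvature. The sensor and driver interfaces (read_curvature, set_sma_power) and all thresholds are hypothetical placeholders, not the paper's hardware API.

```python
# Sketch of the closed-loop "bang-bang" activation logic described above.
# read_curvature and set_sma_power are hypothetical hardware interfaces.
import time

REST_CURVATURE = 0.0   # sensor reading at the actuator's natural curvature
TOLERANCE = 0.05       # how close to rest counts as "fully recovered"
PULSE_S = 0.4          # SMA activation pulse duration (s)

def bang_bang_cycles(read_curvature, set_sma_power, cycles=10):
    for _ in range(cycles):
        set_sma_power(True)       # activate: the actuator curls
        time.sleep(PULSE_S)
        set_sma_power(False)      # release, then wait for mechanical recovery
        while abs(read_curvature() - REST_CURVATURE) > TOLERANCE:
            time.sleep(0.01)      # the cycle rate adapts to actual recovery

# Tiny simulated run: curvature decays toward rest after each pulse.
state = {"curv": 0.0}
def set_sma_power(on):
    if on:
        state["curv"] = 1.0       # a pulse bends the actuator
def read_curvature():
    state["curv"] *= 0.7          # passive relaxation between reads
    return state["curv"]

bang_bang_cycles(read_curvature, set_sma_power, cycles=3)
print("done: each cycle waited for recovery before re-firing")
```

Never re-firing before recovery is precisely how the controller avoids the overheating and degradation described in the abstract.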

Model-Based Reinforcement Learning (MBRL) algorithms have been shown to have an advantage in data efficiency, but they are often overshadowed by state-of-the-art model-free methods in performance, especially when facing high-dimensional and complex problems. In this work, a novel MBRL method is proposed, called Risk-Aware Model-Based Control (RAMCO). It combines uncertainty-aware deep dynamics models with the risk assessment technique Conditional Value at Risk (CVaR). This mechanism is appropriate for real-world applications since it takes epistemic risk into consideration. In addition, we use a model-free solver to produce warm-up training data; this setting improves performance in low-dimensional environments and compensates for MBRL’s inherent weaknesses in high-dimensional scenarios. In comparison with other state-of-the-art reinforcement learning algorithms, we show that it produces superior results on a walking robot model. We also evaluate the method in an Eidos environment, a novel experimental setup that uses multi-dimensional, randomly initialized deep neural networks to measure the performance of any reinforcement learning algorithm, and the advantages of RAMCO are highlighted.
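
As a reference point, the risk measure itself is simple to state: the CVaR at level alpha is the expected return over the worst alpha fraction of outcomes. Below is a minimal sample-based sketch; the sampled returns are illustrative, not RAMCO's learned predictions.

```python
# Sample-based Conditional Value at Risk (CVaR): the mean of the worst
# alpha-fraction of sampled returns. Illustrative data, not RAMCO output.
import numpy as np

def cvar(returns, alpha=0.1):
    """Expected return over the worst alpha-fraction of samples."""
    returns = np.sort(np.asarray(returns))        # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

samples = np.random.default_rng(0).normal(1.0, 0.5, size=1000)
print("mean return:", samples.mean())
print("CVaR(alpha=0.1):", cvar(samples))  # a risk-aware agent optimizes this
```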

Over the past two decades, scholars have developed various unmanned sailboat platforms, but most of them have specialized designs and controllers. While these robotic sailboats perform well and come with open-source designs, it is hard for interested researchers or hobbyists to follow those designs and build their own sailboats from them. Thus, in this paper, a generic and flexible unmanned sailboat platform with easy access to the hardware and software architectures is designed and tested. A commonly used 1-m class RC racing sailboat was fitted with a Pixhawk V2.4.8, an Arduino Mega 2560, an M8N GPS module, a custom-designed wind direction sensor, and a 433 MHz wireless telemetry link. Widely used open-source hardware modules were selected to keep the setup reliable and low-cost, emphasizing the generality and feasibility of the platform. In the software architecture, the Pixhawk V2.4.8 provides reliable state feedback. The Arduino Mega 2560 receives estimated states from the Pixhawk V2.4.8 and the wind vane sensor, and then controls the rudder and sail servo actuators using simplified algorithms. To avoid the complexity of introducing the Robot Operating System and its packages, we designed a generic but real-time software architecture using only the Arduino Mega 2560. A suitable line-of-sight guidance strategy and PID-based controllers let the autonomous sailboat sail through user-defined waypoints. Field tests validated the sailing performance on World Robotic Sailing Championship (WRSC) challenge tasks. Results in fleet racing, station keeping, and area scanning proved that our design and algorithms could control the 1-m class RC sailboat with acceptable accuracy. The proposed design and algorithms contribute to the development of educational, low-cost, micro-class autonomous sailboats with accessible, generic, and flexible hardware and software. Besides, our platform also helps readers develop similar sailboats with more focus on their missions.
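
A minimal sketch of the named guidance-and-control scheme (line-of-sight waypoint guidance feeding a PID rudder loop) follows. The lookahead distance, gains, and geometry are illustrative values, not the ones used on the boat.

```python
# Line-of-sight (LOS) waypoint guidance plus a PID rudder loop: a generic
# sketch of the scheme named above, with illustrative gains and geometry.
from math import atan2, pi

def los_heading(pos, wp_prev, wp_next, lookahead=2.0):
    """Desired heading: aim at a point 'lookahead' meters along the track
    past the boat's projection onto the segment wp_prev -> wp_next."""
    px, py = wp_next[0] - wp_prev[0], wp_next[1] - wp_prev[1]
    seg_len_sq = px * px + py * py
    s = ((pos[0] - wp_prev[0]) * px + (pos[1] - wp_prev[1]) * py) / seg_len_sq
    step = lookahead / seg_len_sq ** 0.5
    tx, ty = wp_prev[0] + (s + step) * px, wp_prev[1] + (s + step) * py
    return atan2(ty - pos[1], tx - pos[0])

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        err = (err + pi) % (2 * pi) - pi   # wrap heading error to [-pi, pi]
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

rudder = PID(kp=1.2, ki=0.05, kd=0.3)
desired = los_heading(pos=(1.0, 0.5), wp_prev=(0.0, 0.0), wp_next=(10.0, 0.0))
print("rudder command:", rudder.step(desired - 0.3, dt=0.1))  # 0.3 = heading
```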

Soft robots are ideal for underwater manipulation in sampling and other servicing applications. Their unique features of compliance, adaptability, and being naturally waterproof enable robotic designs that are compact and lightweight while achieving uncompromised dexterity and flexibility. However, the inherent flexibility and high nonlinearity of soft materials also result in combined complex motions, which creates challenges for both soft actuators and sensors in force output, modeling, and sensory feedback, especially in highly dynamic underwater environments. To tackle these limitations, a novel Soft Origami Optical-Sensing Actuator (SOSA) integrating actuation and sensing is proposed in this paper. Inspired by origami art, the proposed sensorized actuator enables large force output, contraction/elongation/passive bending actuation by fluid, and hybrid motion sensing with optical waveguides. The SOSA design brings two major novelties over current designs. First, it involves a new actuation-sensing mode that enables a superior payload capacity and robust, accurate sensing performance by introducing the origami design, significantly facilitating the integration of sensing and actuation technology for wider applications. Second, it simplifies the fabrication process for harsh-environment applications by exploiting the boundary features between the optical waveguides and the ambient water, meaning the external cladding layer of traditional sensors is unnecessary. With these merits, the proposed actuator can be applied in harsh environments for complex interaction/operation tasks. To showcase the performance of the proposed SOSA actuator, a hybrid underwater 3-DOF manipulator has been developed. The entire workflow of concept design, fabrication, modeling, experimental validation, and application is presented in detail as a reference for wider robot-environment applications.

When NASA first sent humans to the moon, astronauts often made risky blind landings on the lunar surface because of billowing dust clouds churned up during their descent. Astronauts could avoid repeating those harrowing experiences during future missions to the moon with the help of a 3D-printed lunar landing pad designed by a NASA-backed student team.

The landing pad developed by students from 10 U.S. universities and colleges is shaped to minimize the lunar dust clouds stirred up by rocket landing burns and could eventually be made from lunar regolith material found on the moon. A prototype of the pad is scheduled to undergo a rocket hot fire test under the watchful eye of both students and NASA engineers at Camp Swift, Texas, in early March.

“We showed that you can 3D print the structure with our existing prototype,” says Helen Carson, a material science and engineering student at the University of Washington in Seattle and a principal investigator for the Lunar PAD team. “For now, we have a lot of flexibility with different directions we can take depending on how the materials develop.”

Such a Lunar PAD concept could prove especially helpful with NASA’s current roadmap aimed at returning humans to the moon through the Artemis Program; the U.S. space agency has already issued contracts to companies such as SpaceX, Blue Origin, and Dynetics to start developing ideas for a human lunar lander. Any future moon landings could benefit from reducing the risk of possible catastrophe that comes from flying blind in a dust cloud. Furthermore, dust and rocks accelerated to high speeds by engine exhaust could pose a serious danger to astronauts, robots, or other equipment already on the surface of the moon.

The Lunar PAD team first came together during NASA’s L’SPACE (Lucy Student Pipeline Accelerator and Competency Enabler) Virtual Academy held in the summer of 2019. Carson and her colleagues won funding from the NASA Proposal Writing and Evaluation process to move forward on the project and to make a presentation at NASA Marshall Space Flight Center in June 2020. At that event, additional funding was awarded so that the team could print and test their pad prototype. The students also presented a paper on Lunar PAD at the AIAA SciTech Forum and Exposition that was held 19-21 January 2021.

Image: Lunar PAD Team. The multidisciplinary, multiuniversity team has come up with a solution to a problem that astronauts would most certainly face when humans return to the moon.

The team’s early idea included creating an inflatable deflector that would be inflated by the rocket engine exhaust and block any debris blasted outward from the landing (or launch) zone of the pad. But that would have required transporting flexible yet durable materials manufactured on Earth to the moon.

“That got pretty complicated with material choice and design, and the actual transportation of it,” says Luke Martin, a mechanical engineering student at Arizona State University. “So we tried coming up with other more in-situ resource ideas.”

Lunar PAD currently has a top surface layer where rockets and lunar landers could both land and launch. But the key to mitigating the worst of any dust clouds or small particles accelerated to high velocities is the open interior space of the pad that sits below the top layer. Slanted grates in the top layer would channel the rocket exhaust into the interior space.

The pad’s interior includes vent dividers—some shaped like teardrops or leaflets—that help channel the rocket exhaust and any accompanying dust or rock particles outward from the center of the pad. The cosmetically appealing layout of the vent dividers—which some liken to flower petals—proved to be the most efficient pattern that came out of numerous iterations tested through flow simulations.

“It's very practical, very efficient, and just so happens to also be very beautiful,” says Vincent Murai, a mechanical engineering student at Kapiolani Community College in Honolulu.

The exhaust and any accompanying particles leave the pad’s interior space through specific exits, called Kinetic Energy Diffusers, embedded in the outside walls of the circular pad. Such diffusers consist of hollow rectangular blocks that could also include fans to convert some of the rocket exhaust’s excess energy into circular fan motion and block some particles with the turning fan blades.

Any high-velocity particles that get through the fans would also encounter deflectors placed right outside the exits in the full-scale version of the pad. And an “apron” surrounding the landing pad would also include a perimeter deflector wall to direct any remaining exhaust-propelled particles up and away from any nearby spacecraft, people, or structures.

The subscale prototype of the pad was manufactured by a gantry-style 3D printer developed by the Austin-based company ICON. The company is already working with NASA to adapt its 3D printing technology for space-based construction on the moon and Mars.

3D printing the main layers of the subscale pad prototype took just one day. The team also spent three additional days on tasks such as using the printer to fill various components with concrete and patching or smoothing certain parts of the pad. People also had to manually install fiber optic sensors to detect changes in strain and temperature.

But the most labor-intensive and hands-on part of the construction involved trimming and placing pre-cut blocks of water-soluble foam to provide temporary structural support for overhanging areas of the pad. Full-scale construction of such a pad on the moon or Mars would require a different and ideally more efficient solution for providing such removable supports.

"It became especially apparent after a few days of cutting and wrapping and inserting foam that it's probably not the best use of an astronaut's time," says Andres Campbell, an integrated engineering student with an emphasis on aerospace engineering at Minnesota State University in Mankato and a principal investigator for the team. "This would also be something that would be robotically complex to do."

In any case, a full-scale and operational Lunar PAD would not have to handle the dust mitigation work on its own. For example, Carson originally proposed an electrodynamic dust shielding technology that would passively push dust off the landing pad by taking advantage of the charged nature of lunar dust. Automated cleaning tools, such as what Campbell described as a "space Roomba" robot, could also help keep the launch and landing zone dust-free.

“The idea that you can combine the pad with not just electrodynamic dust shielding but any sort of passive dust mitigation system is still worth consideration,” Carson says. “Because in addition to that pad, you would still have dust that could be kicked up from other activities on the surface.”

The 3D-printed pad concept could eventually prove useful for future missions to Mars and other destinations. Such pad designs would have to account for the differing effects of atmosphere and gravity on rocket plumes and dust clouds, not to mention factors such as the moon's electrostatically charged dust particles and Martian dust storms. Still, the team designed the pad to potentially work beyond lunar landing scenarios.

“Our goal was to build a reusable pad for all extraterrestrial environments,” Murai says.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumble bee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

[ MIT ]

National Robotics Week is April 3-11, 2021!

[ NRW ]

This is in a motion capture environment, but still, super impressive!

[ Paper ]

Thanks Fan!

Why wait for Boston Dynamics to add an arm to your Spot if you can just do it yourself?

[ ETHZ ]

This video shows the deep-sea free swimming of a soft robot in the South China Sea. The soft robot was grasped by a robotic arm on the 'HAIMA' ROV and carried to the bottom of the South China Sea (at a depth of 3,224 m). After release, the soft robot was actuated with an on-board AC voltage of 8 kV at 1 Hz and demonstrated free-swimming locomotion with its flapping fins.

Um, did they bring it back?

[ Nature ]

Quadruped Yuki Mini is a 12-DOF robot equipped with a Raspberry Pi running ROS. Also, BUNNIES!

[ Lingkang Zhang ]

Thanks Lingkang!

Deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. The vswarm package enables decentralized vision-based control of drone swarms without relying on inter-agent communication or visual fiducial markers. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions.
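
The control idea at the core of this, each drone steering from nothing but the relative positions of the neighbors it sees, can be sketched as a Reynolds-style flocking rule. The sketch below is only an illustration of that idea, not the vswarm implementation; the function name, gains, and separation radius are all assumptions.

    import numpy as np

    def flocking_command(rel_neighbors, migration_dir,
                         r_sep=1.5, w_coh=1.0, w_sep=2.0, w_mig=0.5):
        # rel_neighbors: (N, 3) positions of visually detected neighbors
        # relative to this drone; migration_dir: unit vector toward the
        # shared goal. Gains and radius are illustrative assumptions.
        rel = np.asarray(rel_neighbors, dtype=float)
        if rel.size == 0:                    # no neighbors currently in view
            return w_mig * np.asarray(migration_dir)
        cohesion = rel.mean(axis=0)          # steer toward the local centroid
        separation = np.zeros(3)
        for p in rel:                        # repel from close neighbors
            d = np.linalg.norm(p)
            if d < r_sep:
                separation -= p / max(d * d, 1e-6)
        return w_coh * cohesion + w_sep * separation + w_mig * np.asarray(migration_dir)

Because every term depends only on what the drone itself perceives, a rule of this shape needs neither radio links nor fiducial markers, which is the property the vswarm results demonstrate.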

[ Vswarm ]

A conventionally adopted method for operating a waiter robot is based on static position control, where pre-defined goal positions are marked on a map. However, this solution is not optimal in a dynamic setting, such as a coffee shop or an outdoor catering event, because customers often change their positions. We explore an alternative human-robot interface design in which a human operator instead communicates the identity of the customer to the robot. Inspired by how a human communicates, we propose a framework for communicating a visual goal to the robot through interactive two-way communication.
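
As a rough illustration of that interface, the loop below lets the operator point out a customer among the robot's person detections and has the robot confirm its pick before navigating. This is a hypothetical sketch, and every name in it is an assumption rather than the authors' code.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        track_id: int        # persistent ID from a person tracker
        position: tuple      # (x, y) goal in the map frame

    def serve_customer(detections, picked_id, confirm, navigate):
        # detections: current person detections; picked_id: the customer the
        # operator pointed out (assumed currently visible); confirm and
        # navigate: robot-side callbacks.
        goal = next(d for d in detections if d.track_id == picked_id)
        if confirm(goal):    # two-way step: the robot shows its pick first
            navigate(goal.position)

The confirmation step is what makes the communication two-way: the goal is agreed upon rather than simply commanded, and it tracks the customer's current position instead of a fixed spot on the map.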

[ Paper ]

Thanks Poramate!

In this video, LOLA reacts to undetected ground height changes, including a drop and a leg-in-hole experiment. Further tests show its robustness to vertical disturbances using a seesaw. The robot is technically blind, not using any camera-based or prior information on the terrain.

[ TUM ]

RaiSim is a cross-platform multi-body physics engine for robotics and AI. It fully supports Linux, Mac OS, and Windows.
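
For a sense of what the engine looks like in use, a minimal simulation loop might read as follows. This assumes the raisimpy Python bindings, whose calls mirror RaiSim's documented C++ API; the exact names may vary by version, and the file paths are placeholders.

    import raisimpy as raisim

    raisim.World.setLicenseFile("activation.raisim")  # RaiSim requires a key
    world = raisim.World()
    world.setTimeStep(0.002)                          # 500 Hz physics steps
    ground = world.addGround()                        # infinite ground plane
    robot = world.addArticulatedSystem("/path/to/robot.urdf")

    for _ in range(5000):                             # ~10 s of simulated time
        world.integrate()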

[ RaiSim ]

Thanks Fan!

The next generation of LoCoBot is here. The LoCoBot is a ROS research rover for mapping, navigation, and (optional) manipulation that enables researchers, educators, and students alike to focus on high-level code development instead of hardware and lower-level code. Development on the LoCoBot is simplified by open-source software, full ROS mapping and navigation packages, and a modular open-source Python API that lets users move the platform, as well as the (optional) manipulator, in as few as 10 lines of code.
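
To put the "10 lines of code" claim in perspective, a base-plus-arm script in the PyRobot interface that earlier LoCoBots shipped with looks roughly like this; the current generation's API may differ, so treat the names below as an assumption.

    from pyrobot import Robot

    robot = Robot('locobot')                    # brings up base, arm, and camera
    robot.base.go_to_relative([1.0, 0.0, 0.0])  # drive 1 m straight ahead
    robot.arm.go_home()                         # return the arm to its home pose
    robot.arm.set_joint_positions([0.4, 0.5, 0.3, -0.5, 0.0])
    robot.gripper.open()
    robot.gripper.close()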

[ Trossen ]

MIT Media Lab Research Specialist Dr. Kate Darling looks at how robots are portrayed in popular film and TV shows.

Kate's book, The New Breed: What Our History with Animals Reveals about Our Future with Robots can be pre-ordered now and comes out next month.

[ Kate Darling ]

The current autonomous mobility systems for planetary exploration are wheeled rovers, limited to flat, gently sloping terrains and agglomerate regolith. These vehicles cannot tolerate instability and so operate within a low-risk envelope (i.e., low-incline driving to avoid toppling). Here, we present 'Mars Dogs' (MD), four-legged robotic dogs, the next evolution of extreme planetary exploration.

[ Team CoSTAR ]

In 2020, first-year PhD students at the MIT Media Lab were tasked with a special project—to reimagine the Lab and write sci-fi stories about the MIT Media Lab in the year 2050. "But, we are researchers. We don't only write fiction, we also do science! So, we did what scientists do! We used a secret time machine under the MIT dome to go to the year 2050 and see what's going on there! Luckily, the Media Lab still exists and we met someone…really cool!" Enjoy this interview with Cyber Joe, AI Mentor for MIT Media Lab Students of 2050.

[ MIT ]

In this talk, we will give an overview of the diverse research we do at CSIRO's Robotics and Autonomous Systems Group and delve into some specific technologies we have developed, including SLAM and legged robotics. We will also give insights into CSIRO's participation in the ongoing DARPA Subterranean Challenge, where we are deploying a fleet of heterogeneous robots into GPS-denied, unknown underground environments.

[ GRASP Seminar ]

Marco Hutter (ETH) and Hae-Won Park (KAIST) talk about “Robotics Inspired by Nature.”

[ Swiss-Korean Science Club ]

Thanks Fan!

In this keynote, Guy Hoffman, Assistant Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University, discusses "The Social Uncanny of Robotic Companions."

[ Designerly HRI ]
