Feed aggregator



Everybody likes watching robots fall over. We get it, it’s funny. And we here at IEEE Spectrum are as guilty as anyone of making it a thing: Our compilation of robots falling down at the DARPA Robotics Challenge eight years ago has several million views on YouTube. But a couple of months ago, Agility Robotics shared a video of one of its Digit robots collapsing while stacking boxes during the ProMat trade show, and the clip went nuts across Twitter, TikTok, and Instagram. Agility eventually issued a statement to the Associated Press clarifying that Digit didn’t deactivate itself because of the nature of the work, as some viewers had interpreted the viral clip.

Agility isn’t the only robotics company to share its failures with an online audience. Boston Dynamics, developer of the Spot and Atlas robots, may have been the first company to be accused of “robot abuse” because of its videos, and the company frequently includes footage of its research robots being unsuccessful as well as successful on YouTube. And now that there are 1,100 Spots out in the world being useful, falls happen both more frequently and more visibly.

Even though falling robots aren’t a new thing, some new(ish) technological advances have changed the nature of falling. First, both Boston Dynamics and Agility Robotics have human-scale bipedal robots for which not falling seems pretty normal. This is a relatively recent development. Although a number of companies are working on humanoids, the Agility and Boston Dynamics humanoids are (as far as we are aware) the only ones that can routinely handle untethered dynamic walking.

“Sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do.”
—Jonathan Hurst, Agility Robotics

The other important advance is that these humanoid robots are usually able to fall without destroying themselves. During the DARPA Robotics Challenge in 2015, falling generally meant doom for the competitors, with one exception: Carnegie Mellon University’s CHIMP, which was built like a literal tank. Since then, roboticists have tried adding things like armor and airbags to keep a falling robot in one piece. But now, these robots can fall with minimal drama and get back up again. If they do suffer damage, they can be easily fixed.

And yet, even though falling has become much less of a big deal for the roboticists, it’s still a big deal for the general public, as these viral videos of robots falling down prove. We recently spoke with Agility Robotics’ Chief Robot Officer Jonathan Hurst and Head of Customer Experience Bambi Brewer, as well as Boston Dynamics CTO Aaron Saunders to understand why that is, and whether they think things are likely to change anytime soon.

Boston Dynamics’s Aaron Saunders, and Agility Robotics’ Jonathan Hurst and Bambi Brewer on...

Why do you think people react so strongly to seeing robots fall over, especially bipedal robots?

Jonathan Hurst: People post funny videos of pets or kids, making some expression or having a reaction that you can identify with. It’s even funnier when it’s a robot that wouldn’t typically do that. And so when Digit [at ProMat] seems to be just like, “I’m so tired of doing this work” and falls down, people are like, “I understand you, robot!” But [seeing robots behave that way] is going to become more common, and when people see this and it becomes just a regular part of their experience, the novelty will wear off.

Bambi Brewer: People who make robots spend a lot of time trying to present them at their best. The way robots move does seem very repetitive, very scripted. I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.

“People perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.”
—Aaron Saunders, Boston Dynamics

How different is falling for robots than for humans?

Hurst: The way I think about the robot right now is like a two-and-a-half-year-old child. They fall more often than adults do, and it’s not terribly concerning. Sometimes they skin their knee. And sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do. Physics is still true, though, and so it’s probably going to be on the same order of magnitude as how often people fall. It won’t be rare.

When you think about this ‘physics is true’ thing—that’s actually where robots will be able to have superhuman capabilities. A robot is going to be close to human strength and close to human speed, but you can take much bigger risks with a robot because you don’t really care that much if you break something.

Fundamentally, I don’t care if the robot breaks. I mean, I care a little bit, but I care a lot if any of our employees were to fall.

Do you think that humanoid robots falling in nonhuman ways might be part of why people react so strongly to these videos?

Aaron Saunders: We have a massive metal frame around the front of Atlas. It’s okay if it face-plants. It tucks its limbs in to protect them and other parts of the robot. A human would do the opposite—we put our limbs out and try to protect our heads. Robots can handle certain types of impacts and forces better than humans can. We have a lot of conversations around how people perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.

“I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.”
—Bambi Brewer, Agility Robotics

How normal is it for your robot to fall?

Saunders: Almost everything we do on Atlas is about pushing some limit. We don’t shy away from falling, because staying in a safe place means leaving a lot on the table in terms of understanding the performance of the machine and how to solve problems. In our development work, it falls all the time, both because we’re pushing it and because there’s very little risk or hazard—we’re not delivering Atlas out into the world.

On a long flat sidewalk, I don’t think Atlas would fall in a statistically relevant way. People think back to the video of robots falling all over the place at the DARPA Robotics Challenge, and that’s not the type of falling we worry about now.

For Spot, falling can be more of a risk, because it is out in the world. On a weekly basis, our internal fleet of Spots is walking about 2,000 kilometers, and we also have them in these test cells where they’re walking on rocks, on grates, over obstacles, and on slippery floors. We want to robustly test all of this stuff and try to drive those cases of falling down to their minimums.

“If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around them when they fall.”
—Jonathan Hurst, Agility Robotics

How big of a deal is it for your robot to fall?

Hurst: Digit was designed to fall. That’s one of the reasons that it has arms—to be able to survive a fall. When we were first designing the robot, we said, okay, at some point the robot’s going to fall, how can we protect it? We calculated how much padding we would need to minimize the acceleration on the electronic components. It turned out that we would have needed several inches of padding, and Digit would have ended up looking like the Michelin Man.

The only realistic way to have Digit safely decelerate was to have an appendage that’s going to stick out and absorb that fall. And where is the best place to locate that appendage? You get the same answer as you do when you think about inertial actuation and bimanual manipulation. Digit’s arms are where they are not because we’re trying to build a humanoid, but because we’re trying to solve locomotion challenges, manipulation challenges, and making sure that we can catch the robot when it falls.

Was there a point during the development of your robot where falling went from normal to unusual?

Saunders: The thing that really took us from worrying about normal walking to feeling pretty good about normal walking is when we pushed aggressively into things that went way beyond walking.

To jump and land successfully, we needed to develop control algorithms that could accommodate all of the mass and the dynamics of the robot. It was no longer about carefully picking where you put your foot for each step, it was about coordinating all of that moving mass in a really robust way. So when Atlas started jumping and doing parkour, it made walking easier too. A few weeks ago, we had a new team member go back and apply some of the latest control algorithms that we’re using for parkour to our standing algorithm. With those new algorithms we saw big improvements in the robot’s ability to handle disturbances from a stand—if somebody were to shove the robot, this new controller is able to think and reason about all of its dynamics, resulting in massive gains in how Atlas reacts.

“We need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.”
—Bambi Brewer, Agility Robotics

At this point, how much is falling just an “oops,” and how much is it a learning opportunity?

Hurst: We’re always looking for bugs that we can iron out. Digit’s collapse at ProMat was one. In this scenario, there really should not have been an emergency stop.

Brewer: Falls are points at which somebody is filing a bug card, or looking through the logs. They’re trying to figure out what happened, and how to make sure it doesn’t happen again. At ProMat, there was something wrong with an encoder in the arm. It’s been updated now. It was a bug that hadn’t occurred before. Now if that happens, the robot’s arm will freeze, but the robot will remain upright.

Saunders: On Spot, I think there are relatively few learning opportunities these days. We know pretty well what Spot’s capable of, in what situations a fall might occur, what the robot is likely to do in those situations, and how it’s going to recover. We designed Spot to be able to fall robustly and not break, and to get up from falls. Obviously, there are some extreme cases—one of our industrial customers had a need for Spot to cross a soapy floor, which is about as close as you can get to walking on ice, a challenge for anything with legs. So our control team set up a slippery environment in our lab, using cooking oil on plastic, and then just started “robustifying.” They figured out how to detect slips and adapt the gait of the robot, and went from a situation where falling was regular to one where falling was infrequent.

For Atlas, generally the falling state happens after the part that we care about. What we’re learning there is what went wrong right before the fall. If we’re working on one of Atlas’s aerial tricks—say, something that we’ve never landed before—then of course we’re doing a ton of work to figure out why falls happen. But if we’re just walking around the lab, and there was some misstep, I don’t think people stress out too much, and we just stand it back up and reset it and go again.

“Robots should be able to fall. We should give them a break when they do.”
—Aaron Saunders, Boston Dynamics

We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time. Our robot falls a lot, and one of the things we decided a long time ago is that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.

If a robot knows that it’s about to fall, what can it do to protect itself, and protect people around it?

Hurst: There are strategies when you know you’re about to fall. If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around them when they fall.

Brewer: In addition to the robot falling safely, we need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.

Hurst: The other thing is to try to fall sooner rather than later. If you’re not sure whether you can stay balanced, you might end up taking a step to try to correct, and then another step, and then maybe you’re moving in a direction that’s not all that controlled. So when it starts to lose its balance, we can tell the robot, “Just fall. You’ll get back up.”

Saunders: We have these detections inside of our control system that trigger when the robot starts doing something that the controller didn’t ask it to do. Maybe the velocity is starting to do something, or the robot is at some angle that it isn’t supposed to be. If that makes us think that a fall might be happening, we’ll run a different controller to try to stop it from falling—Atlas might decide to swing its arms, or move its upper body, or throw its leg out. And if that fails, there’s another control layer for when the robot is really falling. That last layer is about putting the robot in a state that sets its pose and joint stiffnesses to basically ensure that it will do minimal damage to itself and the world. How exactly we do this is different for each robot and for each type of fall. If you comb through videos of Atlas, you might see the robot tucking itself up into a little bit of a ball—that’s a shape and a set of joint stiffnesses that help it mitigate impacts, and also help protect things around it.
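
Neither company has published its fall-management code, but the escalating layers Saunders describes (nominal tracking, an active recovery controller, then a committed safe fall) map naturally onto a small state machine. Below is a minimal, purely illustrative Python sketch; every name and threshold is invented, and none of this is Atlas’s actual controller.

    # Minimal sketch of a layered fall-response monitor, loosely following the
    # escalation Saunders describes. All names and thresholds are invented.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Mode(Enum):
        NOMINAL = auto()    # tracking the commanded motion
        RECOVERY = auto()   # trying to arrest the fall (arm swing, step out)
        SAFE_FALL = auto()  # fall is unavoidable: tuck and soften the joints

    @dataclass
    class State:
        tilt_deg: float     # deviation of the torso from commanded orientation
        tilt_rate: float    # deg/s, how fast that deviation is growing

    def select_mode(s: State) -> Mode:
        """Escalate through control layers as the tracking error grows."""
        if s.tilt_deg < 5.0:                          # doing what was asked
            return Mode.NOMINAL
        if s.tilt_deg < 20.0 and s.tilt_rate < 60.0:  # plausibly recoverable
            return Mode.RECOVERY
        return Mode.SAFE_FALL  # commit to the fall early, as Hurst suggests

    def safe_fall_command() -> dict:
        """Final layer: tucked pose and low joint stiffness to limit damage."""
        return {"pose": "tuck", "joint_stiffness": "low"}

    if __name__ == "__main__":
        for s in [State(2, 5), State(12, 30), State(35, 120)]:
            print(s, "->", select_mode(s).name)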

Sometimes, though, these falls happen because the robot catastrophically breaks. With Atlas, we definitely have instances where we break the foot off. And at that point, I don’t have good answers.

The next time a video of a humanoid robot falling over goes viral, whether it’s your robot or someone else’s, what is one thing you’d like people watching that video to know?

Hurst: If Digit falls, I think it’d be great for people to know that the reaction from the engineers who built that robot would not be, “our robot fell over and we didn’t expect that!” It would just be a shrug.

Brewer: I’d like people to know that when a robot is actually out in the world doing real things, unexpected things are going to happen. You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.

Saunders: I think people should recognize that it’s normal for equipment to sometimes fail. Equipment can be fixed, equipment can be improved, and over time, equipment gets more and more reliable. And so, when people see these failures, it may be a situation that the robot has never experienced. They should know that we are gathering all that information and that we’re continuously improving and iterating, and that what they’re seeing now doesn’t represent the end state. It just represents where the technology is today.

I also think that there has to be some balance between our expectations for what robots can do, and the process for getting them to do it. People will come to me and they’ll want a robot that can do amazing things that robots don’t do yet, but they’re very nervous if a robot fails. If we want our robots to do amazing things and enrich our lives and be our tools in the workforce, we’re going to need to build those capabilities over time, because this is emerging technology, not established technology.

Robots should be able to fall. We should give them a break when they do. It’s okay if we laugh at them. But we should also work hard to make our products safe and reliable and things that we can trust, because if we don’t trust our robots, we won’t use them.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.

Enjoy today’s videos!

In this paper, we introduce ROSE, a novel soft gripper that can embrace the object and squeeze it by buckling a soft, funnel-like, thin-walled membrane around the object by simple rotation of the base. Thanks to this design, the ROSE hand can adapt to a wide range of objects that can fall within the funnel, and handle them with pleasant gripping force.

[ Paper ]

Thanks, Van!

Legged robots are designed to perform highly dynamic motions. However, it remains challenging for users to retarget expressive motions onto these complex systems. In this paper, we present a Differentiable Optimal Control (DOC) framework that facilitates the transfer of rich motions from either animals or animations onto these robots.

[ Disney Research ]

We present a team of legged robots for scientific exploration missions in challenging planetary analog environments. The paper was published in Science Robotics, and we deployed this approach at the ESA / ESRIC Space Resources Challenge.

[ ETHZ RSL ]

I physically cringed watching this happen.

[ NORLab ]

At Agility, we make robots that are made for work. Our robot Digit works alongside us in spaces designed for people. Digit handles the tedious and repetitive tasks meant for a machine, allowing companies and their people to focus on the work that requires the human element.

[ Agility Robotics ]

Admit it—you’ve done this with your robot.

[ AeroVironment ]

This looks like a fun game: Can you keep a simulated humanoid from falling over by walking for it?

[ RoboDesign Lab ]

NSFW.

[ Hardcore Robotics ]

I am including this because it’s Scotland, and you deserve to see it.

[ DJI ]

Team RoMeLa’s ARTEMIS vs. RoboCup Champions Team NimbRo. This is an exhibition game generously offered by Team NimbRo after an unfortunate incident of an illegal game controller that interfered with Team RoMeLa’s last official game.

[ RoMeLa ]

Two leading robotics pioneers share how they are driving disruption in wildly different industries: health care and construction. We hear how these innovations will impact businesses and society, and what it takes to be a robotics entrepreneur in today’s economic climate. Vivian Chu, cofounder and CTO, Diligent Robotics, and Tessa Lau, founder and CEO, Dusty Robotics.

[ Fortune ]

Sanctuary AI spends an hour and 20 minutes answering six questions from social media.

[ Sanctuary AI ]



Introduction: Human-in-the-loop optimization algorithms have proven useful in optimizing complex interactive problems, such as the interaction between humans and robotic exoskeletons. Specifically, this methodology has been proven valid for reducing metabolic cost while wearing robotic exoskeletons. However, many prostheses and orthoses still consist of passive elements that require manual adjustments of settings.

Methods: In the present study, we investigated if human-in-the-loop algorithms could guide faster manual adjustments in a procedure similar to fitting a prosthesis. Eight healthy participants wore a prosthesis simulator and walked on a treadmill at 0.8 m/s under 16 combinations of shoe heel height and pylon height. A human-in-the-loop optimization algorithm was used to find an optimal combination for reducing the loading rate on the limb contralateral to the prosthesis simulator. To evaluate the performance of the optimization algorithm, we used a convergence criterion. We evaluated the accuracy by comparing it against the optimum from a full sweep of all combinations.

Results: In five out of the eight participants, the human-in-the-loop optimization reduced the time taken to find an optimal combination; however, in three participants, the human-in-the-loop optimization either converged only by the last iteration or did not converge at all.

Discussion: Findings from this study show that the human-in-the-loop methodology could be helpful in tasks that require manually adjusting an assistive device, such as optimizing an unpowered prosthesis. However, further research is needed to achieve robust performance and evaluate applicability in persons with amputation wearing an actual prosthesis.
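
As a concrete (and greatly simplified) illustration of the approach, the sketch below runs a bandit-style human-in-the-loop search over the 16 heel-height/pylon-height combinations described above. The response surface and noise are invented stand-ins for real walking trials, and the acquisition rule is our own simplification, not the authors’ algorithm.

    # Toy human-in-the-loop search over a 4x4 grid of prosthesis settings.
    # measure() stands in for one walking trial (lower loading rate is better);
    # its "true" surface is invented purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    combos = [(h, p) for h in range(4) for p in range(4)]  # 16 combinations
    means = np.zeros(len(combos))   # running mean loading rate per combo
    counts = np.zeros(len(combos))  # trials spent on each combo

    def measure(h, p):
        true = (h - 2) ** 2 + (p - 1) ** 2   # invented response surface
        return true + rng.normal(0.0, 0.5)   # trial-to-trial noise

    for _ in range(40):  # a fixed budget of walking trials
        # Pick the combo with the lowest optimistic estimate (LCB-style),
        # so untried settings get explored before the search commits.
        bonus = 1.0 / np.sqrt(counts + 1.0)
        i = int(np.argmin(means - bonus))
        y = measure(*combos[i])
        counts[i] += 1
        means[i] += (y - means[i]) / counts[i]

    best = combos[int(np.argmin(np.where(counts > 0, means, np.inf)))]
    print("suggested setting (heel, pylon):", best)

A full sweep would spend equal trials on all 16 combinations; the point of the human-in-the-loop variant is to concentrate a limited trial budget on promising settings.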

The demands of traditional industrial robotics differ significantly from those of space robotics. While industry requires robots that can perform repetitive tasks with precision and speed, the space environment needs robots to cope with uncertainties, dynamics, and communication delays or interruptions, similar to human astronauts. These demands make space robotics a well-suited application for compliant robotics and behavior-based programming. Pose Target Wrench Limiting (PTWL) is a compliant behavior paradigm developed specifically to meet these demands. PTWL controls a robot by moving a virtual attractor to a target pose. The attractor applies virtual forces, based on stiffness and damping presets, to an underlying admittance controller. Guided by virtual forces, the robot will follow the attractor until safety conditions are violated or success criteria are met. We tested PTWL on a variety of quasi-static tasks that may be useful for future space operations. Our results demonstrate that PTWL is an extremely powerful tool. It makes teleoperation easy and safe for a wide range of quasi-static tasks. It also facilitates the creation of semi-autonomous state machines that can reliably complete complex tasks with minimal human intervention.
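
The core PTWL mechanism, a virtual spring-damper attractor whose force is capped before driving an admittance law, can be shown in one dimension. This is a hedged sketch with made-up gains and limits, not the authors’ implementation.

    # 1-D sketch of Pose Target Wrench Limiting: a virtual attractor pulls the
    # robot toward a target, the virtual force is clipped to a wrench limit,
    # and an admittance law turns force into motion. All constants are invented.
    K, D = 50.0, 20.0   # virtual stiffness and damping presets
    M = 5.0             # virtual mass of the admittance controller
    F_MAX = 20.0        # wrench limit: cap on the virtual force
    dt = 0.01

    x, v = 0.0, 0.0     # robot position and velocity
    attractor = 1.0     # attractor placed at the target pose

    for step in range(1000):
        f = K * (attractor - x) - D * v      # virtual spring-damper force
        f = max(-F_MAX, min(F_MAX, f))       # wrench limiting
        v += (f / M) * dt                    # admittance: force -> motion
        x += v * dt
        if abs(attractor - x) < 1e-3 and abs(v) < 1e-3:
            print(f"success criterion met at step {step}, x = {x:.4f}")
            break

Because the virtual force is clipped, contact forces stay bounded while the robot follows the attractor; the check that ends this loop is a stand-in for the success criteria mentioned in the abstract.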

3D reconstruction of deformable objects in dynamic scenes forms the fundamental basis of many robotic applications. Existing mesh-based approaches compromise registration accuracy, and lose important details due to interpolation and smoothing. Additionally, existing non-rigid registration techniques struggle with unindexed points and disconnected manifolds. We propose a novel non-rigid registration framework for raw, unstructured, deformable point clouds purely based on geometric features. The global non-rigid deformation of an object is formulated as an aggregation of locally rigid transformations. The concept of locality is embodied in soft patches described by geometrical properties based on the SHOT descriptor and its neighborhood. By considering the confidence score of pairwise association between soft patches of two scans (not necessarily consecutive), a computed similarity matrix serves as the seed to grow a correspondence graph, which leverages rigidity terms defined in As-Rigid-As-Possible for pruning and optimization. Experiments on simulated and publicly available datasets demonstrate the capability of the proposed approach to cope with large deformations blended with numerous missing parts in the scan process.
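
As a concrete illustration of just the seeding step, the sketch below builds a cosine-similarity matrix between SHOT-like patch descriptors from two scans and keeps mutual nearest neighbors as candidate correspondences. The descriptors are random stand-ins, and the correspondence-graph growth and As-Rigid-As-Possible pruning are omitted.

    # Seed correspondences between soft patches of two scans from descriptor
    # similarity. Real SHOT descriptors (352-D) would come from the point
    # clouds; random vectors stand in for them here.
    import numpy as np

    rng = np.random.default_rng(1)
    desc_a = rng.normal(size=(30, 352))  # 30 patches in scan A
    desc_b = rng.normal(size=(28, 352))  # 28 patches in scan B

    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    S = a @ b.T                          # pairwise cosine similarity (30 x 28)

    nn_ab = S.argmax(axis=1)             # best match in B for each patch in A
    nn_ba = S.argmax(axis=0)             # best match in A for each patch in B
    seeds = [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
    print(len(seeds), "mutual-nearest-neighbor seed correspondences")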

Introduction: Trunk-like continuum robots have wide applications in manipulation and locomotion. In particular, trunk-like soft arms exhibit high dexterity and adaptability very similar to the creatures of the natural world. However, owing to the continuum and soft bodies, their performance in payload and spatial movements is limited.

Methods: In this paper, we investigate the influence of key design parameters on robotic performance. It is verified that a larger workspace, lateral stiffness, payload, and bending moment could be achieved with adjustments to soft materials’ hardness, the height of module segments, and arrayed radius of actuators.

Results: Notably, a 55% increase in arrayed radius would enhance the lateral stiffness by 25% and the bending moment by 55%. An 80% increase in segment height would enlarge the elongation range by 112% and the bending range by 70%. Around 200% and 150% increments in the segment’s lateral stiffness and payload forces, respectively, could be obtained by tuning the hardness of soft materials. These relations enable the design customization of trunk-like soft arms, in which the tapering structure ensures stability via the stocky base for an impact reduction of 50% compared to that of the tip, and ensures dexterity of the long tip for a relatively larger bending range of over 400% compared to that of the base.

Discussion: The complete methodology of the design concept, analytical models, simulation, and experiments is developed to offer comprehensive guidelines for trunk-like soft robotic design and enable high performance in robotic manipulation.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Fourier Intelligence, a global technology company specialising in rehabilitation robotics and artificial intelligence, unveiled its first-generation humanoid robot GR-1 at the 2023 World Artificial Intelligence Conference (WAIC) in Shanghai.

Standing 1.65 metres tall and weighing 55 kilograms, GR-1 has 40 degrees of freedom (actuators) all over its body. With a peak torque of 300 N·m generated by a joint module installed at the hip, the robot is able to walk at 5 kilometres per hour and carry a load of 50 kilograms.

According to the company, mass production of GR-1 is slated to begin around the end of this year.

[ Fourier Intelligence ]

Here it is, the RoboCup 2023 Middle-Size League final match between Tech United and Falcons.

Tech United has also put together a couple of extra videos from RoboCup talking about some new tech that they’re working on.

[ Tech United ]

RoboCup 2023 Humanoid AdultSize Final: NimbRo vs. HERoEHS.

[ NimbRo ]

So it turns out that RoMeLa’s ARTEMIS has a Turbo Mode?

[ RoMeLa ]

What if animals were substituted with biohybrid robots? The replacement of pets with bioinspired robots has long existed within technological imaginaries and HRI research. Addressing developments of bioengineering and biohybrid robots, we depart from such replacement to study futures inhabited by animal-robot hybrids. In this paper, we introduce a speculative concept of assembling and eating biohybrid robots.

[ Paper ]

With so much coverage of littlish electric quadrupeds, you kind of forget that big hydraulic monsters exist, too.

[ IIT ]

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, swim—all in response to a simple, easy-to-apply magnetic field.

[ MIT ]

Huh, that’s an interesting way of getting a quadrotor to translate without rolling.

[ MAVLab ]

With this system developed at EPFL, surgeons can now perform four-handed surgical interventions using two robot arms controlled by haptic interface pedals. This unprecedented advancement in the field of laparoscopic surgery aims to reduce the workload of surgeons while improving precision and safety. A single practitioner can accomplish tasks typically carried out by two or three individuals, thereby enhancing accuracy and coordination. Clinical trials are currently underway in Geneva.

[ EPFL ]

With a robot arm, X20 is capable of opening doors, picking up objects, and flipping switches and valves, or maybe a fist pump, clinking glasses, and shaking hands.

[ DeepRobotics ]

The real reason why people become roboticists.

[ Kawasaki ]

Can the multicellular robot, Loopy, move around in its environment? Yes, this video shows it can. Can it move efficiently and naturally? That’s what we are working on...

[ WVUIRL ]

Together, Daimler Truck and Torc Robotics are creating the world’s first great generation of robotic semi trucks. These trucks are currently in development, with the end goal of producing a highly efficient, safe, and sustainable product for fleet owners, owner operators, and all levels of highway user. Built with level 4 autonomy, these driverless trucks are driving the future of freight.

[ Torc Robotics ]

Okay this is kind of a long interview about Pollen Robotics’ Reachy so if you don’t want to sit through all of it just skip ahead to 6:10 because I lol’d.

Don’t worry, Reachy recovers at 7:28.

[ Pollen Robotics ]

The Robotics: Science and Systems 2023 livestream archive is now online; here’s day 1, and you’ll find the other days on the YouTube channel.

[ RSS 2023 ]



Humans are increasingly coming into direct physical contact with robots in the context of object handovers. The technical development of robots is progressing so that handovers can be better adapted to humans. An important criterion for successful handovers between robots and humans is the predictability of the robot for the human. The better humans can anticipate the robot’s actions, the better they can adapt to them and thus achieve smoother handovers. In the context of this work, it was investigated whether a highly adaptive transport method of the object, adapted to the human hand, leads to better handovers than a non-adaptive transport method with a predefined target position. To ensure robust handovers at high repetition rates, a Franka Panda robotic arm with a gripper equipped with an Intel RealSense camera and capacitive proximity sensors in the gripper was used. To investigate the handover behavior, a study was conducted with n = 40 subjects, each performing 40 handovers in four consecutive runs. The dependent variables examined are physical handover time, early handover intervention before the robot reaches its target position, and subjects’ subjective ratings. The adaptive transport method does not result in significantly higher mean physical handover times than the non-adaptive transport method. The non-adaptive transport method does not lead to a significantly earlier handover intervention in the course of the runs than the adaptive transport method. Trust in the robot and the perception of safety are rated significantly lower for the adaptive transport method than for the non-adaptive transport method. The physical handover time decreases significantly for both transport methods within the first two runs. For both transport methods, the percentage of handovers with a physical handover time between 0.1 and 0.2 s increases sharply, while the percentage of handovers with a physical handover time of >0.5 s decreases sharply. The results can be explained by theories of motor learning. From the experience of this study, an increased understanding of motor learning and adaptation in the context of human-robot interaction can be of great benefit for further technical development in robotics and for the industrial use of robots.



This is a sponsored article brought to you by Robotnik.

In today’s ever-evolving world, ensuring the safety and security of our surroundings has become an utmost priority. Traditional methods of surveillance and security often fall short when it comes to precision, reliability and adaptability. Recognizing this need for a smarter solution, Robotnik, a robotics company fully committed to precision engineering and shaping the future with its groundbreaking advancements, has developed the RB-WATCHER: a collaborative mobile robot designed specifically for surveillance and security tasks. With its advanced features and cutting-edge technology, RB-WATCHER is set to revolutionize the way we approach surveillance in various environments.

Intelligence, precision, functions and reliability

RB-WATCHER’s intelligence lies not only in its ability to navigate autonomously but also in its suite of intelligent functions. Whether detecting human presence, monitoring a designated area, identifying intruders, collecting crucial data or identifying potential fire outbreaks, RB-WATCHER’s advanced algorithms and sensors ensure unparalleled precision. This mobile robot autonomously performs a wide range of surveillance tasks with exceptional accuracy.

Surveillance and security tasks demand utmost precision and reliability to detect, prevent and overcome potential hazards and risks. RB-WATCHER has been meticulously engineered to meet these requirements, delivering unparalleled performance in diverse operating environments.

RB-WATCHER: Autonomous Mobile Robot for Surveillance & Security [video on youtu.be]

Specifications that unleash the power of autonomous capabilities

This surveillance and security robot stands out for its autonomous capabilities, allowing it to operate efficiently even in challenging and dynamic environments. Equipped with state-of-the-art inspection and navigation sensorization, this robotic platform combines multiple technologies to ensure seamless performance. Among its impressive array of sensors are the bi-spectral camera, front camera, RTK GPS and microphone, all working in harmony to provide comprehensive surveillance coverage. Let’s take a deeper look at the RB-WATCHER’s specifications.

Standing out among its competitors, RB-WATCHER’s inspection sensors play a pivotal role in its surveillance capabilities. The bi-spectral Pan-Tilt-Zoom camera captures high-resolution images, while the microphone provides real-time audio information, boosting situational awareness and threat detection capabilities.

Connectivity is seamless with RB-WATCHER, as it comes equipped with a 4G router for real-time data transmission. Additionally, optional 5G router and Smart Radio support ensures compatibility with the latest communication technologies, enabling RB-WATCHER to remain at the forefront of connectivity advancements.

Navigation and localization are critical aspects of RB-WATCHER’s performance. Equipped with a front depth camera, Inertial Measurement Unit (IMU), 3D LIDAR and advanced 3D SLAM and 2D SLAM technologies, RB-WATCHER achieves precise movement and accurate mapping of its surroundings. The integration of Real-Time Kinematic (RTK) GPS further enhances its position awareness, making it an ideal choice for large outdoor spaces.

RB-WATCHER also offers a host of impressive technical specifications that cement its position as an industry leader. With dimensions of 790 x 614 x 856 mm and weighing 62 kg, RB-WATCHER strikes a balance between compactness and power. Its payload capacity of up to 65 kg ensures it can handle various surveillance tasks effectively, while its top speed of 2.5 m/s allows for swift and efficient coverage of designated areas. This versatility makes RB-WATCHER suitable for both indoor and outdoor environments, further expanding its range of applications.

Built to withstand challenging conditions, RB-WATCHER boasts an IP53 enclosure class, offering protection against dust and water spray. Its temperature range of -10 °C to +45 °C enables reliable operation in a wide array of environments. Additionally, the robot’s ability to navigate slopes of up to 80% showcases its exceptional adaptability and ensures comprehensive surveillance coverage.

Surveillance & Security at its finest

A vital component of the service portfolio offered by private security and surveillance companies, RB-WATCHER excels at executing surveillance and security tasks across various industries. It provides a powerful solution that enables these companies to enhance their offerings and benefit their clients in several ways. The autonomous mobile robot excels in patrolling predetermined areas, detecting objects and individuals, and identifying potential fire hazards. Its versatility and reliability make the RB-WATCHER an ideal choice for private security and surveillance companies, empowering them to deliver comprehensive and advanced services to their clients.

Reliability: A Hallmark of RB-WATCHER

When it comes to security and surveillance robots, reliability is non-negotiable. Robotnik’s RB-WATCHER instills confidence through its robust construction and resilient performance. Built to withstand demanding environments, RB-WATCHER operates flawlessly, consistently delivering accurate data for effective decision-making. With this reliable companion by your side, you can rest assured that your surveillance and security tasks are in capable hands.

Free Navigation for unrestricted surveillance

RB-WATCHER breaks free from the limitations of conventional surveillance methods. With its free navigation capabilities, this collaborative mobile robot efficiently traverses various terrains, overcoming obstacles effortlessly. By dynamically adapting to changing environments, RB-WATCHER guarantees comprehensive surveillance coverage, leaving no stone unturned in its quest for security.

Conclusion

Robotnik’s RB-WATCHER sets a new standard in the realm of security and surveillance robots. By combining precision, reliability and autonomous capabilities, RB-WATCHER redefines the way we approach surveillance tasks in a rapidly changing world. With its impressive sensorization, versatility and intelligent functions, RB-WATCHER stands as a testament to Robotnik’s commitment to innovation and safety. As we continue to embrace technological advancements, RB-WATCHER paves the way for a safer, more secure future.



Introduction: In Interactive Task Learning (ITL), an agent learns a new task through natural interaction with a human instructor. Behavior Trees (BTs) offer a reactive, modular, and interpretable way of encoding task descriptions but have so far seen little use in robotic ITL settings. Most existing approaches that learn a BT from human demonstrations require the user to specify each action step-by-step or do not allow for adapting a learned BT without repeating the entire teaching process from scratch.

Method: We propose a new framework to directly learn a BT from only a few human task demonstrations recorded as RGB-D video streams. We automatically extract continuous pre- and post-conditions for BT action nodes from visual features and use a Backchaining approach to build a reactive BT. In a user study on how non-experts provide and vary demonstrations, we identify three common failure cases of a BT learned from potentially imperfect initial human demonstrations. We offer a way to interactively resolve these failure cases by refining the existing BT through interaction with a user over a web interface. Specifically, failure cases or unknown states are detected automatically during the execution of a learned BT, and the initial BT is adjusted or extended according to the provided user input.

Evaluation and results: We evaluate our approach on a robotic trash disposal task with 20 human participants and demonstrate that our method is capable of learning reactive BTs from only a few human demonstrations and interactively resolving possible failure cases at runtime.
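
For readers unfamiliar with behavior trees, here is a toy Python rendering of the Backchaining idea the paper builds on: each post-condition is guarded by a Fallback node that checks whether the condition already holds before running the action that achieves it. The trash-disposal details and all names are invented for illustration.

    # Toy reactive behavior tree: Fallback and Sequence nodes plus
    # condition/action leaves. Actions here always succeed, which is
    # exactly the simplification a real robot cannot make.
    SUCCESS, FAILURE = "SUCCESS", "FAILURE"

    class Fallback:
        def __init__(self, children): self.children = children
        def tick(self, world):
            for c in self.children:
                if c.tick(world) == SUCCESS:
                    return SUCCESS
            return FAILURE

    class Sequence:
        def __init__(self, children): self.children = children
        def tick(self, world):
            for c in self.children:
                if c.tick(world) == FAILURE:
                    return FAILURE
            return SUCCESS

    class Condition:
        def __init__(self, key): self.key = key
        def tick(self, world): return SUCCESS if world.get(self.key) else FAILURE

    class Action:
        def __init__(self, name, effect): self.name, self.effect = name, effect
        def tick(self, world):
            print("executing:", self.name)
            world[self.effect] = True
            return SUCCESS

    # Backchained tree: to achieve "trash_in_bin", first ensure "holding_trash".
    tree = Fallback([
        Condition("trash_in_bin"),
        Sequence([
            Fallback([Condition("holding_trash"),
                      Action("pick_trash", "holding_trash")]),
            Action("drop_in_bin", "trash_in_bin"),
        ]),
    ])

    world = {}
    tree.tick(world)
    print("world state:", world)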

A high degree of freedom (DOF) benefits manipulators by presenting various postures when reaching a target. Using a tendon-driven system with an underactuated structure can provide flexibility and weight reduction to such manipulators. The design and control of such a composite system are challenging owing to its complicated architecture and modeling difficulties. In our previous study, we developed a tendon-driven, high-DOF underactuated manipulator inspired by an ostrich neck, referred to as the Robostrich arm. This study particularly focused on the control problems and simulation development of such a tendon-driven high-DOF underactuated manipulator. We proposed a curriculum-based reinforcement-learning approach. Inspired by human learning, progressing from simple to complex tasks, the Robostrich arm can obtain manipulation abilities by step-by-step reinforcement learning, ranging from simple position-control tasks to practical application tasks. In addition, an approach was developed to simulate tendon-driven manipulation with a complicated structure. The results show that the Robostrich arm can continuously reach various targets and simultaneously maintain its tip at the desired orientation while mounted on a mobile platform in the presence of perturbation. These results show that our system can achieve flexible manipulation even in the presence of vibrations introduced by locomotion.

The variability in the shapes and sizes of objects presents a significant challenge for two-finger robotic grippers when it comes to manipulating them. Based on the chemistry of vitrimers (a new class of polymer materials that have dynamic covalent bonds, which allow them to reversibly change their mechanical properties under specific conditions), we present two designs for 3D-printed shape memory polymer-based shape-adaptive fingertips (SMP-SAF). The fingertips have two main properties needed for effective grasping. First, the ability to adapt their shape to different objects. Second, variable rigidity, allowing them to lock and retain this new shape without the need for any continuous external triggering system. Our two design strategies are: 1) A curved part, which is suitable for grasping delicate and fragile objects. In this mode and prior to gripping, the SMP-SAFs are straightened by the force of the parallel gripper and are adapted to the object by shape memory activation. 2) A straight part that takes on the form of the objects by contact force with them. This mode is better suited for gripping hard bodies and provides a more straightforward shape programming process. The SMP-SAFs can be programmed by heating them above the glass transition temperature (54 °C) via the Joule effect of an integrated electrically conductive wire or by using a heat gun, followed by reshaping through external forces (without human intervention), and subsequently fixing the new shape upon cooling. As the shape programming process is time-consuming, this technique suits adaptive sorting lines where the variety of objects is not changed from grasp to grasp, but from batch to batch.



It never gets any easier to watch: a control room full of engineers, waiting anxiously as the robotic probe they’ve worked on for years nears the surface of the moon. Telemetry from the spacecraft says everything is working; landing is moments away. But then the vehicle goes silent, and the control room does too, until, after an agonizing wait, the project leader keys a microphone to say the landing appears to have failed.

The last time this happened was in April, in this case to a privately funded Japanese mission called Hakuto-R. It was in many ways similar to crashes by Israel’s Beresheet and India’s Chandrayaan-2 in 2019. All three landers seemed fine until final approach. Since the 1970s, only China has successfully put any uncrewed ships on the moon (most recently in 2020); Russia’s last landing was in 1976, and the United States hasn’t tried since 1972. Why, half a century after the technological triumph of Apollo, have the odds of a safe lunar landing actually gone down?

The question has some urgency because five more landing attempts, by companies or government agencies from four different countries, could well be made before the end of 2023; the next, Chandrayaan-3 from India, is scheduled for launch as early as this week. NASA’s administrator, Bill Nelson, has called this a “golden age” of spaceflight, culminating in the landing of Artemis astronauts on the moon late in 2025. But every setback is watched uneasily by others in the space community.

2023 Possible Lunar Landings

India: Chandrayaan-3, from the Indian Space Research Organization, with a hoped-for launch in mid-July and, if that succeeds, a landing in August.

Chandrayaan-3 could be heading to the moon soon. Credit: ISRO

Russia: Luna-25, from the Roscosmos space agency, which currently says it plans an August launch.

United States: Nova-C IM-1, from a private Houston-based company, Intuitive Machines, currently targeted for launch in the third quarter of 2023.

United States: Peregrine Mission 1, from the Pittsburgh-based company Astrobotic Technology, is waiting for modifications to its Vulcan Centaur launch vehicle. A launch date of 4 May was put off; a new one has not been set.

Japan: SLIM (Smart Lander for Investigating Moon), from the JAXA space agency. An August launch date has been put off.

Intuitive Machines hopes to launch the Nova-C IM-1 this season. Credit: Intuitive Machines

Each of these missions is behind schedule, in some cases by years, and several could slip into 2024 or later.

The Fate of Hakuto-R Mission 1

A day after Hakuto-R went silent, an American spacecraft, Lunar Reconnaissance Orbiter, passed over the landing site; its imagery, compared with previous shots of the area, showed clearly that there had been a crash. The company running Hakuto-R, ispace, did an analysis of the crash and concluded that its software had perhaps been too clever for its own good.

According to ispace, the lander’s radar altimeter registered a sharp jump in altitude when the craft passed over a 3-kilometer-high cliff, later determined to be the rim of a crater. But the onboard computer had not been programmed for any cliff that high; it had been told that, in case of a large discrepancy between its measured and expected altitude, it should assume something was wrong with the radar altimeter and disregard its input. The computer, said ispace, therefore behaved as if the ship were near touchdown when it was actually 5 kilometers above the surface. It kept firing its engines, descending ever so gently, until its fuel ran out. “At that time, the controlled descent of the lander ceased, and it is believed to have free-fallen to the moon’s surface,” ispace said in a press release.
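
The failure mode reduces to a single branch in sensor fusion. Here is an illustrative reconstruction of the logic ispace described, written from the press-release summary; the rejection threshold and function names are hypothetical, and this is not ispace’s flight software.

```python
# Illustrative reconstruction of the fault-rejection branch ispace described.
REJECT_THRESHOLD_M = 1000.0  # assumed value; the real threshold is unpublished

def fused_altitude(radar_altitude_m: float, estimated_altitude_m: float) -> float:
    """Return the altitude the guidance loop should use."""
    discrepancy = abs(radar_altitude_m - estimated_altitude_m)
    if discrepancy > REJECT_THRESHOLD_M:
        # The flight computer assumed the radar altimeter had failed and
        # fell back on its internal estimate -- the fatal branch once the
        # crater rim made the radar reading jump.
        return estimated_altitude_m
    return radar_altitude_m

# Over the rim: radar says ~5,000 m up, the estimate says near-touchdown.
print(fused_altitude(5000.0, 80.0))  # -> 80.0: "descending gently" at 5 km
```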

The crash site of the privately mounted Japanese Hakuto-R Mission 1 lunar lander, imaged by NASA’s Lunar Reconnaissance Orbiter. Credit: NASA/Goddard Space Flight Center/Arizona State University

Takeshi Hakamada, the CEO of ispace, put a brave face on it. “We acquired actual flight data during the landing phase,” he said. “That is a great achievement for future missions.”

Will this failure be helpful to other teams trying to make landings? Only to a limited extent, they say. As the so-called new space economy expands to include startup companies and more countries, there are many collaborative efforts, but there is also heightened competition, so there’s less willingness to share data.

Better Technology, Tighter Budgets

“Our biggest challenges are that we are doing this as a private company,” says John Thornton, the CEO of Astrobotic, whose Peregrine lander is waiting to go. “Only three nations have landed on the moon, and they’ve all been superpowers with gigantic, unlimited budgets compared to what we’re dealing with. We’re landing on the moon for on the order of $100 million. So it’s a very different ballgame for us.”

To put US $100 million in perspective: Between 1966 and 1968, NASA surprised itself by safely landing five of its seven Surveyor spacecraft on the moon as scouts for Apollo. The cost at the time was $469 million. That number today, after inflation, would be about $4.4 billion.
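
The conversion is simple ratio arithmetic against the consumer price index. A quick check, using approximate annual-average CPI values (the CPI figures here are assumptions, not from the article):

```python
# Rough CPI-ratio check of the Surveyor cost figure quoted above.
cpi_1967, cpi_2023 = 33.4, 304.7  # approximate annual averages (assumed)
print(f"${469e6 * cpi_2023 / cpi_1967 / 1e9:.1f}B")  # ~$4.3B, near the cited $4.4B
```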

Surveyor’s principal way of determining its distance to the surface during landing was radar, a mature but sometimes imprecise technology. Swati Mohan, the guidance and navigation lead for NASA’s Perseverance rover landing on Mars in 2021, likened radar to “closing your eyes and holding your hands out in front of you.” So Astrobotic, for instance, has turned to Doppler lidar—laser ranging—which has about 10 times better resolution. It also uses terrain-relative navigation, or TRN, a vision-based system that takes rapid-fire images of the approaching ground and compares them with an onboard database of terrain images. Some TRN imagery comes from the same Lunar Reconnaissance Orbiter that spotted Hakuto-R.
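
TRN’s core step is locating the current camera frame inside an onboard map. The sketch below shows one generic way to do that with normalized template matching in OpenCV; it is an illustration of the technique, not Astrobotic’s implementation, and the map and frame here are random stand-ins.

```python
# Generic terrain-relative-navigation step: find the descent-camera frame
# inside an onboard orthomap by normalized cross-correlation.
import cv2
import numpy as np

def locate_in_map(onboard_map: np.ndarray, camera_frame: np.ndarray):
    """Return the (x, y) pixel offset of the frame within the map."""
    scores = cv2.matchTemplate(onboard_map, camera_frame, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return best_xy, best_score  # the offset feeds the navigation filter

# Usage sketch: in practice the map would come from LRO imagery and the
# frame from the descent camera; here both are random stand-ins.
lunar_map = np.random.randint(0, 255, (1024, 1024), np.uint8)
frame = lunar_map[300:428, 500:628]  # a 128x128 crop "seen" by the camera
print(locate_in_map(lunar_map, frame))  # -> ((500, 300), ~1.0)
```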

“Our folks are feeling good, and I think we’ve done as much as we possibly can to make sure that it’s successful,” says Thornton. But, he adds, “it’s an unforgiving environment where everything has to work.”



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Instead of one autonomous robot to fly, another to drive on land, and one more to navigate on water, a new hybrid drone can do all three. To carry out complex missions, scientists are increasingly experimenting with drones that can do more than just fly.

The idea for a drone capable of navigating land, air, and sea came when researchers at New York University Abu Dhabi’s Arabian Center for Climate and Environmental Sciences (ACCESS) noted they would like a drone “capable of flying out to potentially remote locations and sampling bodies of water,” says study lead author Dimitrios Chaikalis, a doctoral candidate at NYU Abu Dhabi.

Environmental research often “relies on sample collections from hard-to-reach areas,” Chaikalis says. “Flying vehicles can easily navigate to such areas, while being capable of landing on water and navigating on the surface allows for sampling for long hours with minimal energy consumption before flying back to its base.”

The new autonomous vehicle is a tricopter with three pairs of rotors for flight, three wheels for roaming on land, and two thrusters to help it move on water. The rubber wheels were 3D-printed directly around the body of the main wheel frame, eliminating the need for metal screws and ball bearings, which would risk rusting after exposure to water. The entire machine weighs less than 10 kilograms in order to comply with drone regulations.

A buoyant, machine-cut Styrofoam body was placed between the top of the machine, which holds the rotors, and its bottom, which holds the wheels and thrusters. This flotation device served as the machine’s hull in the water, and was shaped like a trefoil to leave room for the airflow of the rotors.

“The resulting vehicle is capable of traversing every available medium—air, water, ground—meaning you can eventually deploy autonomous vehicles capable of overcoming ever-increasing difficulties and obstacles,” Chaikalis says.

The drone possesses two open-source PX4 autopilot systems: one for the air, and the other for navigating both land and water. “Aerial navigation differs heavily from ground or water surface navigation, which actually bear a lot of similarities with each other,” Chaikalis says. “So we designed the ground and water surface navigation to both work with the same autopilot, changing only the motor output for each case.”

An Intel NUC computer served as the command module. The computer can switch between the two autopilots as needed, as well as interface with a radio transceiver and GPS. All these electronics were secured within a waterproof plastic casing.
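
The supervisor role the NUC plays can be sketched in a few lines. The following is a hedged illustration using pymavlink, assuming the two autopilots are reachable over local UDP links; the ports, the medium argument, and the arming helper are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of a companion computer holding links to two PX4 autopilots
# and addressing whichever matches the current medium. Blocks until a real
# autopilot heartbeats on the given ports.
from pymavlink import mavutil

LINKS = {
    "air":     mavutil.mavlink_connection("udpin:0.0.0.0:14550"),
    "surface": mavutil.mavlink_connection("udpin:0.0.0.0:14551"),  # ground + water
}

def active_link(medium: str):
    # Ground and water share one autopilot; only the motor outputs differ.
    return LINKS["air" if medium == "air" else "surface"]

def arm(medium: str):
    link = active_link(medium)
    link.wait_heartbeat()  # blocks until the selected autopilot responds
    link.mav.command_long_send(
        link.target_system, link.target_component,
        mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0,
        1, 0, 0, 0, 0, 0, 0)  # param1=1 arms the vehicle
```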

“Of course, you also have to get waterproof motors for the ground-vehicle wheels, since they’ll be fully submerged when on water,” Chaikalis says. “Such motors proved difficult to interface with commercial autopilot units, so we ended up also designing custom hardware and firmware for interfacing such communications.”

The drone can operate under radio control or autonomously on preprogrammed missions. Its lithium polymer batteries give it a flight time of 18 minutes.

In experiments, the Styrofoam hull absorbed water while afloat, increasing its weight by 20 percent within 30 minutes. The Styrofoam released this water during flight, albeit slowly, shedding that 20 percent over about 100 minutes. The scientists note that this significant weight variation needs to be accounted for in the autopilot design; alternatively, a water-resistant coating could be added, although that would permanently increase the overall weight.
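
For the autopilot to compensate, the varying mass has to be estimated somehow. Below is a toy model of the reported soak/dry figures that a thrust controller could feed into its gravity-compensation term; the linear rates and the baseline hull mass are assumptions, since the article reports only the endpoints.

```python
# Toy mass model of the soak/dry behavior reported above: +20% after 30 min
# afloat, released again over ~100 min of flight. Assumes linear rates.
HULL_DRY_KG = 1.0  # hypothetical Styrofoam hull mass

def hull_mass(minutes_afloat: float, minutes_flying: float) -> float:
    soaked = min(0.20, 0.20 * minutes_afloat / 30.0)    # uptake while afloat
    dried = min(soaked, 0.20 * minutes_flying / 100.0)  # release in flight
    return HULL_DRY_KG * (1.0 + soaked - dried)

print(hull_mass(30, 0))   # 1.2 kg right after half an hour on the water
print(hull_mass(30, 50))  # 1.1 kg halfway through drying out
```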

In addition, “although waterproof against splashes and light submersion, this is not yet a fully submersible design, meaning a failure of the flotation device could potentially be catastrophic,” Chaikalis says.

In the future, the researchers note they could optimize the hull to make it strong enough to withstand complex maneuvers and to minimize air drag during flight. They would also like to make the drone fully modular so they can easily change its capabilities by attaching or detaching modules from it.

“We imagine being capable of, for example, selecting to drop the ground mechanism behind if necessary to save power, then returning to it later to land,” Chaikalis says. “Or allow the water module to navigate on water, while the [unmanned aerial vehicle] returns to a nearby base for recharge and picking it up again later.”

A patent application is pending on the new drone. The scientists detailed their findings 9 June at the 2023 International Conference on Unmanned Aircraft Systems in Warsaw.


