Feed aggregator



When we hear about manipulation robots in warehouses, it’s almost always in the context of picking. That is, about grasping a single item from a bin of items, and then dropping that item into a different bin, where it may go toward building a customer order. Picking a single item from a jumble of items can be tricky for robots (especially when the number of different items may be in the millions). While the problem’s certainly not solved, in a well-structured and optimized environment, robots are nevertheless getting pretty good at this kind of thing.

Amazon has been on a path toward the kind of robots that can pick items since at least 2015, when the company sponsored the Amazon Picking Challenge at ICRA. And just a month ago, Amazon introduced Sparrow, which it describes as “the first robotic system in our warehouses that can detect, select, and handle individual products in our inventory.” What’s important to understand about Sparrow, however, is that like most practical and effective industrial robots, the system surrounding it is doing a lot of heavy lifting—Sparrow is being presented with very robot-friendly bins that make its job far easier than it would be otherwise. This is not unique to Amazon, and in highly automated warehouses with robotic picking systems it’s typical to see bins that either include only identical items or have just a few different items, to help the picking robot be successful.

Doing the picking task in reverse is called stowing, and it’s the way that items get into Amazon’s warehouse workflow in the first place.

But robot-friendly bins are simply not the reality for the vast majority of items in an Amazon warehouse, and a big part of the reason for this is (as per usual) humans making an absolute mess of things, in this case when they stow products into bins in the first place. Sidd Srinivasa, the director of Amazon Robotics AI, described the problem of stowing items as “a nightmare.... Stow fundamentally breaks all existing industrial robotic thinking.” But over the past few years, Amazon Robotics researchers have put some serious work into solving it.

First, it’s important to understand the difference between the robot-friendly workflows that we typically see with bin-picking robots, and the way that most Amazon warehouses are actually run. That is, with humans doing most of the complex manipulation.

You may already be familiar with Amazon’s drive units—the mobile robots with shelves on top (called pods) that autonomously drive themselves past humans who pick items off of the shelves to build up orders for customers. This is (obviously) the picking task, but doing the same task in reverse is called stowing, and it’s the way that items get into Amazon’s warehouse workflow in the first place. It turns out that humans who stow things on Amazon’s mobile shelves do so in what is essentially a random way, in order to use the available space as efficiently as possible. This sounds counterintuitive, but it actually makes a lot of sense.

When an Amazon warehouse gets a new shipment of stuff, let’s say Extremely Very Awesome Nuggets (EVANs), the obvious thing to do might be to call up a pod with enough empty shelves to stow all of the EVANs in at once. That way, when someone places an order for an EVAN, the pod full of EVANs shows up, and a human can pick an EVAN off one of the shelves. The problem with this method, however, is that if the pod full of EVANs gets stuck or breaks or is otherwise inaccessible, then nobody can get their EVANs, slowing the entire system down (demand for EVANs being very, very high). Amazon’s strategy is to instead distribute EVANs across multiple pods, so that some EVANs are always available.
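
To make the redundancy logic concrete, here is a minimal Python sketch of a distributed stow; it’s our illustration of the idea, not Amazon’s actual algorithm, and the pod names and two-per-pod cap are made up.

```python
import random

def distributed_stow(item, quantity, pods, per_pod=2):
    """Spread `quantity` copies of an item across distinct pods, at most
    `per_pod` copies each, so no single stuck or broken pod can make the
    item unavailable."""
    placements = []
    candidates = list(pods)
    random.shuffle(candidates)             # stowers take whatever pod shows up
    while quantity > 0 and candidates:
        pod = candidates.pop()
        n = min(per_pod, quantity)
        placements.append((pod, item, n))  # tell the inventory system
        quantity -= n
    return placements

# 10 EVANs spread over 8 pods, at most 2 per pod:
print(distributed_stow("EVAN", 10, [f"pod-{i}" for i in range(8)]))
```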

The process for this distributed stow is random in the sense that a human stower might get a couple of EVANs to put into whatever pod shows up next. Each pod has an array of shelves, some of which are empty. It’s up to the human to decide where the EVANs best fit, and Amazon doesn’t really care as long as the human tells the inventory system where the EVANs ended up. Here’s what this process looks like:

Two things are immediately obvious from this video: First, the way that Amazon products are stowed at automated warehouses like this one is entirely incompatible with most current bin-picking robots. Second, it’s easy to see why this kind of stowing is “a nightmare” for robots. As if the need to carefully manipulate a jumble of objects to make room in a bin wasn’t a hard enough problem, you also have to deal with those elastic bins that get in the way of both manipulation and visualization, and you have to be able to grasp and manipulate the item that you’re trying to stow. Oof.

“For me, it’s hard, but it’s not too hard—it’s on the cutting edge of what’s feasible for robots,” says Aaron Parness, senior manager of applied science at Amazon Robotics & AI. “It’s crazy fun to work on.” Parness came to Amazon from Stanford and JPL, where he worked on robots like StickyBot and LEMUR and was responsible for this bonkers microspine gripper designed to grasp asteroids in microgravity. “Having robots that can interact in high-clutter and high-contact environments is superexciting because I think it unlocks a wave of applications,” continues Parness. “This is exactly why I came to Amazon; to work on that kind of a problem and try to scale it.”

What makes stowing at Amazon both cutting edge and nightmarish for robots is that it’s a task that has been highly optimized for humans. Amazon has invested heavily in human optimization, and (at least for now) the company is very reliant on humans. This means that any robotic solution that would have a significant impact on the human-centered workflow is probably not going to get very far. So Parness, along with Senior Applied Scientist Parker Owan, had to develop hardware and software that could solve the problem as is. Here’s what they came up with:

On the hardware side, there’s a hook system that lifts the elastic bands out of the way to provide access to each bin. But that’s the easy part; the hard part is embodied in the end-of-arm tool (EOAT), which consists of two long paddles that can gently squeeze an item to pick it up, with conveyor belts on their inner surfaces to shoot the item into the bin. An extendable thin metal spatula of sorts can go into the bin before the paddles and shift items around to make room when necessary.

To use all of this hardware requires some very complex software, since the system needs to be able to perceive the items in the bin (which may be occluding each other and also behind the elastic bands), estimate the characteristics of each item, consider ways in which those items could be safely shoved around to maximize available bin space based on the object to be stowed, and then execute the right motions to make all of that happen. By identifying and then chaining together a series of motion primitives, the Amazon researchers have been able to achieve stowing success rates (in the lab) of better than 90 percent.
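
A rough sketch of what “chaining motion primitives” can look like as a planning loop appears below; the primitive names and the fixed space-gain numbers are hypothetical stand-ins for the learned predictions the system would actually use.

```python
# Hypothetical primitives, each paired with a placeholder estimate of how
# much bin volume (cm^3) it would free; a real system would predict these
# from perception, per bin and per item.
PRIMITIVES = {
    "sweep_with_spatula": lambda state: 300.0,
    "flip_item_upright":  lambda state: 150.0,
    "squash_soft_item":   lambda state: 100.0,
}

def plan_stow(free_space, item_volume, max_steps=3):
    """Greedily chain primitives until the incoming item fits."""
    plan, state = [], {"free_space": free_space}
    for _ in range(max_steps):
        if state["free_space"] >= item_volume:
            return plan + ["insert_item"]
        # pick the primitive predicted to free the most space
        name, predict = max(PRIMITIVES.items(), key=lambda kv: kv[1](state))
        plan.append(name)
        state["free_space"] += predict(state)
    return None  # couldn't make room: hand the item to a human

# A 450 cm^3 item facing a bin with 200 cm^3 free:
print(plan_stow(free_space=200.0, item_volume=450.0))
```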

After years of work, the system is functioning well enough that prototypes are stowing actual inventory items at an Amazon fulfillment center in Washington state. The goal is to be able to stow 85 percent of the products that Amazon stocks (millions of items), but since the system can be installed within the same workflow that humans use, there’s no need to hit 100 percent. If the system can’t handle a given item, it just passes that item along to a human worker. This means that the system doesn’t even need to reach 85 percent before it can be useful, since if it can do even a small percentage of items, it can offload some of that basic stuff from humans. And if you’re a human who has to do a lot of basic stuff over and over, that seems like it might be nice. Thanks, robots!

But of course there’s a lot more going on here on the robotics side, and we spoke with Aaron Parness to learn more.

IEEE Spectrum: Stowing in an Amazon warehouse is a highly human-optimized task. Does this make things a lot more challenging for robots?

Aaron Parness, senior manager of applied science at Amazon Robotics & AI. Photo: Amazon

Aaron Parness: In a home, in a hospital, on the space station, in these kinds of settings, you have these human-built environments. I don’t really think that’s a driver for us. The hard problem we’re trying to solve involves contact and also the reasoning. And that doesn’t change too much with the environment, I don’t think. Most of my team is not focused on questions of that nature, questions like, “If we could only make the bins this height,” or, “If we could only change this or that other small thing.” I don’t mean to say that Amazon won’t ever change processes or alter systems. Obviously, we are doing that all the time. It’s easier to do that in new buildings than in old buildings, but Amazon is still totally doing that. We just try to think about our product fitting into those existing environments.

I think there’s a general statement that you can make that when you take robots from the lab and put them into the real world, you’re always constrained by the environment that you put them into. With the stowing problem, that’s definitely true. These fabric pods have horizontal surfaces, so orientation with respect to gravity can be a factor. The elastic bands that block our view are a challenge. The stiffness of the environment also matters, because we’re doing this force-in-the-loop control, and the incredible diversity of items that Amazon sells means that some of the items are compressible. So those factors are part of our environment as well. So in our case, dealing with this unstructured contact, this unexpected contact, that’s the hardest part of the problem.

“Handling contact is a new thing for industrial robots, especially unexpected, unpredictable contact. It’s both a hard problem, and a worthy one.”
—Aaron Parness

What information do you have about what’s in each bin, and how much does that help you to stow items?

Parness: We have the inventory of what’s in the bins, and a bunch of information about each of those items. We also know all the information about the items in our buffer [to be stowed]. And we have a 3D representation from our perception system. But there’s also a quality-control thing where the inventory system says there’s four items in the bin, but in reality, there’s only three items in the bin, because there’s been a defect somewhere. At Amazon, because we’re talking about millions of items per day, that’s a regular occurrence for us.

The configuration of the items in each bin is one of the really challenging things. If you took the same five items (a soccer ball, a teddy bear, a T-shirt, a pair of jeans, and an SD card) and put them in a bin 100 times, they’re going to look different in each of those 100 cases. You also get things that can look very similar. If you have a red pair of jeans or a red T-shirt and red sweatpants, your perception system can’t necessarily tell which one is which. And we do have to think about potentially damaging items—our algorithm decides which items should go to which bins and what confidence we have that we would be successful in making that stow, along with what risk there is that we would damage an item if we flip things up or squish things.

“Contact and clutter are the two things that keep me up at night.”
—Aaron Parness

How do you make sure that you don’t damage anything when you may be operating with incomplete information about what’s in the bin?

Parness: There are two things to highlight there. One is the approach and how we make our decisions about what actions to take. And then the second is how to make sure you don’t damage items as you do those kinds of actions, like squishing as far as you can.

With the first thing, we use a decision tree. We use that item information to claim all the easy stuff—if the bin is empty, put the biggest thing you can in the bin. If there’s only one item in the bin, and you know that item is a book, you can make an assumption it’s incompressible, and you can manipulate it accordingly. As you work down that decision tree, you get to certain branches and leaves that are too complicated to have a set of heuristics, and that’s where we use machine learning to predict things like, if I sweep this point cloud, how much space am I likely to make in the bin?
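
A toy version of that decision tree might look like the sketch below; the heuristics and the `predict_swept_space` stand-in for the learned model are our assumptions, not Amazon’s actual rules.

```python
def predict_swept_space(point_cloud):
    # Placeholder for the learned model that estimates how much room a
    # sweep of this point cloud would free up in the bin.
    return 0.3

def choose_stow_action(bin_items, item_volume, point_cloud):
    """Work down the tree: cheap heuristics first, ML at the messy leaves."""
    if not bin_items:                                   # empty bin: easy
        return "insert_directly"
    if len(bin_items) == 1 and bin_items[0] == "book":  # ~incompressible
        return "push_book_aside_then_insert"
    if predict_swept_space(point_cloud) >= item_volume: # learned leaf
        return "sweep_then_insert"
    return "defer_to_human"                             # too risky

print(choose_stow_action(["book"], 0.2, point_cloud=None))
```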

And this is where the contact-based manipulation comes in because the other thing is, in a warehouse, you need to have speed. You can’t stow one item per hour and be efficient. This is where putting force and torque in the control loop makes a difference—we need to have a high rate, a couple of hundred hertz loop that’s closing around that sensor and a bunch of special sauce in our admittance controller and our motion-planning stack to make sure we can do those motions without damaging items.
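
As a very rough sketch of what putting force in the control loop means here, the one-dimensional admittance step below lets a commanded velocity yield as sensed contact force builds; the gains and limits are illustrative numbers, not Amazon’s.

```python
def admittance_step(v_desired, f_measured, compliance=0.002, f_limit=15.0):
    """One tick of a 1-D admittance law: the commanded velocity (m/s)
    yields in proportion to the sensed force (N), and stops entirely past
    a safety threshold so items don't get crushed."""
    if abs(f_measured) >= f_limit:
        return 0.0
    return v_desired - compliance * f_measured

# A few ticks of a ~200 Hz loop as resistance builds:
for f in [0.0, 2.0, 6.0, 12.0, 16.0]:
    print(f"force {f:4.1f} N -> velocity {admittance_step(0.05, f):.4f} m/s")
```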

An overhead view of Amazon’s new stowing robot. Photo: Amazon

Since you’re operating in these human-optimized environments, how closely does your robotic approach mimic what a human would be doing?

Parness: We started by doing it ourselves. We also did it ourselves while holding a robotic end effector. And this matters a lot, because you don’t realize that you’re doing all these kinds of fine-control motions, and you have so many sensors on your hand, right? This is a thing. But when we did this task ourselves, when we observed experts doing it, this is where the idea of motion primitives kind of emerged, which made the problem a little more achievable.

What made you use the motion primitives approach as opposed to a more generalized learning technique?

Parness: I’ll give you an honest answer—I was never tempted by reinforcement learning. But there were some in my team who were tempted by it, and we had a debate, since I really believe in an iterative design philosophy and in the value of prototyping. We did a bunch of early-stage prototypes, trying to make a data-driven decision, and end-to-end reinforcement learning seemed intractable. But the motion-primitive strategy actually turned me from a bit of a skeptic about whether robots could even do this job into thinking, “Oh, yeah, this is the thing. We’ve got to go for this.” That was a turning point: getting those motion primitives and recognizing that they were a way to structure the problem to make it solvable, because they get you most of the way there—you can handle everything but the long tail. And with the tail, maybe sometimes a human is looking in and saying, “Well, if I play Tetris and I do this incredibly complicated and slow thing, I can make the perfect unicorn-shaped hole to put this unicorn-shaped object into.” The robot won’t do that, and doesn’t need to do that. It can handle the bulk.

You really didn’t think that the problem was solvable at all, originally?

Parness: Yes. Parker Owan, who’s one of the lead scientists on my team, went off into the corner of the lab and started to set up some experiments. And I would look over there while working on other stuff, and be like, “Oh, that young guy, how brave. This problem will show him.” And then I started to get interested. Ultimately, there were two things, like I said: it was discovering that you could use these motion primitives to accomplish the bulk of the in-bin manipulation, because really that’s the hardest part of the problem. The second thing was on the gripper, on the end-of-arm tool.

“If the robot is doing well, I’m like, ‘This is achievable!’ And when we have some new problems, and then all of a sudden I’m like, ‘This is the hardest thing in the world!’ ”
—Aaron Parness

The end effector looks pretty specialized—how did you develop that?

Parness: Looking around the industry, there’s a lot of suction cups, a lot of pinch grasps. And when you have those kinds of grippers, all of a sudden you’re trying to use the item you’re gripping to manipulate the other items that are in the bin, right? When we decided to go with the paddle approach and encapsulate the item, it both gave us six-degree-of-freedom control over the item, to make sure it wasn’t going into spaces we didn’t want it to, and gave us a known engineering surface on the gripper. Maybe I can only predict in a general way the stiffness or the contact properties of the items that are in the bin, but I know I’m touching them with the back of my paddle, which is aluminum.

But then we realized that the end effector actually takes up a lot of space in the bin, and the whole point is that we’re trying to fill these bins up so that we can have a lot of stuff for sale on Amazon.com. So we say, okay, well, we’re going to stay outside the bin, but we’ll have this spatula that will be our in-bin manipulator. It’s a super simple tool that you can use for pushing on stuff, flipping stuff, squashing stuff.... You’re definitely not doing 27-degree-of-freedom human-hand stuff, but because we have these motion primitives, the hardware complemented that.

However, the paddles presented a new problem, because when using them we basically had to drop the item and then try to push it in at the same time. It was this kind of dynamic—let go and shove—which wasn’t great. That’s what led to putting the conveyor belts onto the paddles, which took us to the moon in terms of being successful. I’m the biggest believer there is now! Parker Owan has to kind of slow me down sometimes because I’m so excited about it.

It must have been tempting to keep iterating on the end effector.

Parness: Yeah, it is tempting, especially when you have scientists and engineers on your team. They want everything. It’s always like, “I can make it better. I can make it better. I can make it better.” I have that in me too, for sure. There’s another phrase I really love which is just, “so simple, it might work.” Are we inventing and complexifying, or are we making an elegant solution? Are we making this easier? Because the other thing that’s different about the lab and an actual fulfillment center is that we’ve got to work with our operators. We need it to be serviceable. We need it to be accessible and easy to use. You can’t have four Ph.D.s around each of the robots constantly kind of tinkering and optimizing it. We really try to balance that, but is there a temptation? Yeah. I want to put every sensor known to man on the robot! That’s a temptation, but I know better.

To what extent is picking just stowing in reverse? Could you run your system backwards and have picking solved as well?

Parness: That’s a good question, because obviously I think about that too, but picking is a little harder. With stowing, it’s more about how you make space in a bin, and then how you fit an item into space. For picking, you need to identify the item—when that bin shows up, the machine learning, the computer vision, that system has to be able to find the right item in clutter. But once we can handle contact and we can handle clutter, pick is for sure an application that opens up.

When I think really long term, if Amazon were to deploy a bunch of these stowing robots, all of a sudden you can start to track items, and you can remember that this robot stowed this item in this place in this bin. You can start to build up container maps. Right now, though, the system doesn’t remember.

Regarding picking in particular, a nice thing Amazon has done in the last couple of years is start to engage with the academic community more. My team sponsors research at MIT and at the University of Washington. And the team at University of Washington is actually looking at picking. Stow and pick are both really hard and really appealing problems, and in time, I hope I get to solve both!

Although there are a large number of publicly available datasets of 3D data, they generally suffer from drawbacks such as a small number of data samples and class imbalance. Data augmentation is a set of techniques that aim to increase the size of datasets and remedy such defects, and hence to overcome the problem of overfitting when training a classifier. In this paper, we propose a method to create new synthesized data by converting complete meshes into occluded 3D point clouds similar to those in real-world datasets. The proposed method involves two main steps. The first is hidden surface removal (HSR), in which the parts of object surfaces occluded from the viewpoint of a camera are deleted; a low-complexity method based on occupancy grids is proposed to implement HSR. The second step is random sampling of the detected visible surfaces. The two-step method is applied to a subset of the ModelNet40 dataset to create a new dataset, which is then used to train and test three different deep-learning classifiers (VoxNet, PointNet, and 3DmFV). We studied classifier performance as a function of the camera elevation angle. We also conducted another experiment to show how the newly generated data samples can improve classification performance when they are combined with the original data during training. Simulation results show that the proposed method enables us to create a large number of new data samples that require little storage. Results also show that the performance of classifiers is highly dependent on the elevation angle of the camera, and that there may exist some angles where performance degrades significantly. Furthermore, data augmentation using our created data improves the performance of classifiers not only when they are tested on the original data, but also on real data.
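
The two-step method can be made concrete with a small sketch. The version below makes simplifying assumptions the paper does not (a point cloud in place of a mesh, a camera looking straight down the z-axis), but it shows the same shape: occupancy-grid hidden surface removal followed by random sampling.

```python
import numpy as np

def occlude_and_sample(points, cell=0.05, n_samples=256):
    """Step 1 (crude HSR): grid the x-y plane and keep only the point
    nearest an overhead camera (max z) in each cell. Step 2: randomly
    sample the surviving visible surface."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    visible = {}
    for i, key in enumerate(map(tuple, keys)):
        if key not in visible or points[i, 2] > points[visible[key], 2]:
            visible[key] = i
    surface = points[list(visible.values())]
    idx = np.random.choice(len(surface),
                           size=min(n_samples, len(surface)), replace=False)
    return surface[idx]

cloud = np.random.rand(5000, 3)   # stand-in for points on a complete mesh
print(occlude_and_sample(cloud).shape)
```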

In robotic-assisted partial nephrectomy, surgeons remove a part of a kidney often due to the presence of a mass. A drop-in ultrasound probe paired to a surgical robot is deployed to execute multiple swipes over the kidney surface to localise the mass and define the margins of resection. This sub-task is challenging and must be performed by a highly-skilled surgeon. Automating this sub-task may reduce cognitive load for the surgeon and improve patient outcomes. The eventual goal of this work is to autonomously move the ultrasound probe on the surface of the kidney taking advantage of the use of the Pneumatically Attachable Flexible (PAF) rail system, a soft robotic device used for organ scanning and repositioning. First, we integrate a shape-sensing optical fibre into the PAF rail system to evaluate the curvature of target organs in robotic-assisted laparoscopic surgery. Then, we investigate the impact of the PAF rail’s material stiffness on the curvature sensing accuracy, considering that soft targets are present in the surgical field. We found overall curvature sensing accuracy to be between 1.44% and 7.27% over the range of curvatures present in adult kidneys. Finally, we use shape sensing to plan the trajectory of the da Vinci surgical robot paired with a drop-in ultrasound probe and autonomously generate an Ultrasound scan of a kidney phantom.
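
For intuition about what curvature sensing has to report, the snippet below estimates curvature (1/radius) from three sampled points via the circumradius formula; this is a generic geometric sketch, not the authors’ fibre-optic pipeline.

```python
import math

def curvature_from_points(p1, p2, p3):
    """Curvature of the circle through three 2-D points, using
    R = abc / (4 * area), so curvature = 4 * area / (abc)."""
    a, b, c = math.dist(p2, p3), math.dist(p1, p3), math.dist(p1, p2)
    s = (a + b + c) / 2
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    if area == 0.0:
        return 0.0                 # collinear samples: a straight segment
    return (4.0 * area) / (a * b * c)

# Three points on a circle of radius 2 -> curvature 0.5:
print(curvature_from_points((2, 0), (0, 2), (-2, 0)))
```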

For effective human-robot collaboration, it is crucial for robots to understand requests from users while perceiving three-dimensional space, and to ask reasonable follow-up questions when there are ambiguities. In comprehending users’ object descriptions, existing studies have focused on limited object categories that can be detected or localized with existing object detection and localization modules, and they have mostly comprehended object descriptions using flat RGB images without considering the depth dimension. In the wild, however, it is impossible to limit the object categories that can be encountered during the interaction, and 3D perception that includes depth information is fundamental to successful task completion. To understand described objects and resolve ambiguities in the wild, we suggest, for the first time, a method leveraging explainability. Our method focuses on the active areas of an RGB scene to find the described objects without placing prior constraints on object categories or natural language instructions. We further improve our method to identify described objects using the depth dimension. We evaluate our method on varied real-world images and observe that the regions suggested by our method can help resolve ambiguities. When we compare our method with a state-of-the-art baseline, we show that ours performs better in scenes with ambiguous objects that cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance in scenes where depth data is critical to disambiguate objects, as well as across our evaluation dataset, which contains objects that can be specified with and without the depth dimension.

This paper proposes an adaptive robust Jacobian-based controller for task-space position-tracking control of robotic manipulators. The structure of the controller is built on a traditional proportional-integral-derivative (PID) framework. An additional neural control signal is then synthesized under a non-linear learning law to compensate for internal and external disturbances in the robot dynamics. To provide strong robustness, a gain-learning feature is integrated to automatically adjust the PID gains for various working conditions. Stability of the closed-loop system is guaranteed by Lyapunov constraints. The effectiveness of the proposed controller is carefully verified by intensive simulation results.
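
A skeletal version of that controller structure is sketched below; the gain-adaptation rule is a toy placeholder, not the paper’s non-linear learning law, and the compensation term stands in for the neural signal it describes.

```python
class AdaptivePIDWithCompensation:
    """PID core + additive (e.g., neural) compensation signal + a crude
    online gain-adaptation rule, standing in for the paper's learning laws."""

    def __init__(self, kp=2.0, ki=0.1, kd=0.5, adapt_rate=1e-3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.adapt_rate = adapt_rate
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt, compensation=0.0):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        self.kp += self.adapt_rate * err * err   # grow P gain while error persists
        return (self.kp * err + self.ki * self.integral
                + self.kd * deriv + compensation)

ctrl = AdaptivePIDWithCompensation()
print(ctrl.step(err=0.1, dt=0.01))
```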

Often in swarm robotics, an assumption is made that all robots in the swarm behave the same and will have a similar (if not the same) error model. However, in reality, this is not the case, and this lack of uniformity in the error model, and other operations, can lead to various emergent behaviors. This paper considers the impact of the error model and compares robots in a swarm that operate using the same error model (uniform error) against each robot in the swarm having a different error model (thus introducing error diversity). Experiments are presented in the context of a foraging task. Simulation and physical experimental results show the importance of the error model and diversity in achieving the expected swarm behavior.
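
The uniform-versus-diverse comparison is easy to mimic in a few lines; the sketch below uses odometry drift as a stand-in for whatever error source a real foraging swarm would have, with made-up noise levels.

```python
import random

def mean_drift(n_robots=20, steps=100, diverse=False, seed=0):
    """Uniform: every robot shares one noise level. Diverse: each robot
    draws its own, as real hardware variation would give."""
    rng = random.Random(seed)
    sigmas = [rng.uniform(0.01, 0.2) if diverse else 0.1
              for _ in range(n_robots)]
    finals = [sum(rng.gauss(0.0, s) for _ in range(steps)) for s in sigmas]
    return sum(abs(x) for x in finals) / n_robots

print("uniform error:", mean_drift(diverse=False))
print("diverse error:", mean_drift(diverse=True))
```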

Stroke is a major global issue, affecting millions every year. When a stroke occurs, survivors are often left with physical disabilities or difficulties, frequently marked by abnormal gait. Post-stroke gait normally presents as one of or a combination of unilaterally shortened step length, decreased dorsiflexion during swing phase, and decreased walking speed. These factors lead to an increased chance of falling and an overall decrease in quality of life due to a reduced ability to locomote quickly and safely under one’s own power. Many current rehabilitation techniques fail to show lasting results that suggest the potential for producing permanent changes. As technology has advanced, robot-assisted rehabilitation appears to have a distinct advantage, as the precision and repeatability of such an intervention are not matched by conventional human-administered therapy. The possible role in gait rehabilitation of the Variable Stiffness Treadmill (VST), a unique, robotic treadmill, is further investigated in this paper. The VST is a split-belt treadmill that can reduce the vertical stiffness of one of the belts, while the other belt remains rigid. In this work, we show that the repeated unilateral stiffness perturbations created by this device elicit an aftereffect of increased step length that is seen for over 575 gait cycles with healthy subjects after a single 10-min intervention. These long aftereffects are currently unmatched in the literature according to our knowledge. This step length increase is accompanied by kinematics and muscle activity aftereffects that help explain functional changes and have their own independent value when considering the characteristics of post-stroke gait. These results suggest that repeated unilateral stiffness perturbations could possibly be a useful form of post-stroke gait rehabilitation.

In the current industrial context, the importance of assessing and improving workers' health conditions is widely recognised. Both physical and psycho-social factors contribute to jeopardising the underlying comfort and well-being, boosting the occurrence of diseases and injuries, and affecting their quality of life. Human-robot interaction and collaboration frameworks stand out among the possible solutions to prevent and mitigate workplace risk factors. The increasingly advanced control strategies and planning schemes featured by collaborative robots have the potential to foster fruitful and efficient coordination during the execution of hybrid tasks, by meeting their human counterparts' needs and limits. To this end, a thorough and comprehensive evaluation of an individual's ergonomics, i.e. direct effect of workload on the human psycho-physical status, must be taken into account. In this review article, we provide an overview of the existing ergonomics assessment tools as well as the available monitoring technologies to drive and adapt a collaborative robot's behaviour. Preliminary attempts of ergonomic human-robot collaboration frameworks are presented next, discussing state-of-the-art limitations and challenges. Future trends and promising themes are finally highlighted, aiming to promote safety, health, and equality in worldwide workplaces.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

Following the great success of the miniature humanoid robot DARwIn-OP we have developed, RoMeLa is proud to introduce the next-generation humanoid robot for research and education, BRUCE (Bipedal Robot Unit with Compliance Enhanced). BRUCE is an open-platform humanoid robot that utilizes the BEAR proprioceptive actuators, enabling it to have stunning dynamic performance capabilities never before seen in this class of robots. Originally developed at RoMeLa in a joint effort with Westwood Robotics, BRUCE will be made open source to the robotics community and will also be made available via Westwood Robotics.

BRUCE has a total of 16 DoF, is 70 cm in height, and weighs only 4.8 kg. With a 3,000-mAh lithium battery it can last for about 20 minutes of continuous dynamic motion. Besides its excellent dynamic performance, BRUCE is very robust and user-friendly, with great compatibility and expandability. BRUCE makes humanoid robotics research efficient, safe, and fun.

[ Westwood Robotics ]

This video shows evoBOT, a dynamically stable and autonomous transport robot.

[ Fraunhofer IML ]

ASL Team wishes you all the best for 2023 :-)

[ ASL ]

Holidays are a magical time. But if you feel like our robot dog Marvin, the magic needs to catch up and find you. Keep your eyes and heart open for possibilities – jolliness is closer than you realize!

[ Accenture Baltics ]

In this Christmas clip, the robots of a swarm transport Christmas decorations and they cooperate to carry the decorated tree. Each robot has enough strength to carry the decorations itself, however, no robot can carry the tree on its own. The solution: they carry the tree by working together!

[ Demiurge ]

Thanks David!

Our VoloDrone team clearly got the holiday feels in snowy Germany while sling load testing cargo – definitely a new way of disposing of a Christmas tree before the New Year.

[ Volocopter ]

What if we race three commercially available quadruped robots for a bit of fun...? Out of the box configuration, ‘full sticks forward’ on the remotes on flat ground. Hope you enjoy the results ;-)

[ CSIRO Data61 ]

Happy Holidays From Veo!

[ Veo ]

In ETH Zurich’s Soft Robotics Lab, a white robot hand reaches for a beer can, lifts it up and moves it to a glass at the other end of the table. There, the hand carefully tilts the can to the right and pours the sparkling, gold-coloured liquid into the glass without spilling it. Cheers!

[ SRL ]

Bingo (aka Santa) found herself a new sleigh! All of us at CSIRO’s Data61 Robotics and Autonomous Systems Group wish everyone a Merry Christmas and Happy Holidays!

[ CSIRO Data61 ]

From 2020, a horse-inspired walking robot.

[ Ishikawa Minami Lab ]

Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions.

[ MRS CTU ]

GITAI has a fancy new office in Los Angeles that they’re filling with space robots.

[ GITAI ]

This Maryland Robotics Center seminar is from CMU’s Vickie Webster-Wood: “It’s Alive! Bioinspired and biohybrid approaches towards life-like and living robots.”

In this talk, I will share efforts from my group in our two primary research thrusts: Bioinspired robotics, and biohybrid robotics. By using neuromechanical models and bioinspired robots as tools for basic research we are developing new models of how animals achieve multifunctional, adaptable behaviors. Building on our understanding of animal systems and living tissues, our research in biohybrid robotics is enabling new approaches toward the creation of autonomous biodegradable living robots. Such robotic systems have future applications in medicine, search and rescue, and environmental monitoring of sensitive environments (e.g., coral reefs).

[ UMD ]



Even simple robotic grippers can perform complex tasks—so long as they’re smart about using their environment as a handy aide. This, at least, is the finding of new research from Carnegie Mellon University’s Robotics Institute.

In robotics, simple grippers are typically assigned straightforward tasks such as picking up objects and placing them somewhere. However, by making use of their surroundings, such as pushing an item against a table or wall, simple grippers can perform skillful maneuvers usually thought achievable only by more complex, fragile, and expensive multi-fingered artificial hands.

However, previous research on this strategy, known as “extrinsic dexterity,” often made assumptions about the way in which grippers would grasp items. This in turn required specific gripper designs or robot motions.

“Simple grippers are underrated.”
—Wenxuan Zhou, Carnegie Mellon University

In the new study, scientists used AI to overcome these limitations to apply extrinsic dexterity to more general settings and successfully grasp items of various sizes, weights, shapes and surfaces.

“This research may open up new possibilities in manipulation with a simple gripper,” says study lead author Wenxuan Zhou at Carnegie Mellon University. “Potential applications include warehouse robots or housekeeping robots that help people to organize their home.”

The researchers employed reinforcement learning to train a neural network. They had the AI system attempt random actions to grasp an object, rewarding those series of actions that led to success. The system ultimately adopted the most successful patterns of behavior. It learned, in so many words. After first training their system in a physics simulator, they next tested it on a simple robot with a pincer-like grip.
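
That loop is the standard reinforcement-learning recipe, and the toy sketch below shows its shape with a tabular value table and made-up action names; the actual study used deep RL in a physics simulator, so treat this only as an illustration of “reward the action sequences that end in a grasp.”

```python
import random
from collections import defaultdict

ACTIONS = ["push_to_wall", "lever_with_top_finger", "close_gripper"]
TARGET = list(ACTIONS)        # toy stand-in for "this sequence grasps it"
q = defaultdict(float)        # value estimate for each action-sequence prefix

def run_episode(epsilon=0.2):
    history = []
    for _ in range(len(TARGET)):
        if random.random() < epsilon:                 # explore random actions
            action = random.choice(ACTIONS)
        else:                                         # exploit what has worked
            action = max(ACTIONS, key=lambda a: q[tuple(history) + (a,)])
        history.append(action)
    reward = 1.0 if history == TARGET else 0.0        # successful grasp -> reward
    for t in range(len(history)):                     # credit every prefix
        key = tuple(history[: t + 1])
        q[key] += 0.1 * (reward - q[key])
    return reward

successes = sum(run_episode() for _ in range(2000))
print(f"success rate across training: {successes / 2000:.2f}")
```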

The scientists had the robot attempt to grab items confined within an open bin that were initially oriented in ways that meant the robot could not pick them up. For example, the robot might be given an object that was too wide for its gripper to grasp. The AI needed to figure out a way to push the item against the wall of the bin so the robot could then grab it from its side.

“Initially, we thought the robot might try to do something like scooping underneath the object, as humans do,” Zhou says. “However, the algorithm gave us an unexpected answer.” After nudging an item against the wall, the robot pushed its top finger against the side of the object to lever it up, “and then let the object drop on the bottom finger to grasp it.”

In experiments, Zhou and her colleagues tested their system on items such as cardboard boxes, plastic bottles, a toy purse, and a container of Cool Whip. These varied in weight, shape, and how slippery they were. They found their simple gripper grasped these items with a 78 percent success rate.

“Simple grippers are underrated,” Zhou says. “Robots should exploit extrinsic dexterity for more skillful manipulation.”

In the future, the group hopes to generalize its findings to “a wider range of objects and scenarios,” Zhou says. “We are also interested in exploring more complex tasks with a simple gripper with extrinsic dexterity.”

The scientists detailed their findings 18 December at the Conference on Robot Learning in Auckland, New Zealand.



2022 was a huge year for robotics. Yes, I might say this every year, and yes, every year I might also say that each year is more significant than any other. But seriously: This year trumped them all. After a tough pandemic (which, let’s be clear, is still not over), conferences and events have started to come back, research has resumed, and robots have continued to make their way into the world. It really has been a great year.

And on a personal note, we’d like to thank you, all of you, for reading (and hopefully enjoying) our work. We’d be remiss if we didn’t also thank those of you who provide awesome stuff for us to write about. So, please enjoy this quick look back at some of our most popular and most impactful stories of 2022. Here’s wishing for more and better in 2023!

The Bionic-Hand Arms Race

Robotic technology can be a powerful force for good, but using robots to make the world a better place has to be done respectfully. This is especially true when what you’re working on has a direct physical impact on a user, as is the case with bionic limbs. Britt Young has a more personal perspective on this than most, and in this article, she wove together history, technology, and her own experience to explore bionic limb design. With over 100,000 views, this was our most popular robotics story of 2022.

For Better or Worse, Tesla Bot Is Exactly What We Expected

After Elon Musk announced Tesla’s development of a new humanoid robot, we were left wondering whether the car company would be able to somehow deliver something magical. We found out this year that the answer is a resounding “Not really.” There was nothing wrong with Tesla Bot, but it was immediately obvious that Tesla had not managed to do anything groundbreaking with it, either. While there is certainly potential for the future, at this point it’s just another humanoid robot with a long and difficult development path ahead of it.

Autonomous Drones Challenge Human Champions in First “Fair” Race

Usually, the kinds of things that humans are really good at and the kinds of things that robots are really good at don’t overlap all that much. So, it’s always impressive when robots get anywhere close to human performance in activities that play to our strengths. This year, autonomous drones from the University of Zurich managed for the first time to defeat the best human pilots in the world in a “fair” drone race, where both humans and robots relied entirely on their onboard brains and visual perception.

How Robots Can Help Us Act and Feel Younger

Gill Pratt has a unique perspective on the robotics world, going from academia to DARPA program manager to the current CEO of Toyota Research. His leadership position at TRI means that he can visualize how to make robots that best help humanity, and then actually work towards putting that vision into practice—commercially and at scale. His current focus is assistive robots that help us live fuller, happier lives as we age.

DARPA’s RACER Program Sends High-Speed Autonomous Vehicles Off-Road

Getting autonomous vehicles to drive themselves is not easy, but the fact that they work even as well as they do is arguably due to the influence of DARPA’s 2005 Grand Challenge. That’s why it’s so exciting to hear about DARPA’s newest autonomous vehicle challenge, aimed at putting fully autonomous vehicles out into the wilderness to fend for themselves completely off-road.

Boston Dynamics AI Institute Targets Basic Research

Boston Dynamics is arguably best known for developing amazing robots with questionable practicality. As the company seeks to change that by exploring commercial applications for its existing platforms, founder Marc Raibert has decided to keep focusing on basic research by starting a completely new institute with the backing of Hyundai.

Alphabet’s Intrinsic Acquires Majority of Open Robotics

The Open Source Robotics Foundation (OSRF) spun out of Willow Garage 10 years ago. This year’s acquisition of most of the Open Robotics team by Alphabet’s Intrinsic represents a milestone for the Robot Operating System (ROS). The fact that it’s even possible for Open Robotics to move on like this is a testament to just how robust the ROS community is. The Open Robotics folks will still be contributing to ROS, with a much smaller OSRF supporting the community directly. But it’s hard to say goodbye to what OSRF used to be.

The 11 Commandments of Hugging Robots

Hugging robots is super important to me, and it should be important to you, too! And to everyone, everywhere! While, personally, I’m perfectly happy to hug just about any robot, very few of them can hug back—at least in part because the act of hugging is a complex human interaction task that requires either experience being a human or a lot of research for a robot. Much of that research has now been done, giving robots some data-driven guidelines about how to give really good hugs.


Labrador Addresses Critical Need With Deceptively Simple Home Robot

It’s not often that we see a new autonomous home robot with a compelling use case. But this year, Labrador Systems introduced Retriever, a semi-autonomous mobile table that can transport objects for folks with mobility challenges. If Retriever doesn’t sound like a big deal, that’s probably because you have no use for a robot like this; but it has the potential to make a huge impact on people who need it.

Even as It Retires, ASIMO Still Manages to Impress

ASIMO has been setting the standard for humanoid robots for literally a decade. Honda’s tiny humanoid was walking, running, and jumping back in 2011 (!)—and that was just the most recent version. ASIMO has been under development since the mid-1980s, which is some seriously ancient history as far as humanoid robots go. Honda decided to retire the little white robot this year, but ASIMO’s legacy lives on in Honda’s humanoid robot program. We’ll miss you, buddy.




Video Friday is your weekly selection of awesome robotics videos (special holiday edition!) collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

We hope you have an uplifting holiday season! Spot was teleoperated by professional operators; don’t try this at home.

[ Boston Dynamics ]

This year, our robot Husky was very busy working for the European Space Agency (ESA). But will he have to spend Christmas alone, apart from his robot friends at the FZI – alone on the moon? His friends want to change that! So, they train very hard to reunite with Husky! Will they succeed?

[ FZI ]

Thanks, Arne!

We heard Santa is starting to automate at the North Pole and now loads the sledge with robots. Enjoy our little Christmas movie!

[ Leverage Robotics ]

Thanks, Roman!

A self-healing soft robot finger developed by VUB-imec BruBotics and FYSC sends “MERRY XMAS” to the world in Morse code.

[ BruBotics ]

Thanks, Bram!

After the research team made some gingerbread houses, we wanted to see how Nadia would do walking over them. Happy Holidays everyone!

[ IHMC Robotics ]

In this festive robotic Christmas sketch, a group of highly advanced robots comes together to celebrate the holiday season. The Berliner Hochschule für Technik wishes you a merry Christmas and a happy new year!

[ BHT ]

Thanks, Hannes!

Our GoFa cobot had a fantastic year and is ready for new challenges in the new year, but right now, it’s time for some celebrations with some delicious cobot-made cookies.

[ ABB ]

Helping with the office tree, from Sanctuary AI.

Flavor text from the video description: “Decorated Christmas trees originated during the 16th-century in Germany. Protestant reformer Martin Luther is known for being among the first major historical figures to add candles to an evergreen tree. It is unclear whether this was, even then, considered to be a good idea.”

[ Sanctuary ]

Merry Christmas from qbrobotics!

[ qbrobotics ]

Christmas, delivered by robots!

[ Naver Labs ]

Bernadett dressed Ecowalker in Xmas lights. Enjoy the holidays!

[ Max Planck ]

Warmest greetings this holiday season and best wishes for a happy New Year from Kawasaki Robotics.

[ Kawasaki Robotics ]

Robotnik wishes you a Merry Christmas 2022.

[ Robotnik ]

CYBATHLON wishes you all a happy festive season and a happy new year 2023!

[ Cybathlon ]

Here’s what LiDAR-based SLAM in a snow gust looks like. Enjoy the weather out there!

[ NORLAB ]

We present advances in the development of proactive control for online adaptation to individual users in a welfare-robot guidance scenario. The proposed control approach can drive a mobile robot to autonomously navigate relevant indoor environments. All in all, this study spans a wide range of research, from robot-control technology development to validation in a relevant environment and a system prototype demonstration in an operational environment (an elderly care center). For a rough sense of what proactive user adaptation can mean, see the sketch after this entry.

[ Paper ]

Thanks, Poramate!
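
The paper linked above has the details, but as a rough illustration of what “proactive” adaptation to an individual user can mean for a guide robot, here is a minimal Python sketch: the robot matches its speed to the user’s estimated pace (feedforward) while correcting the following gap (feedback). All names and constants here are our own hypothetical choices, not taken from the paper.

DESIRED_GAP_M = 1.2   # hypothetical comfortable robot-to-user distance
GAIN = 0.8            # proportional gain on the gap error
MAX_SPEED_MPS = 1.0   # speed cap for the mobile robot


def guide_speed(gap_m: float, user_speed_mps: float) -> float:
    """Command a guide-robot speed from the current following gap and the
    user's estimated walking speed (both would come from onboard sensing)."""
    # Feedforward: keep pace with this particular user.
    # Feedback: pull the gap back toward the desired distance.
    cmd = user_speed_mps + GAIN * (DESIRED_GAP_M - gap_m)
    return max(0.0, min(cmd, MAX_SPEED_MPS))


# A slow user lagging 2 meters behind: the robot stops and waits for them.
print(guide_speed(gap_m=2.0, user_speed_mps=0.5))  # -> 0.0 (clamped)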

“Every day in a research job :)”

[ Chengxu Zhou ]

Robots like Digit are purpose-built to do tasks in environments made for humans. We aren’t trying to just mimic the look of people or make a humanoid robot. Every design and engineering decision is looked at through a function-first lens. To easily walk into warehouses and work alongside people, and to do the kinds of dynamic reaching, carrying, and walking that we do, Digit has some similar characteristics. Our co-founder and chief technology officer, Jonathan Hurst, discusses the difference between humanoid and human-centric robotics.

[ Agility Robotics ]

This year, the KUKA Innovation Award is all about medicine and health. After all, new technologies are playing an increasingly important role in healthcare and will be virtually indispensable in the future. Researchers, developers and young entrepreneurs from all over the world submitted their concepts for the “Robotics in Healthcare Challenge”. An international jury of experts evaluated the concepts and selected our five finalists.

[ Kuka ]

In the summer of 2003, two NASA rovers began their journeys to Mars at a time when the Red Planet and Earth were the nearest they had been to each other in 60,000 years. To capitalize on this alignment, the rovers had been built at breakneck speed by teams at NASA’s Jet Propulsion Laboratory. The mission came amid further pressures, from mounting international competition to increasing public scrutiny following the loss of the space shuttle Columbia and its crew of seven. NASA was in great need of a success.
“Landing on Mars” is the story of Opportunity and Spirit surviving a massive solar flare during cruise, the now well-known “six minutes of terror,” and what came close to being a mission-ending software error for the first rover once it was on the ground.

[ JPL ]



Speech-to-text engines are in high demand for many applications and are an essential enabler of human–robot interaction. Still, many languages lack labeled speech data, particularly Arabic dialects and other low-resource languages. Self-supervised pretraining combined with self-training on noisy labels has emerged as one of the most promising solutions. This article proposes an end-to-end, transformer-based model and an accompanying framework for low-resource languages. The framework incorporates customized audio-to-text processing algorithms to build a highly efficient speech-to-text system for the Jordanian Arabic dialect. It can ingest data from many sources, making it possible to derive ground truth from external sources and to speed up manual annotation. Training uses noisy student training and self-supervised learning to exploit unlabeled data in both the pretraining and post-training stages, and it incorporates multiple types of data augmentation. The proposed self-training approach outperforms a fine-tuned Wav2Vec model by 5 percent in terms of word-error-rate reduction. This work provides the research community with a Jordanian-Arabic speech data set along with an end-to-end approach for low-resource languages that exploits pretraining, post-training, and the injection of noisy labeled and augmented data with minimal human intervention. It enables new Arabic speech-to-text applications, such as question-answering and intelligent control systems, and it can give intelligent robots human-like hearing and speech perception.
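
Since the headline comparison here is a word-error-rate reduction, a quick refresher may help: WER is the word-level edit distance between a reference transcript and the recognizer’s hypothesis, normalized by the reference length. Below is a minimal Python sketch of that computation; the function name and toy transcripts are ours, for illustration only.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = word-level edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


print(wer("the robot waved hello", "the robot waved"))  # 1 deletion / 4 words = 0.25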

The fifth industrial revolution and the accompanying push toward digitalization present enterprises with significant challenges. Whatever the trend, humans will remain a central resource in future factories and will continue to perform manual tasks. Against the backdrop of societal and demographic changes and skills shortages, support technologies such as exoskeletons represent a promising way to assist workers. The increasing interconnection of human operators, devices, and the environment, especially in human-centered work processes, requires improved human-machine interaction and the further development of support systems into smart devices. To meet these requirements and establish exoskeletons as a future-proof technology, this article presents a framework for the future-oriented qualification of exoskeletons, one that highlights their potential for user-individual and context-dependent adaptivity. Within this framework, different support situations can be classified based on elementary functions. Using these support-function dependencies and characteristics, adaptive system behavior can be described for human-centered support systems such as exoskeletons. As a practical illustration, the article shows for an exemplary active exoskeleton how user-individual and context-specific support characteristics can deliver purposeful, needs-based assistance and contribute to the design of future workplaces.
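
To make the idea of classifying support situations by elementary functions slightly more concrete, here is a small, purely illustrative Python sketch. It is our construction rather than the paper’s framework, and every name and number in it is hypothetical.

from dataclasses import dataclass


@dataclass
class SupportSituation:
    elementary_function: str  # e.g. "lift", "hold", or "carry"
    load_kg: float            # estimated external load in the task
    user_capability: float    # 0..1, individual strength/fitness estimate


# Hypothetical per-function baselines: sustained holds get more support
# than dynamic lifts or carries in this toy model.
BASELINE = {"lift": 0.5, "hold": 0.7, "carry": 0.4}


def assistance_level(s: SupportSituation) -> float:
    """Return a normalized assistance level in [0, 1] for the exoskeleton."""
    base = BASELINE.get(s.elementary_function, 0.3)
    # Scale with the load and inversely with the user's own capability.
    level = base * min(s.load_kg / 20.0, 1.0) * (1.0 - 0.5 * s.user_capability)
    return max(0.0, min(level, 1.0))


# A 15-kg static hold for a user with below-average capability:
print(assistance_level(SupportSituation("hold", load_kg=15.0, user_capability=0.4)))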
