Feed aggregator

Human-in-the-loop approaches can greatly enhance human–robot interaction by making the user an active part of the control loop who can provide feedback to the robot in order to augment its capabilities. Such feedback becomes even more important in situations where safety is of utmost concern, such as assistive robotics. This study aims to realize a human-in-the-loop approach in which the human provides feedback to a specific robot, namely a smart wheelchair, to augment its artificial sensory set, extending and improving its ability to detect and avoid obstacles. The feedback is provided through both a keyboard and a brain–computer interface; to this end, the work also included a protocol design phase to elicit and evoke human brain event-related potentials. The whole architecture was validated in a simulated robotic environment, with electroencephalography signals acquired from different test subjects.

While the potential of helical microrobots for biomedical applications, such as cargo transport, drug delivery, and micromanipulation, has been demonstrated, their viability for practical applications is hindered by the cost, speed, and repeatability of current fabrication techniques. Hence, this paper introduces a simple, low-cost, high-throughput manufacturing process for single-nickel-layer helical microrobots with consistent dimensions. Photolithography and electron-beam (e-beam) evaporation were used to fabricate 2D parallelogram patterns that were subsequently rolled up into helical microstructures through the swelling effect of a photoresist sacrificial layer. Helical parameters were controlled by adjusting the geometric parameters of the parallelogram patterns. To validate the fabrication process and characterize the microrobots’ mobility, we examined the structures and surface morphology of the microrobots using a scanning electron microscope and tested their steerability under feedback control. Finally, we conducted a benchmark comparison to demonstrate that the fabrication method can produce helical microrobots with swimming properties comparable to previously reported microrobots.
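
The pattern-to-helix mapping can be pictured with a simple rolled-ribbon sketch (a hedged geometric reading, not the paper's model): assume the parallelogram wraps onto a cylinder of radius R set by the swelling-induced stress, with its long edge (length L) inclined at angle θ to the rolling direction. Then

\[
  r = R, \qquad
  p = 2\pi R \tan\theta, \qquad
  n = \frac{L\cos\theta}{2\pi R},
\]

where r is the helix radius, p the pitch, and n the number of turns, so tuning the pattern's edge length and skew angle tunes the resulting helix.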

Damage detection is one of the critical challenges in operating soft robots in an industrial setting. In repetitive tasks, even a small cut or fatigue can propagate into extensive damage that halts the entire operation. Although research has shown that damage detection can be performed through an embedded sensor network, this approach leads to complicated sensorized systems with additional wiring and equipment, made using complex fabrication processes and often compromising the flexibility of the soft robotic body. In this paper, we instead propose a non-invasive approach for damage detection and localization on soft grippers. The essential idea is to track changes in the non-linear dynamics of a gripper due to possible damage, where minor changes in material and morphology lead to large differences in the force and torque feedback over time. To test this concept, we developed a classification model based on a bidirectional long short-term memory (biLSTM) network that discovers patterns of dynamics changes in force and torque signals measured at the mounting point. To evaluate this model, we employed a two-fingered Fin Ray gripper and collected data for 43 damage configurations. The experimental results show nearly perfect damage detection accuracy and 97% accuracy in damage localization. We also tested the effect of the gripper orientation and the length of the time-series data. By shaking the gripper at an optimal roll angle, the localization accuracy can exceed 95% and increases further with additional gripper orientations. The results also show that two periods of the gripper oscillation, i.e., roughly 50 data points, are enough to achieve a reasonable level of damage localization.
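
As a concrete illustration of the classification model described above, here is a minimal sketch in PyTorch; the 6-channel force/torque input, hidden size, 44 output classes (intact plus 43 damage configurations), and ~50-sample window are assumptions for illustration, not the authors' exact architecture.

# Hedged sketch: a biLSTM classifier over force/torque windows measured
# at the gripper mounting point. Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class DamageClassifier(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=44):
        super().__init__()
        # Bidirectional LSTM reads the signal window in both directions.
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.lstm(x)         # (batch, time, 2 * hidden)
        return self.head(out[:, -1])  # classify from the final time step

model = DamageClassifier()
window = torch.randn(8, 50, 6)        # ~two oscillation periods (~50 samples)
logits = model(window)                # (8, 44) scores over damage classes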



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Happy Thanksgiving, for those who celebrate it. Now spend 10 minutes watching a telepresence robot assemble a turkey sandwich.

[ Sanctuary ]

Ayato Kanada, an assistant professor at Kyushu University in Japan, wrote in to share "the world's simplest omnidirectional mobile robot."

We propose a palm-sized omnidirectional mobile robot with two torus wheels. A single torus wheel is made of an elastic elongated coil spring in which the two ends of the coil are connected to each other, and it is driven by a piezoelectric actuator (stator) that can generate 2-degree-of-freedom (axial and angular) motions. The stator converts its thrust force and torque into longitudinal and meridian motions of the torus wheel, respectively, making the torus work as an omnidirectional wheel on a plane.

[ Paper ]

Thanks, Ayato!

This work, entitled "Virtually turning robotic manipulators into worn devices: opening new horizons for wearable assistive robotics," proposes a novel hybrid system using a virtually worn robotic arm in augmented reality and a real robotic manipulator servoed on that virtual representation. We basically aim at creating the illusion of wearing a robotic system while its weight is fully deported. We believe that this approach could offer a solution to the critical challenge of weight and discomfort caused by robotic sensorimotor extensions (such as supernumerary robotic limbs (SRL), prostheses, or handheld tools) and open new horizons for the development of wearable robotics.

[ Paper ]

Thanks, Nathanaël!

Engineers at Georgia Tech are the first to study the mechanics of springtails, which leap in the water to avoid predators. The researchers learned how the tiny hexapods control their jump, self-right in midair, and land on their feet in the blink of an eye. The team used the findings to build penny-sized jumping robots.

[ Georgia Tech ]

Thanks, Jason!

The European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) have asked European space industries and research institutions to develop innovative technologies for the exploration of resources on the Moon in the framework of the ESA-ESRIC Space Resources Challenge. As part of the challenge, teams of engineers have developed vehicles capable of prospecting for resources in a test bed simulating the Moon's shaded polar regions. From 5 to 9 September 2022, the final of the ESA-ESRIC Space Resources Challenge took place at the Rockhal in Esch-sur-Alzette. On this occasion, lunar rover prototypes competed on a 1,800 m² 'lunar' terrain. The winning team will have the opportunity to have their technology implemented on the Moon.

[ ESA ]

Thanks, Arne!

If only cobots were as easy to use as this video from Kuka makes it seem.

The Kuka website doesn't say how much this thing costs, which means it's almost certainly not something that you impulse buy.

[ Kuka ]

We present the tensegrity aerial vehicle, a design of collision-resilient rotor robots with icosahedron tensegrity structures. With collision resilience and re-orientation ability, the tensegrity aerial vehicles can operate in cluttered environments without complex collision-avoidance strategies. These capabilities are validated by a test of an experimental tensegrity aerial vehicle operating with only onboard inertial sensors in a previously unknown forest.

[ HiPeR Lab ]

The robotics research group Brubotics and the polymer science and physical chemistry group FYSC of the University of Brussels have together developed self-healing materials that can be scratched, punctured, or completely cut through and then heal themselves, either with applied heat or even at room temperature.

[ Brubotics ]

Apparently, the World Cup needs more drone footage, because this is kinda neat.

[ DJI ]

Researchers at MIT's Center for Bits and Atoms have made significant progress toward creating robots that could build nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

[ MIT ]

Researchers from North Carolina State University have recently developed a fast and efficient soft robotic swimmer whose swimming resembles the human butterfly-stroke style. It can achieve a high average swimming speed of 3.74 body lengths per second, close to five times faster than the fastest similar soft swimmers, as well as high power efficiency with a low cost of energy.

[ NC State ]

Because high-extension, lightweight robot manipulators are easier to transport and can reach substantially farther than traditional serial-chain manipulators, they can facilitate sensing and physical interaction in remote and/or constrained environments. We propose a novel planar 3-degree-of-freedom manipulator that achieves low weight and high extension through the use of a pair of spooling bistable tapes, commonly used in self-retracting tape measures, which are pinched together to form a reconfigurable revolute joint.

[ Charm Lab ]

SLURP!

[ River Lab ]

This video may encourage you to buy a drone. Or a snowmobile.

[ Skydio ]

Moxie is getting an update for the holidays!

[ Embodied ]

Robotics professor Henny Admoni answers the internet's burning questions about robots! How do you program a personality? Can robots pick up a single M&M? Why do we keep making humanoid robots? What is Elon Musk's goal for the Tesla Optimus robot? Will robots take over my job writing video descriptions...I mean, um, all our jobs? Henny answers all these questions and much more.

[ CMU ]

This GRASP on Robotics talk is from Julie Adams at Oregon State University, on “Towards Adaptive Human-Robot Teams: Workload Estimation.”

The ability of robots, be it a single robot, multiple robots, or a robot swarm, to adapt to the humans with which they are teamed requires algorithms that allow robots to detect human performance in real time. The multi-dimensional workload algorithm incorporates physiological metrics to estimate overall workload and its components (i.e., cognitive, speech, auditory, visual, and physical). The algorithm is sensitive to changes in a human’s individual workload components and overall workload across domains, human-robot teaming relationships (i.e., supervisory, peer-based), and individual differences. The algorithm has also been demonstrated to detect shifts in workload in real time in order to adapt the robot’s interaction with the human and autonomously change task responsibilities when the human is over- or underloaded. Recently, the algorithm was used for post-hoc analysis of the workload of a single human deploying a heterogeneous robot swarm in an urban environment. Current efforts focus on predicting the human’s future workload, recognizing the human’s current tasks, and estimating workload for previously unseen tasks.

[ UPenn ]



Dynamic hopping maneuvers using mechanical actuation are proposed as a method of locomotion for free-flyer vehicles near or on large space structures. Such maneuvers are of interest for applications related to proximity maneuvers, observation, cargo carrying, fabrication, and sensor data collection. This study describes a set of dynamic hopping maneuver experiments performed using two Astrobees. Both vehicles initially grasped a common free-floating handrail. From this initial condition, the active Astrobee launched itself using mechanical actuation of its robotic arm manipulator. Results are presented from ground and flight experimental sessions completed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School and the Intelligent Robotics Group facility at NASA Ames Research Center, as well as from hopping maneuvers performed aboard the International Space Station. Overall, this study demonstrates that locomotion through mechanical actuation can successfully launch a free-flyer vehicle along a desired initial trajectory from another object of similar size and mass.

Tele-manipulation is indispensable for the nuclear industry, since teleoperated robots remove the radiation hazard for the operator. The majority of teleoperated solutions used in the nuclear industry rely on bilateral teleoperation, utilizing a variation of the 4-channel architecture, in which the motion and force signals of the local and remote robots are exchanged over the communication channel. However, the performance limitations of teleoperated robots for nuclear decommissioning tasks have not been clearly established in the literature. In this study, we assess task performance in bilateral tele-manipulation for radiation surveying in gloveboxes and compare it to that of a glovebox operator performing the survey manually. To analyze the performance, we designed an experimental setup suitable for both manual (human) operation and tele-manipulation. Our results showed that a current commercial off-the-shelf (COTS) teleoperated robotic manipulation solution is flexible yet insufficient, as its task performance is significantly lower than that of manual operation and it is potentially hazardous to the equipment inside the glovebox. Finally, we propose a set of potential solutions, derived from both our observations and expert interviews, that could improve the performance of teleoperation systems in glovebox environments in future work.



While being able to drive the ball 300 yards might get the fans excited, a solid putting game is often what separates a golf champion from the journeymen. A robot built by German researchers is quickly becoming a master of this short game using a clever combination of classical control engineering and machine learning.

In golf tournaments, players often scout out the greens the day beforehand to think through how they are going to play their shots, says Annika Junker, a doctoral student at Paderborn University in Germany. So she and her colleagues decided to see if giving a robot similar capabilities could help it to sink a putt from anywhere on the green, without assistance from a human.

Golfi, as the team has dubbed their creation, uses a 3D camera to take a snapshot of the green, which it then feeds into a physics-based model to simulate thousands of random shots from different positions. These are used to train a neural network that can then predict exactly how hard and in what direction to hit a ball to get it in the hole, from anywhere on the green.

On the green, Golfi was successful six or seven times out of ten.

Like even the best pros, it doesn’t get a hole in one every time. The goal isn’t really to build a tournament winning golf robot though, says Junker, but to demonstrate the power of hybrid approaches to robotic control. “We try to combine data-driven and physics based methods and we searched for a nice example, which everyone can easily understand,” she says. “It's only a toy for us, but we hope to see some advantages of our approach for industrial applications.”

So far, the researchers have only tested their approach on a small mock-up green inside their lab. The robot, which is described in a paper due to be presented at the IEEE International Conference on Robotic Computing in Italy next month, navigates its way around the two-meter-square space on four wheels, two of which are powered. Once in position, it uses a belt-driven gear shaft with a putter attached to the end to strike the ball toward the hole.

First though, it needs to work out what shot to play given the position of the ball. The researchers begin by using a Microsoft Kinect 3D camera mounted on the ceiling to capture a depth map of the green. This data is then fed into a physics-based model, alongside other parameters like the rolling resistance of the turf, the weight of the ball and its starting velocity, to simulate three thousand random shots from various starting points.
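
To make the data-generation step concrete, here is a hedged sketch of such a physics rollout; the rolling-resistance value, the hole-capture rule, and the slope_fn interface are illustrative assumptions, not the paper's model.

# Hedged sketch: roll out one simulated putt on a green whose local slope
# comes from the depth map. Parameters are illustrative assumptions.
import numpy as np

G, MU, DT = 9.81, 0.13, 0.001     # gravity, rolling resistance, time step

def simulate_putt(pos, vel, slope_fn, hole, hole_radius=0.054):
    """Integrate until the ball stops; return True if it reaches the hole."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    while np.linalg.norm(vel) > 1e-3:
        speed = np.linalg.norm(vel)
        # Rolling resistance opposes motion; slope_fn(pos) returns the local
        # downhill acceleration (from the depth-map gradient).
        acc = -MU * G * vel / speed + slope_fn(pos)
        vel += acc * DT
        pos += vel * DT
        if np.linalg.norm(pos - hole) < hole_radius:
            return True               # simplification: captured at any speed
    return False

flat = lambda p: np.zeros(2)          # flat green for a quick test
print(simulate_putt([0, 0], [2.0, 0.27], flat, hole=[1.5, 0.2]))

Repeating such rollouts from random start positions, and recording which initial velocities hole out, yields the labeled shots used for training.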

[ Golfi video: youtu.be ]

This data is used to train a neural network that can predict how hard and in what direction to hit the ball to get it in the hole from anywhere on the green. While it’s possible to solve this problem by combining the physics based model with classical optimization, says Junker, it’s far more computationally expensive. And training the robot on simulated golf shots takes just five minutes, compared to around 30 to 40 hours if they collected data on real-world strokes, she adds.
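
A minimal sketch of that offline training step, assuming plain supervised regression from the ball's position (relative to the hole) to a holing hit vector; the network size and the synthetic stand-in data are assumptions for illustration.

# Hedged sketch: fit a small network on simulated shots so that it predicts
# a hit vector (vx, vy) from the ball's position. Data here is a stand-in.
import torch
import torch.nn as nn

positions = torch.rand(3000, 2) * 2.0      # stand-in for sampled start points
hit_vectors = torch.rand(3000, 2)          # stand-in for holing hit vectors

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(200):                   # the whole dataset is synthetic,
    opt.zero_grad()                        # so training takes minutes
    loss = nn.functional.mse_loss(net(positions), hit_vectors)
    loss.backward()
    opt.step()

aim = net(torch.tensor([[0.8, -0.4]]))     # predicted (vx, vy) for one lie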

Before it can make its shot, though, the robot first has to line its putter up with the ball just right, which requires it to work out where on the green both it and the ball are. To do so, it uses a neural network that has been trained to spot golf balls, and a hard-coded object-detection algorithm that picks out colored dots on the top of the robot to work out its orientation. This positioning data is then combined with a physical model of the robot and fed into an optimization algorithm that works out how to control its wheel motors to navigate to the ball.
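
For the orientation step, here is a hedged sketch of how two detected dots could be turned into a planar pose; the dot layout and the pixel coordinates are assumptions for illustration.

# Hedged sketch: planar pose of the robot from two colored dots detected
# in the overhead camera image. Dot layout is an illustrative assumption.
import math

def robot_pose(front_dot, rear_dot):
    """Midpoint gives position; the rear-to-front vector gives heading."""
    (fx, fy), (rx, ry) = front_dot, rear_dot
    x, y = (fx + rx) / 2.0, (fy + ry) / 2.0
    theta = math.atan2(fy - ry, fx - rx)   # heading in the image frame
    return x, y, theta

print(robot_pose((320, 180), (300, 240))) # pixel coordinates, for example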

Junker admits that the approach isn’t flawless. The current setup relies on a bird’s-eye view, which would be hard to replicate on a real golf course, and switching to cameras on the robot would present major challenges, she says. The researchers also didn’t report how often Golfi successfully sinks the putt in their paper, because the figures were thrown off by the fact that it occasionally drove over the ball, knocking it out of position. When that didn’t happen, though, Junker says it was successful six or seven times out of ten, and since they submitted the paper a colleague has reworked the navigation system to avoid the ball.

Golfi isn’t the first machine to try its hand at the sport. In 2016, a robot called LDRIC hit a hole-in-one at Arizona's TPC Scottsdale course, and several devices have been built to test out golf clubs. But Noel Rousseau, a golf coach with a PhD in motor learning, says that typically they require an operator painstakingly setting them up for each shot, and any adjustments take considerable time. “The most impressive part to me is that the golf robot is able to find the ball, sight the hole and move itself into position for an accurate stroke,” he says.

Beyond mastering putting, the hope is that the underlying techniques the researchers have developed could translate to other robotics problems, says Niklas Fittkau, a doctoral student at Paderborn University and co-lead author of the paper. “You can also transfer that to other problems, where you have some knowledge about the system and could model parts of it to obtain some data, but you can’t model everything,” he says.





All things considered, we humans are kind of big, which is very limiting to how we can comfortably interact with the world. The practical effect of this is that we tend to prioritize things that we can see and touch and otherwise directly experience, even if those things are only a small part of the world in which we live. A recent study conservatively estimates that there are 2.5 million ants for every one human on Earth. And that’s just ants. There are probably something like 7 million different species of terrestrial insects, and humans have only even noticed like 10 percent of them. The result of this disconnect is that when (for example) insect populations around the world start to crater, it takes us much longer to first notice, care, and act.

To give the small scale the attention that it deserves, we need a way of interacting with it. In a paper recently published in Scientific Reports, roboticists from Ritsumeikan University in Japan demonstrate a haptic teleoperation system that connects a human hand on one end with microfingers on the other, letting the user feel what it’s like to give a pill bug a tummy rub.

At top, a microfinger showing the pneumatic balloon actuator (PBA) and liquid metal strain gauge. At bottom left, when the PBA is deflated, the microfinger is straight. At bottom right, inflating the PBA causes the finger to bend downwards.

These microfingers are just 12 millimeters long, 3 mm wide, and 490 microns (μm) thick. Inside of each microfinger is a pneumatic balloon actuator, which is just a hollow channel that can be pressurized with air. Since the channel is on the top of the microfinger, when the channel is inflated, it bulges upward, causing the microfinger to bend down. When pressure is reduced, the microfinger returns to its original position. Separate channels in the microfinger are filled with liquid metal, and as the microfinger bends, the channels elongate, thinning out the metal. By measuring the resistance of the metal, you can tell how much the finger is being bent. This combination of actuation and force sensing means that a human-size haptic system can be used as a force feedback interface: As you move your fingers, the microfingers will move, and forces can be transmitted back to you, allowing you to feel what the microfingers feel.
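
The reason resistance tracks bending follows from the standard liquid-metal strain-gauge argument, assuming an incompressible conductor of resistivity ρ and fixed volume V = AL (a hedged gloss, not taken from the paper):

\[
  R = \frac{\rho L}{A} = \frac{\rho L^{2}}{V}
  \quad\Rightarrow\quad
  \frac{R}{R_{0}} = \left(\frac{L}{L_{0}}\right)^{2}
  = (1+\varepsilon)^{2} \approx 1 + 2\varepsilon ,
\]

so a small stretch ε of the channel shows up as roughly twice that fractional rise in resistance, which a calibration curve can map back to bend angle.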

The microfingers (left) can be connected to a haptic feedback and control system for use by a human.

Fans of the golden age of science fiction will recognize this system as a version of Waldo F. Jones' Synchronous Reduplicating Pantograph, although the concept has even deeper roots in sci-fi:

The thought suddenly struck me: I can make micro hands for my little hands. I can make the same gloves for them as I did for my living hands, use the same system to connect them to the handles ten times smaller than my micro arms, and then ... I will have real micro arms, they will chop my movements two hundred times. With these hands I will burst into such a smallness of life that they have only seen, but where no one else has disposed of their own hands. And I got to work.

With their very real and not science fiction system, the researchers were able to successfully determine that pill bugs can exert about 10 micro-Newtons of force through their legs, which is about the same as what has been estimated using other techniques. This is just a proof of concept study, but I’m excited about the potential here, because there is still so much of the world that humans haven’t yet been able to really touch. And besides just insect-scale tickling, there’s a broader practical context here around the development of insect-scale robots. Insects have had insect-scale sensing and mobility and whatnot pretty well figured out for a long time now, and if we’re going to make robots that can do insect-like things, we’re going to do it by learning as much as we can directly from insects themselves.

“With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug—something that has been impossible to achieve previously. We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales.”
—Satoshi Konishi, Ritsumeikan University

I should also be clear that despite the headline, I don’t know if it’s actually possible to tickle a bug. A Google search for “are insects ticklish” turns up one single result, from someone asking this question on the "StonerThoughts" subreddit. There is some suggestion that tickling, or more specifically the kind of tickling that is surprising and can lead to laughter called gargalesis, has evolved in social mammals to promote bonding. The other kind of tickling is called knismesis, which is more of an unpleasant sensation that causes irritation or distress. You know, like the feeling of a bug crawling on you. It seems plausible (to me, anyway) that bugs may experience some kind of knismesis—but I think that someone needs to get in there and do some science, especially now that we have the tools to make it happen.



An arboreal mammal such as a squirrel can, amazingly, lock its head (and thus its eyes) toward a fixed spot for a safe landing while its body is tumbling in the air after being unexpectedly thrown. This impressive body motion control has been shown in a recent YouTube video, which has amazed the public with over 100 million views. In the video, a squirrel attracted to food crawled onto an ejection device and was unknowingly ejected into the air. During the resulting projectile flight, the squirrel managed to quickly turn its head (eyes) toward the landing spot and then kept staring at it until it safely landed on its feet. Understanding the underlying dynamics and how the squirrel performs this behavior can inspire robotics researchers to develop bio-inspired control strategies for challenging robotic operations, such as hopping/jumping robots operating in an unstructured environment. To study this problem, we implemented a 2D multibody dynamics model, which simulated the dynamic motion behavior of the main body segments of a squirrel in a vertical motion plane. The inevitable physical contact between the body segments is also modeled and simulated. We then introduced two motion control methods aimed at locking the body representing the head of the squirrel toward a globally fixed spot while the other body segments of the squirrel undergo a general 2D rotation and translation. One of the control methods is a conventional proportional-derivative (PD) controller, and the other is a reinforcement learning (RL)-based controller. Our simulation-based experiments show that both controllers can achieve the intended control goal, quickly turning and then locking the head toward a globally fixed spot under any feasible initial motion conditions. In comparison, the RL-based method is more robust against random noise in sensor data and also more robust to unexpected initial conditions.
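
As a concrete illustration of the PD variant described above, here is a minimal sketch; the gains, head inertia, and the decoupling of the head from the tumbling body are illustrative assumptions, not the study's model.

# Hedged sketch: PD torque law that turns the head's gaze toward a fixed
# landing spot and holds it there. Parameters are illustrative assumptions;
# coupling to the tumbling body segments is omitted.
import math

KP, KD, DT = 0.5, 0.05, 0.001      # PD gains and integration step
I_HEAD = 1e-3                      # head inertia about the neck (kg*m^2)

def head_torque(theta, omega, head_xy, spot_xy):
    """PD law on the wrapped gaze-angle error toward the fixed spot."""
    target = math.atan2(spot_xy[1] - head_xy[1], spot_xy[0] - head_xy[0])
    err = math.atan2(math.sin(target - theta), math.cos(target - theta))
    return KP * err - KD * omega   # spring toward target, damp rotation

theta, omega = 2.0, -5.0           # head angle and rate at launch (tumbling)
for _ in range(2000):              # 2 s of simulated flight
    tau = head_torque(theta, omega, head_xy=(0.0, 1.0), spot_xy=(1.0, 0.0))
    omega += tau / I_HEAD * DT
    theta += omega * DT            # theta settles at the gaze angle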

Driven by the aim of realizing functional robotic systems at the milli- and submillimetre scale for biomedical applications, the area of magnetically driven soft devices has received significant recent attention. This has resulted in a new generation of magnetically controlled soft robots with patterns of embedded, programmable domains throughout their structures. This type of programmable magnetic profiling equips magnetic soft robots with programmable shape memory and can be achieved through the distribution of discrete domains (voxels) with variable magnetic densities and magnetization directions. This approach has produced highly compliant, and often bio-inspired, structures that are well suited to biomedical applications at small scales, including microfluidic transport and shape-forming surgical catheters. However, to unlock the full potential of magnetic soft robots with improved designs and control, significant challenges remain in their compositional optimization and fabrication. This review considers recent advances and challenges in the interlinked optimization and fabrication aspects of programmable domains within magnetic soft robots. Through a combination of improvements in the computational capacity of novel optimization methods with advances in the resolution, material selection and automation of existing and novel fabrication methods, significant further developments in programmable magnetic soft robots may be realized.

One of the possible benefits of robot-mediated education is that the robot can become a catalyst between people and facilitate learning. In this study, the authors focused on an asynchronous active learning method mediated by robots. Active learning is believed to help students continue learning and develop the ability to think independently. The authors therefore improved the UGA (User Generated Agent) system they had created for long-term active learning during the COVID-19 pandemic, creating an environment where children introduce books to each other via robots. The authors installed the robot in an elementary school and conducted an experiment lasting more than a year. The results confirmed that the children continued to use the robot over this long period without losing interest. The authors also analyzed how the children created content, focusing on the contributions with particularly high view counts. In particular, they observed changes in children’s behavior, such as spontaneous advertising activities, guidance from upperclassmen to lowerclassmen, collaboration among multiple children, and increased interest in technology, even under conditions where COVID-19 was spreading and children’s social interaction was restricted.



It’s been a couple of years, but the IEEE Spectrum Robot Gift Guide is back for 2022! We’ve got all kinds of new robots, and right now is an excellent time to buy one (or a dozen), since many of them are on sale this week. We’ve tried to focus on consumer robots that are actually available (or that you can at least order), but depending on when you’re reading this guide, the prices we have here may not be up to date, and we’re not taking shipping into account.

And if these robots aren’t enough for you, many of our picks from years past are still available: check out our guides from 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. And as always, if you have suggestions that you’d like to share, post a comment to help the rest of us find the perfect robot gift.

Lego Robotics Kits

Lego has decided to discontinue its classic Mindstorms robotics kits, but they’ll be supported for another couple of years and this is your last chance to buy one. If you like Lego’s approach to robotics education but don’t want to invest in a system at the end of its life, Lego also makes an education kit called Spike that shares many of the hardware and software features for students in grades 6 to 8.

$360–$385
Lego
Sphero Indi

Indi is a clever educational robot designed to teach problem solving and screenless coding to kids as young as 4, using a small wheeled robot with a color sensor and a system of colored strips that command the robot to do different behaviors. There’s also an app to access more options, and Sphero has more robots to choose from once your kid is ready for something more.

$110
Sphero | Amazon
Nybble and Bittle

Petoi’s quadrupedal robot kits are an adorable (and relatively affordable) way to get started with legged robotics. Whether you go with Nybble the cat or Bittle the dog, you get to do some easy hardware assembly and then leverage a bunch of friendly software tools to get your little legged friend walking around and doing tricks.

$220–$260
Petoi
iRobot Root

Root educational robots have a long and noble history, and iRobot has built on that to create an inexpensive platform to help kids learn to code starting as young as age 4. There are two different versions of Root; the more expensive one includes an RGB sensor, a programmable eraser, and the ability to stick to vertical whiteboards and move around on them.

$100–$250
iRobot
TurtleBot 4

The latest generation of TurtleBot from Clearpath, iRobot, and Open Robotics is a powerful and versatile ROS (Robot Operating System) platform for research and product development. For aspiring roboticists in undergrad and possibly high school, the TurtleBot 4 is just about as good as it gets unless you want to spend an order of magnitude more. And the fact that TurtleBots are used so extensively means that if you need some help, the ROS community will (hopefully) have your back.

$1,200–$1,900
RoboShop
iRobot Create 3

Newly updated just last year, iRobot's Create 3 is the perfect platform for folks who want to build their own robot, but not all of their own robot. The rugged mobile base is essentially a Roomba without the cleaning parts, and it's easy to add your own hardware on top. It runs ROS 2, but you can get started with Python.

$300
iRobot
Mini Pupper

Mini Pupper is one of the cutest ways of getting started with ROS. This legged robot is open source, and runs ROS on a Raspberry Pi, which makes it extra affordable if you have your own board lying around. Even if you don’t, though, the Mini Pupper kit is super affordable for what you get, and is a fun hardware project if you decide to save a little extra cash by assembling it yourself.

$400–$585
MangDang
Luxonis Rae

I’m not sure whether the world is ready for ROS 2 yet, but you can get there with Rae, which combines a pocket-size mobile robot with a pair of depth cameras and an onboard computer, shockingly cheaply. App support means that Rae can do cool stuff out of the box, but it’s easy to get more in-depth with it too. Rae will get delivered early next year, but it’s cool enough that we think a Kickstarter IOU is a perfectly acceptable gift.

$400
Kickstarter
Roomba Combo j7+

iRobot’s brand new top-of-the-line fully autonomous vacuuming and wet-mopping combo j7+ Roomba will get your floors clean and shiny, except for carpet, which it’s smart enough to not try to shine because it’ll cleverly lift the wet mop up out of the way. It’s also cloud connected and empties itself. You’ll have to put water in it if you want it to mop, but that’s way better than mopping yourself.

$900
iRobot
Neato D9

Neato’s robots might not be quite as pervasive as the Roomba, but they’re excellent vacuums, and they use a planar lidar system for obstacle avoidance and map making. The nice thing about lidar (besides the fact that it works in total darkness) is that Neato robots have no cameras at all and are physically incapable of collecting imagery of you or your home.

$300
Neato Robotics
Tertill

How often do you find an affordable, useful, reliable, durable, fully autonomous home robot? Not often! But Tertill is all of these things: powered entirely by the sun, it slowly prowls around your garden, whacking weeds as they sprout while avoiding your mature plants. All you have to do is make sure it can’t escape, then just let it loose and forget about it for months at a time.

$200
Tertill
Amazon Astro

If you like the idea of having a semi-autonomous mobile robot with a direct link to Amazon wandering around your house trying to be useful, then Amazon’s Astro might not sound like a terrible idea. You’ll have to apply for one, and it sounds like it’s more like a beta program, but could be fun, I guess?

$1,000
Amazon
Skydio 2+

The Skydio 2+ is an incremental (but significant) update to the Skydio 2 drone, with its magically cutting-edge obstacle avoidance and extremely impressive tracking skills. There are many drones out there that are cheaper and more portable, and if flying is your thing, get one of those. But if filming is your thing, the Skydio 2+ is the drone you want to fly.

$900
Skydio
DJI FPV

We had a blast flying DJI’s FPV drone. The VR system is exhilarating and the drone is easy to fly even for FPV beginners, but it’s powerful enough to grow along with your piloting skills. Just don’t get cocky, or you’ll crash it. Don’t ask me how I know this.

$900
DJI
ElliQ

ElliQ is an embodied voice assistant that is a lot more practical than a smart speaker. It's designed for older adults who may spend a lot of time alone at home, and can help with a bunch of things, including health and wellness tasks and communicating with friends and family. ElliQ costs $250 up front, plus a subscription of between $30 and $40 per month.

$250+
ElliQ
Moxie

Not all robots for kids are designed to teach them to code: Moxie “supports social-emotional development in kids through play.” The carefully designed and curated interaction between Moxie and children helps them to communicate and build social skills in a friendly and engaging way. Note that Moxie also requires a subscription fee of $40 per month.

$800
Embodied
Petit Qoobo

What is Qoobo? It is “a tailed cushion that heals your heart,” according to the folks that make it. According to us, it’s a furry round pillow that responds to your touch by moving its tail, sort of like a single-purpose cat. It’s fuzzy tail therapy!

$130
Qoobo | Amazon
Unitree Go1

Before you decide on a real dog, consider the Unitree Go1 instead. Sure it’s expensive, but you know what? So are real dogs. And unlike with a real dog, you only have to walk the Go1 when you feel like it, and you can turn it off and stash it in a closet or under a bed whenever you like. For a fully featured dynamic legged robot, it’s staggeringly cheap, just keep in mind that shipping is $1,000.

$2,700
Unitree



















It’s been a couple of years, but the IEEE Spectrum Robot Gift Guide is back for 2022! We’ve got all kinds of new robots, and right now is an excellent time to buy one (or a dozen), since many of them are on sale this week. We’ve tried to focus on consumer robots that are actually available (or that you can at least order), but depending on when you’re reading this guide, the prices we have here may not be up to date, and we’re not taking shipping into account.

And if these robots aren’t enough for you, many of our picks from years past are still available: check out our guides from 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. And as always, if you have suggestions that you’d like to share, post a comment to help the rest of us find the perfect robot gift.

Lego Robotics Kits

Lego has decided to discontinue its classic Mindstorms robotics kits, but they’ll be supported for another couple of years and this is your last chance to buy one. If you like Lego’s approach to robotics education but don’t want to invest in a system at the end of its life, Lego also makes an education kit called Spike that shares many of the hardware and software features for students in grades 6 to 8.

$360–$385
Lego Sphero Indi

Indi is a clever educational robot designed to teach problem solving and screenless coding to kids as young as 4, using a small wheeled robot with a color sensor and a system of colored strips that command the robot to do different behaviors. There’s also an app to access more options, and Sphero has more robots to choose from once your kid is ready for something more.

$110
Sphero | Amazon Nybble and Bittle

Petoi’s quadrupedal robot kits are an adorable (and relatively affordable) way to get started with legged robotics. Whether you go with Nybble the cat or Bittle the dog, you get to do some easy hardware assembly and then leverage a bunch of friendly software tools to get your little legged friend walking around and doing tricks.

$220–$260
Petoi iRobot Root

Root educational robots have a long and noble history, and iRobot has built on that to create an inexpensive platform to help kids learn to code starting as young as age 4. There are two different versions of Root; the more expensive one includes an RGB sensor, a programmable eraser, and the ability to stick to vertical whiteboards and move around on them.

$100–$250
iRobot
TurtleBot 4

The latest generation of TurtleBot from Clearpath, iRobot, and Open Robotics is a powerful and versatile ROS (Robot Operating System) platform for research and product development. For aspiring roboticists in undergrad and possibly high school, the Turtlebot 4 is just about as good as it gets unless you want to spend an order of magnitude more. And the fact that TurtleBots are used so extensively means that if you need some help, the ROS community will (hopefully) have your back.

$1,200–$1,900
RoboShop iRobot Create 3

Newly updated just last year, iRobot's Create 3 is the perfect platform for folks who want to build their own robot, but not all of their own robot. The rugged mobile base is essentially a Roomba without the cleaning parts, and it's easy to add your own hardware on top. It runs ROS 2, but you can get started with Python.

$300
iRobot
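
Speaking of getting started: as a concrete taste of driving a base like the Create 3 from code, here's a minimal sketch of our own (not iRobot's sample code), which assumes the robot exposes its default /cmd_vel velocity topic over ROS 2:

```python
# Minimal sketch (ours, not iRobot's): drive the base forward over ROS 2.
# Assumes the robot subscribes to the default /cmd_vel topic (geometry_msgs/Twist).
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class Create3Driver(Node):
    def __init__(self):
        super().__init__('create3_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.1  # gentle 0.1 m/s forward crawl
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(Create3Driver())  # Ctrl-C to stop


if __name__ == '__main__':
    main()
```

Mini Pupper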

Mini Pupper is one of the cutest ways of getting started with ROS. This legged robot is open source, and runs ROS on a Raspberry Pi, which makes it extra affordable if you have your own board lying around. Even if you don’t, though, the Mini Pupper kit is super affordable for what you get, and is a fun hardware project if you decide to save a little extra cash by assembling it yourself.

$400–$585
MangDang
Luxonis Rae

I’m not sure whether the world is ready for ROS 2 yet, but you can get there with Rae, which combines a pocket-size mobile robot, a pair of depth cameras, and an onboard computer at a shockingly low price. App support means that Rae can do cool stuff out of the box, but it’s easy to get more in-depth with it too. Rae won’t be delivered until early next year, but it’s cool enough that we think a Kickstarter IOU is a perfectly acceptable gift.

$400
Kickstarter
Roomba Combo j7+

iRobot’s brand-new, top-of-the-line Combo j7+ Roomba will autonomously vacuum and wet-mop your floors until they’re clean and shiny, except for carpet, which it’s smart enough not to mop: it cleverly lifts the wet mop up out of the way. It’s also cloud connected and empties itself. You’ll have to put water in it if you want it to mop, but that’s way better than mopping yourself.

$900
iRobot
Neato D9

Neato’s robots might not be quite as pervasive as the Roomba, but they’re excellent vacuums, and they use a planar lidar system for obstacle avoidance and map making. The nice thing about lidar (besides the fact that it works in total darkness) is that Neato robots have no cameras at all and are physically incapable of collecting imagery of you or your home.

$300
Neato Robotics
Tertill

How often do you find an affordable, useful, reliable, durable, fully autonomous home robot? Not often! But Tertill is all of these things: powered entirely by the sun, it slowly prowls around your garden, whacking weeds as they sprout while avoiding your mature plants. All you have to do is make sure it can’t escape, then just let it loose and forget about it for months at a time.

$200
Tertill
Amazon Astro

If you like the idea of having a semi-autonomous mobile robot with a direct link to Amazon wandering around your house trying to be useful, then Amazon’s Astro might not sound like a terrible idea. You’ll have to apply for one, and it sounds like it’s more of a beta program than a finished product, but it could be fun, I guess?

$1,000
Amazon
Skydio 2+

The Skydio 2+ is an incremental (but significant) update to the Skydio 2 drone, with its magically cutting-edge obstacle avoidance and extremely impressive tracking skills. There are many drones out there that are cheaper and more portable, and if flying is your thing, get one of those. But if filming is your thing, the Skydio 2+ is the drone you want to fly.

$900
Skydio
DJI FPV

We had a blast flying DJI’s FPV drone. The VR system is exhilarating and the drone is easy to fly even for FPV beginners, but it’s powerful enough to grow along with your piloting skills. Just don’t get cocky, or you’ll crash it. Don’t ask me how I know this.

$900
DJI
ElliQ

ElliQ is an embodied voice assistant that is a lot more practical than a smart speaker. It's designed for older adults who may spend a lot of time alone at home, and can help with a bunch of things, including health and wellness tasks and communicating with friends and family. ElliQ costs $250 up front, plus a subscription of between $30 and $40 per month.

$250+
ElliQ
Moxie

Not all robots for kids are designed to teach them to code: Moxie is designed to “support social-emotional development in kids through play.” The carefully designed and curated interaction between Moxie and children helps them to communicate and build social skills in a friendly and engaging way. Note that Moxie also requires a subscription fee of $40 per month.

$800
Embodied
Petit Qoobo

What is Qoobo? It is “a tailed cushion that heals your heart,” according to the folks that make it. According to us, it’s a furry round pillow that responds to your touch by moving its tail, sort of like a single-purpose cat. It’s fuzzy tail therapy!

$130
Qoobo | Amazon
Unitree Go1

Before you decide on a real dog, consider the Unitree Go1 instead. Sure, it’s expensive, but you know what? So are real dogs. And unlike with a real dog, you only have to walk the Go1 when you feel like it, and you can turn it off and stash it in a closet or under a bed whenever you like. For a fully featured dynamic legged robot, it’s staggeringly cheap; just keep in mind that shipping is $1,000.

$2,700
Unitree

The National Institute of Standards and Technology is developing performance tests and associated artifacts to benchmark research in the area of robotic assembly. Sets of components consistent with mechanical assemblies, including screws, gears, electrical connectors, wires, and belts, are configured for assembly or disassembly using a task board concept. Test protocols accompany the task boards and are designed to mimic the low-volume, high-mix assembly challenges typical of small- and medium-sized manufacturers. In addition to the typical rigid components found in assembled products, the task boards include many non-rigid component operations representative of wire harness and belt drive assemblies, to support research in the grasping and manipulation of deformable objects, still considered an emerging research problem in robotics. A set of four primary task boards as well as competition task boards are presented as benchmarks, along with scoring metrics and a method to compare robot system assembly times with human performance. Competitions are used to raise awareness of these benchmarks. Tools to advance and compare research are described, with emphasis placed on competition-based solutions for grasping and manipulating deformable task board components.
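
To make the last point concrete, here is a deliberately simple, hypothetical sketch of how a robot-versus-human assembly time comparison could be expressed; the function name and its normalization are ours for illustration, not NIST's official scoring metric.

```python
# Illustrative sketch (hypothetical, not NIST's official scoring code):
# compare a robot's task-board assembly time against a human baseline.
def relative_assembly_time(robot_time_s: float, human_time_s: float) -> float:
    """Robot-to-human time ratio: 1.0 matches the human; >1.0 is slower."""
    if human_time_s <= 0 or robot_time_s <= 0:
        raise ValueError("times must be positive")
    return robot_time_s / human_time_s


# Example: a robot that finishes a board in 840 s against a 300 s human
# baseline scores 2.8, i.e., it is 2.8x slower than the human benchmark.
print(relative_assembly_time(840.0, 300.0))  # 2.8
```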



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, Auckland, New Zealand

Enjoy today's videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its own height; traverse rocky, slippery, uneven, steep, and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research, and it will be further extended with soles as well as upper limbs. In this video, the current lower-limb version of the platform shows its capability to traverse uneven terrain without any active or passive ankle joint. This underactuated nature of the system is well addressed by our locomotion control framework, which also provides a new perspective on the leg design of bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup "dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment." Hot off the ANA Avatar XPRIZE competition, they're doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: it seems like fiction, and yet researchers from the Vrije Universiteit Brussel and imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022, autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, it gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references; however, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (e.g., backflips) from only hand-held human demonstrations.

WASABI!

[ WASABI ]
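
If you're curious how the trick works, here's a heavily simplified conceptual sketch of ours (not the authors' code): a Wasserstein-style critic is trained to separate expert transitions from the policy's own, and the critic's score on the policy's transitions is then used as the imitation reward. The gradient penalty and the surrounding RL loop are omitted.

```python
# Conceptual sketch of a Wasserstein-style adversarial imitation reward
# (our simplification, not the WASABI authors' code; gradient penalty omitted).
import torch
import torch.nn as nn


class TransitionCritic(nn.Module):
    """Scores (state, next_state) pairs; higher means more expert-like."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)


def critic_loss(critic, expert_s, expert_s_next, policy_s, policy_s_next):
    # Push expert transition scores up and policy transition scores down.
    return (critic(policy_s, policy_s_next).mean()
            - critic(expert_s, expert_s_next).mean())


def style_reward(critic, s, s_next):
    # The critic's score on the policy's own transitions becomes the
    # imitation reward handed to the RL algorithm.
    with torch.no_grad():
        return critic(s, s_next)
```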

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn't clear.

[ Sanctuary ]

I don't know what's going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the Moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the Moon, as well as to have the rover survive an entire night on the Moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning barcodes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and oddly shaped, small health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics' Spot® robot, the ExynPak™ powered by ExynAI™ and the Trimble® X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity, enhanced sensors including an improved color sensor, and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I'm glad Starship is taking this seriously, although it's hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large factor in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing the impact it has on manual assembly.

[ Paper ]

Drone racing world champion Thomas Bitmatta designed an indoor drone racing track for ETH Zurich's autonomous high speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec's paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn was tasked with explaining this concept to five different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, Chief Science Officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans, to its manifestations and prospects in the tools we create and use.

[ UMich ]
