Feed aggregator



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Trees play clear and crucial roles in many ecosystems—whether they’re providing shade on a sunny day, serving as a home for a family of owls, or cycling carbon dioxide out of the air. But forests are diminishing through exploitative logging practices as well as the process of desertification, which turns grassland and shrubland arid.

Restoring biodiversity to these areas through growing new trees is a crucial first step, but planting seedlings in these arid environments can be both time- and labor-intensive. To address this problem, a team of researchers at Firat University and Adiyaman University—located in Elazig, Turkey, and Adiyaman, Turkey, respectively—has developed a concept design for a robot to drill holes and plant seedlings for up to 24 hours at a time.

Andrea Botta is an engineering professor at the Polytechnic University of Turin in Italy who has researched the use of agricultural robots. Botta, who did not contribute to this research, says that planting robots like this could fill an important gap in communities with smaller labor forces.

“Robots are very good at doing repetitive tasks like planting several trees [and] can also work for an extended period of time,” Botta says. “In a community with a significant lack of workers, complete automation is a fair approach.”

Tree-planting robots are not a new concept, and they come in a variety of shapes and sizes. In their work, the research team in Turkey surveyed existing tree-planting robots, including quadrupeds, machines on caterpillar tracks, and wheeled robots. These robots, designed by groups such as students at the University of Victoria in Canada and engineers at Chinese tech giant Huawei, ran on steam, electric batteries, or diesel. Several of the robots were even designed to carry more than 300 seedlings on their back at a time, cutting down on trips between a greenhouse and the planting site.

With these designs in mind, the research team in Turkey developed a 3D model of a robotic planter that had four wheels, a steel frame, and a back-mounted hydraulic drill. Using diesel power, this 136-kilogram robot is designed to drive 300 centimeters at a time before drilling a 50-cm hole for each seedling. In future iterations, the team plans to incorporate autonomous sensing.
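
In pseudocode terms, the cycle described above is simple. The sketch below captures the drive-drill-plant loop using the figures reported in the paper; the class and method names are hypothetical, not from the researchers' design.

```python
# Hypothetical sketch of the planter's drive-drill-plant cycle, based only
# on the figures reported above (300 cm travel per stop, 50 cm hole depth).
# Class and method names are illustrative, not from the paper.

DRIVE_DISTANCE_CM = 300   # distance between planting spots
HOLE_DEPTH_CM = 50        # drill depth per seedling

class RoboticPlanter:
    def __init__(self, seedlings_loaded: int):
        self.seedlings = seedlings_loaded

    def drive(self, distance_cm: float) -> None:
        print(f"driving {distance_cm} cm to next spot")

    def drill(self, depth_cm: float) -> None:
        print(f"drilling a {depth_cm} cm hole")

    def plant(self) -> None:
        self.seedlings -= 1
        print("seedling planted")

    def run(self) -> None:
        # Repeat the cycle until the onboard seedling stock is empty.
        while self.seedlings > 0:
            self.drive(DRIVE_DISTANCE_CM)
            self.drill(HOLE_DEPTH_CM)
            self.plant()

RoboticPlanter(seedlings_loaded=3).run()
```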

“As a future study, we plan to manufacture the robot we designed and develop autonomous motion algorithms,” the team writes in its paper. (The researchers declined to comment for this story.) “The rapid development of sensor technology in recent years and the acceleration of research on the fusion of multisensor data have paved the way for robots to gain environmental perception and autonomous movement capability.”

In particular, the team plans to mount environmental sensing units—including cameras and ultrasonic sensors—on a gimbal on the robot’s back. This sensor data will then feed into motion- and object-detection algorithms the team intends to develop.

However, adding more autonomy to these types of robots doesn’t necessarily mean they should be given free rein over tree planting, Botta says, especially in situations where they may be working alongside human workers.

“Human-robot collaboration is a very trendy topic and should be carefully designed depending on the case,” he says. “Introducing automation to a job may also introduce problems too, so it should be applied appropriately considering the local communities and scenario. For example—if a large workforce is available, the robot should be designed considering a strong synergy with the workers to ease their burden while avoiding harming the community.”

In future iterations of this design, Botta also hopes to see attention paid to diversity in the planting environment and planting type. The robot could, for example, gain suspension for better all-terrain driving, or solar panels for supplemental power that would let it operate where other fuel sources may not be readily available. A renewable power option could also help ensure that the robots remain carbon neutral while planting.

Considering how a robot could handle multiple types of plants would also be important, Botta says.

“It seems that most—if not all—of the robotics solutions create tree farms, but probably what we need is planting actual forests with a significant biodiversity,” he says.

The work was presented in May at the 14th International Conference on Mechanical and Intelligent Manufacturing Technologies in Cape Town, South Africa.



Specifying and solving Constraint-based Optimization Problems (COP) has become a mainstream technology for advanced motion control of mobile robots. COP programming still requires expert knowledge to translate a specific application context into the right configuration of the COP parameters (i.e., objective functions and constraints). The research contribution of this paper is a methodology to couple the context knowledge of application developers to the robot knowledge of control engineers, which, to our knowledge, has not yet been carried out. The former are offered a selected set of symbolic descriptions of the robots’ capabilities (their so-called “behavior semantics”) that are translated into control actions via “templates” in a “semantic map”; the latter contains the parameters that cover contextual dependencies in an application- and robot-vendor-independent way. The translation from semantics to control templates takes place in an “interaction layer” that contains 1) generic knowledge about robot motion capabilities (e.g., depending on the kinematic type of the robots), 2) spatial queries to extract relevant COP parameters from a semantic map (e.g., what is the impact of entering different types of “collision areas”), and 3) generic application knowledge (e.g., how the robots’ behavior is impacted by priorities, emergency, safety, and prudence). This particular design of, and interplay between, the application, interaction, and control layers provides a structured, conceptually simple approach to advance the complexity of mobile robot applications. Eventually, industry-wide cooperation between representatives of the application and control communities should result in an interaction layer with different standardized versions of semantic complexity.
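
To make the layering concrete, here is a minimal sketch of how an interaction layer might turn a symbolic behavior plus a semantic-map query into COP objective weights and constraints. The behavior names, map entries, and parameter values are all invented for illustration; none come from the paper.

```python
# Illustrative sketch of the three-layer idea described in the abstract:
# an application-level symbolic behavior is translated, via a semantic-map
# query, into COP parameters (objective weights, constraints) for the
# control layer. All names and values are invented for illustration.

SEMANTIC_MAP = {
    "collision_area:pedestrian": {"max_speed": 0.5, "clearance_m": 1.0},
    "collision_area:forklift": {"max_speed": 1.0, "clearance_m": 2.0},
}

BEHAVIOR_TEMPLATES = {
    # The "behavior semantics" offered to the application developer.
    "prudent_transit": {"speed_weight": 0.2, "clearance_weight": 1.0},
    "priority_transit": {"speed_weight": 1.0, "clearance_weight": 0.3},
}

def interaction_layer(behavior: str, area: str) -> dict:
    """Translate a symbolic behavior + map context into COP parameters."""
    template = BEHAVIOR_TEMPLATES[behavior]
    context = SEMANTIC_MAP[area]
    return {
        "objective_weights": template,
        "constraints": {
            "speed <=": context["max_speed"],
            "clearance >=": context["clearance_m"],
        },
    }

print(interaction_layer("prudent_transit", "collision_area:pedestrian"))
```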

Understanding novelty and improvisation in music requires gathering insight from a variety of disciplines. One fruitful path for synthesizing these insights is via modeling. As such, my aim in this paper is to start building a bridge between traditional cognitive models and contemporary embodied and ecological approaches to cognitive science. To achieve this task, I offer a perspective on a model that would combine elements of ecological psychology (especially affordances) and the Learning Intelligent Decision Agent (LIDA) cognitive architecture. Jeff Pressing’s cognitive model of musical improvisation will also be a central link between these elements. While some overlap between these three areas already exists, there are several points of tension between them, notably concerning the nature of perception and the function of artificial general intelligence modeling. I thus aim to alleviate the most worrisome concerns here, introduce several future research questions, and conclude with several points on how my account is part of a general theory, rather than merely a redescription of existent work.

Long-horizon task planning is essential for the development of intelligent assistive and service robots. In this work, we investigate the applicability of a smaller class of large language models (LLMs), specifically GPT-2, in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially. Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans, thereby learning to reason over long-horizon tasks, as encountered in the ALFRED benchmark. We compare our approach with classical planning and baseline methods to examine the applicability and generalizability of LLM-based planners. Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
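
A rough sketch of this input/output framing follows, using the Hugging Face transformers API with a stock GPT-2 checkpoint as a stand-in for the authors' fine-tuned model. The prompt format and subgoal syntax here are invented, and an unfine-tuned GPT-2 will not produce usable plans; the point is only to show the grounding of the LLM's input on a serialized scene graph.

```python
# Sketch of grounding an LLM planner on a scene graph: the graph is
# serialized into the prompt and the model is asked to emit subgoal
# specifications. Prompt format and subgoal grammar are invented; the
# paper's fine-tuning data will differ.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # stand-in for a fine-tuned checkpoint

scene_graph = "(kitchen (counter (mug dirty)) (sink) (coffee_machine))"
request = "Bring me a clean mug of coffee."
prompt = f"scene: {scene_graph}\nrequest: {request}\nsubgoals:"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```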

This paper reports the implementation and results of a simulation-based analysis of the impact of cloud/edge-enabled cooperative perception on the performance of automated driving in unsignalized roundabouts. This is achieved by comparing the performance of automated driving assisted by cooperative perception to that of a baseline system, in which the automated vehicle relies only on its onboard sensing and perception for motion planning and control. The paper first describes the implemented simulation model, which integrates the SUMO road traffic generator and the CARLA simulator, covering both the baseline and the cooperative perception-assisted automated driving systems. We then define a set of relevant key performance indicators for traffic efficiency, safety, and ride comfort, describe the simulation scenarios used to collect data for our analysis, and present the results together with a discussion of the insights learned from them.



Skydio, maker of the most autonomous consumer drone there ever was, has announced that it is getting out of the consumer drone space completely, as of this past week. The company will be focusing on “over 1,500 enterprise and public sector customers” that are, to be fair, doing many useful and important things with Skydio drones rather than just shooting videos of themselves like the rest of us. Sigh.

By a lot of metrics, the Skydio 2 is (was?) the most capable drone that it was possible for a consumer to get. Lots of drones advertise obstacle avoidance, but in my experience, none of them came anywhere close to the way that Skydio’s drones are able to effortlessly slide around even complex obstacles while reliably tracking you at speed. Being able to (almost) completely trust the drone to fly itself and film you while you ignored it was a magical experience that I don’t think any other consumer drone can offer. It’s rare that robots can operate truly autonomously in unstructured environments, and the Skydio 2 may have been the first robot to bring that to the consumer space. This capability blew my mind even as a very early prototype in 2016, and it still does.

But Skydio does not exist solely to blow my mind, which is unfortunate for me but probably healthy for them. Instead, the company is focusing more on the public sector, the military, and business customers, which have been using Skydio drones in all kinds of public safety and inspection applications. In addition to its technology, Skydio has an edge in that it’s one of just a handful of domestic drone producers approved by the DoD.

The impact we’re having with our enterprise and public sector customers has become so compelling that it demands nothing less than our full focus and attention. As a result, I have made the very difficult decision to sunset our consumer business in order to put everything we’ve got into serving our enterprise and public sector customers. —Adam Bry, Skydio CEO

So as of now, you can no longer buy a consumer Skydio 2 from Skydio.

The less terrible news is that Skydio has promised to continue to support existing consumer drone customers:

We stand by all warranty terms, Skydio Care, and will continue vehicle repairs. Additionally, we will retain inventory of accessories for as long as we can to support the need for replacement batteries, propellers, charging cables, etc.

And since the Skydio 2+ is still being produced for sale for enterprise customers, it seems like those parts and accessories may be available longer than they would be otherwise.

If you don’t have a Skydio 2 consumer drone and you desperately want one, there aren’t a lot of good options. Last time we checked, the Skydio 2+ enterprise kit was US $5,000. Most of that value is in software and support, since the consumer edition of the Skydio 2+ with similar accessories was closer to US $2,400. That leaves buying a Skydio 2 used, or at least, buying one from a source other than Skydio—at the moment, there are a couple of Skydio 2 drones on eBay, one of which is being advertised as new.

Lastly, there is some very tenuous suggestion that Skydio may not be done with the consumer drone space forever. In an FAQ on the company’s website about the change in strategy, Skydio does not explicitly rule out a future consumer drone, saying only that “we are not able to share any updates about our future product roadmap.” So I’m just going to cross my fingers and assume that a Skydio 3 may still one day be on the way.



What started out as an attempt to create soft, simulated organs for medical devices and surgical robots has instead given us a touch-sensitive, shape-morphing 3D display. This multifunctional device, developed by researchers at the University of Colorado Boulder and the Max Planck Institute for Intelligent Systems, is about as big as a board game and can create pop-up patterns, manipulate objects across its surface, and shake a beaker of liquid.

“The whole concept of creating the 3D display was…to try to replicate the human body, not biologically, but from a sense and response standpoint,” says Mark Rentschler, a roboticist at CU Boulder. That meant designing a system with soft actuators and sensors to replicate muscles and nerves within the body, and a support structure to represent the skeleton.

“This type of sensing could create some very interesting surgical simulations for either training medical students or developing medical devices in robotics.”
—Mark Rentschler, University of Colorado Boulder

The result is a display surface comprising a 10-by-10 grid of individual cellular units, with high-speed actuation, sensing, and control. Each single cell is about 6 centimeters by 6 centimeters, and 1.4 cm high, packed with soft actuators and sensors, and supporting electronics. The system is connected to a small PC for computation. A paper about this work was published in Nature Communications in July.

The soft “muscles” of the device come from the earlier work of graduate student Ellen Rumley, who designed the Hydraulically Amplified Self-healing Electrostatic (HASEL) actuators. “These actuators use simple polymer sheets to hold oil inside of them,” Rentschler says. “By passing a current through the components, you can get the actuators to zip close.”

Each cell of the touch-sensitive surface contains a stack of HASEL actuators. Rentschler’s team used a modified folded design of the actuator. The compression of the individual oil chambers causes the entire stack to increase or decrease in size. Apart from being soft, the HASELs have a fast response rate (50 hertz) and morph well enough to move solids and liquids across the entire display surface. The surface is sensitive to about 5 grams of mass and to deformations as small as 0.1 millimeter. In other words, a very small amount of force on the surface can be detected, says Rentschler.

To create sensing and response, both to external stimuli, and to provide closed-loop actuator control, the researchers used soft magnetic sensors made of silicone. Rentschler says the group placed the sensors directly on the surface layer, giving the system the ability to detect both surface deformation and external stimuli. This also gave the display a small footprint, and allowed it to perform various sequences of actuations, both with a user, and with other objects.
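
A minimal sketch of what such a closed control loop might look like appears below. It assumes a simple proportional controller and a first-order actuator response, both illustrative assumptions rather than details from the paper; only the 10-by-10 grid and the 50-hertz rate come from the description above.

```python
# Illustrative sketch (not the authors' code) of closed-loop height control
# for a 10-by-10 grid of cells: each cell compares the height reported by
# its sensor against a target surface and adjusts its actuator drive,
# ticking at the 50 Hz rate reported above.
import numpy as np

GRID = (10, 10)
DT = 1.0 / 50.0          # 50 Hz control loop
GAIN = 5.0               # proportional gain (arbitrary illustrative value)

target_mm = np.zeros(GRID)
target_mm[4:6, 4:6] = 8.0      # pop up a small square in the center

height_mm = np.zeros(GRID)     # heights inferred from the magnetic sensors

for _ in range(200):           # simulate 4 seconds of control
    error = target_mm - height_mm
    drive = GAIN * error       # per-cell actuator command
    # First-order stand-in for the actuator stack's response to the command.
    height_mm += drive * DT

print(f"max tracking error: {np.abs(target_mm - height_mm).max():.3f} mm")
```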

Shape-shifting display for 3D designs [ youtu.be ]

While shape-morphing displays aren’t exactly new, this system stands out for being smaller, faster, quieter, and softer. Its computational and power requirements are low. Plus, it is a continuous surface, not discrete points, Rentschler says, “And that allows us to do a couple of unique things with it.” He is optimistic about making the system even more compact in the future. “It’s really just reducing the actuator size, and as the electronics continue to evolve, having those shrink down as well.”

The device’s versatility also opens up application possibilities, from consumer electronics interfaces to various manufacturing or commercial uses, such as processes that involve handling toxic or delicate materials. He also sees possible applications in the gaming industry, providing tactile feedback in AR/VR environments.

Then, of course, there are medical applications. “This type of sensing could create some very interesting surgical simulations for either training medical students or developing medical devices in robotics,” Rentschler says. Shape-morphing devices could, for instance, be tested in a simulated body rather than in humans or animal models, as a precursor to getting clearance for use in humans.

As for creating simulated organs, where the project began, “we’ve got a few different ideas for that,” he says. He says his lab is now looking at creating a simulated portion of the gastrointestinal tract, maybe the colon, to test surgical robotics.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Humans are social creatures and learn from each other, even from a young age. Infants keenly observe their parents, siblings, or caregivers. They watch, imitate and replay what they see to learn skills and behaviors.

The way babies learn and explore their surroundings inspired researchers at Carnegie Mellon University and Meta AI to develop a new way to teach robots how to simultaneously learn multiple skills and leverage them to tackle unseen, everyday tasks. The researchers set out to develop a robotic AI agent with manipulation abilities equivalent to a 3-year-old child.

[ CMU ]

You’ll never be able to justify using a disposable coffee cup again thanks to a robot that does all the dishes for you.

[ Dino Robotics ]

While filming our new robot, this lovely curious cat became interested in the robot and drone. After a while, it started approaching the robot, and the following interaction ensued.

[ Zarrouk Lab ]

Robots are 100 percent more capable in slow motion with music.

[ MIT ]

Legged robots are heading to Mars!

[ JPL ]

I’m not sure how practical this is, but it’s sure fascinating to watch.

[ Somatic ]

Watch until the end, which is mildly NSFW.

Fun experiment aiming to gather data [for modeling] autonomous vehicles’ motion on ice. This is related to our vehicle dynamics work led by Dominic Baril, a Ph.D. student in our lab! Stay tuned for... paper preprint!

[ Norlab ]

Nauticus Robotics is working on something new.

[ Nauticus Robotics ]

The UBTECH humanoid robot Walker X can be used for smart SPS component sorting and intelligent aging testing in automated factory settings, which is another innovative step forward in the exploration of the commercial applications of humanoid robots.

With floors like those, why the heck wouldn’t you be using wheels, though?

[ UBTECH ]

BASF collaborates with ANYbotics to evaluate the potential of automated condition monitoring and digital documentation of operational data at their facilities. ANYmal X demonstrates its capabilities for extending robotic inspection into Ex-environments (Zone 1) that haven’t been accessible for this technology before.

[ ANYbotics ]

What meal is this robot kitting? My guess is some little tortillas, a single cherry tomato, guacamole, and walnuts. Grim.

[ Covariant ]

We present a mobile robot that provides an older adult with a handlebar located anywhere in space—“Handle Anywhere.” The robot consists of an omnidirectional mobile base attached to a repositionable handlebar.

[ MIT ]

The KUKA Innovation Award has been held annually since 2014 and is addressed to developers, graduates and research teams from universities or companies. For this year’s award, the applicants were asked to use open interfaces in our newly introduced robot operating system iiQKA and to add their own hardware and software components.

The Team Fashion & Robotics from the University of Art and Design Linz worked to create a way for small and medium-sized textile companies and designers to increase their production by setting up microfactories with collaborative robot systems, while simultaneously enabling more efficient sorting and finishing processes on an industrial scale.

[ Kuka ]

Dive into a world of cutting-edge innovation and robotic cuteness, as we take a peek into Misty’s unique personality and human expressions. From surprise to excitement, from curiosity to joy, from amusement to sadness–Misty is a canvas to create your very own fun and engaging social interactions!

[ Misty Robotics ]

We are thrilled to launch the ICCV 2023 SLAM Challenge. Navigate through complex and challenging environments with our TartanAir & SubT-MRS datasets, pursuing the robustness of your SLAM algorithms. Let’s redefine sim-to-real transfer together.

[ AirLab ]

Check out how the ZenRobotics Fast Picker was retrofitted into a Grundon MRF in England. The Fast Picker has optimised their waste sorting process to pick higher-value products (HDPE & PET Plastic) and increase efficiency.

[ ZenRobotics ]

Ants are highly capable in many behaviors relevant to robotics. Our recent work has focused on bridging the gap to understanding the neural circuits that underlie capacities such as visual orientation, path integration, and the combination of multiple cues. A new direction for this research is to investigate the manipulation capabilities of ants, which allow them to handle a wide diversity of arbitrary, unknown objects with a skill that goes well beyond current robotics.

[ Festo ]



Introduction: The paper considers an improved design of a wheeled vibration-driven robot equipped with an inertial exciter (unbalanced rotor) and an enhanced pantograph-type suspension. The study focuses on mathematical modeling, computer simulation, and experimental testing of the locomotion of the novel robot prototype. Its primary scientific novelty lies in substantiating how the enhanced pantograph-type suspension can improve the robot’s kinematic characteristics, particularly its average translational speed.

Methods: The simplified dynamic diagram of the robot’s oscillatory system is developed, and the mathematical model describing its locomotion conditions is derived using the Euler-Lagrange differential equations. The numerical modeling is carried out in the Mathematica software with the help of the Runge-Kutta methods. Computer simulation of the robot motion is performed in the SolidWorks Motion software using the variable step integration method (Gear’s method). The experimental investigations of the robot prototype operating conditions are conducted at the Vibroengineering Laboratory of Lviv Polytechnic National University using the WitMotion accelerometers and software. The experimental data is processed in the MathCad software.
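
As a rough stand-in for this pipeline, the sketch below integrates a heavily simplified single-degree-of-freedom unbalanced-rotor model with SciPy's Runge-Kutta solver. The parameters and the crude one-way-drive approximation are invented for illustration, not taken from the paper's Euler-Lagrange model.

```python
# Minimal stand-in (not the authors' model) for the kind of simulation the
# Methods describe: a single-DOF body excited by an unbalanced rotor,
# integrated with a Runge-Kutta scheme. All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

M, m, r = 2.0, 0.1, 0.02        # body mass, unbalanced mass [kg], radius [m]
omega = 2 * np.pi * 1500 / 60   # forced frequency: 1,500 rpm in rad/s
c = 5.0                         # lumped viscous resistance [N*s/m]

def rhs(t, y):
    x, v = y
    f_exc = m * r * omega**2 * np.cos(omega * t)   # rotor excitation force
    return [v, (f_exc - c * v) / M]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], method="RK45", max_step=1e-3)

# Crude proxy for the one-way (clutch/anisotropic-friction) drive that
# rectifies the oscillation into net locomotion: keep forward velocity only.
v_forward = np.maximum(sol.y[1], 0.0)
mean_speed_mm_s = np.trapz(v_forward, sol.t) / sol.t[-1] * 1000
print(f"mean forward speed ~ {mean_speed_mm_s:.1f} mm/s")
```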

Results and discussion: The obtained results show the time dependencies of the robot body’s basic kinematic parameters (accelerations, velocities, displacements) under different operating conditions, particularly different angular frequencies of the unbalanced rotor. The numerical modeling, computer simulation, and experimental investigations yield broadly similar results: the smallest horizontal speed, about 1 mm/s, is observed at a supplied voltage of 3.47 V and a forced frequency of 500 rpm; the largest locomotion speed, approximately 40 mm/s, occurs at a supplied voltage of 10 V and a forced frequency of 1,500 rpm. The paper may interest designers and researchers of similar vibration-driven robotic systems based on wheeled chassis, and the results may be used when implementing experimental and industrial prototypes of vibration-driven robots for various purposes, particularly for inspecting and cleaning pipelines. Further investigation should focus on analyzing the relations between the power consumption, average translational speed, and working efficiency of the considered robot under various operating conditions.

Introduction: The challenge of navigating a mobile robot in dynamic environments has attracted significant attention in recent years. Despite the available techniques, there is still a need for efficient and reliable approaches that can address the challenges of real-time, near-optimal navigation and collision avoidance.

Methods: This paper proposes a novel Log-concave Model Predictive Controller (MPC) algorithm that addresses these challenges by utilizing a unique formulation of cost functions and dynamic constraints, as well as a convergence criterion based on Lyapunov stability theory. The proposed approach is mapped onto a novel recurrent neural network (RNN) structure and compared with the CVXOPT optimization tool. The key contribution of this study is the combination of neural networks with a model predictive controller to solve optimal control problems locally near the robot, which offers several advantages, including computational efficiency and the ability to handle nonlinear and complex systems.
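
For a sense of the optimization at the core of such a controller, here is a minimal one-step MPC sketch for a single-integrator robot, solved with CVXOPT (the baseline tool the abstract compares against). The dynamics, horizon, weights, and bounds are invented for illustration and are not the paper's formulation.

```python
# Minimal one-step MPC sketch for a single-integrator robot, posed as a
# QP and solved with CVXOPT. Dynamics, weights, and bounds are illustrative.
import numpy as np
from cvxopt import matrix, solvers

dt, w_u = 0.1, 0.1              # step length and control-effort weight
x = np.array([0.0, 0.0])        # current position
goal = np.array([1.0, 0.5])

# Cost w_u*||u||^2 + ||x + dt*u - goal||^2 is a QP in u:
# minimize (1/2) u' P u + q' u with the terms below.
P = matrix(2.0 * (w_u + dt**2) * np.eye(2))
q = matrix((2.0 * dt * (x - goal)).reshape(2, 1))
# Velocity bounds |u_i| <= 1 m/s, written as G u <= h.
G = matrix(np.vstack([np.eye(2), -np.eye(2)]))
h = matrix(np.ones(4).reshape(4, 1))

solvers.options["show_progress"] = False
u = np.array(solvers.qp(P, q, G, h)["x"]).flatten()
print("commanded velocity:", u)  # points toward the goal within bounds
```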

Results: The major findings of this study include the successful implementation and evaluation of the proposed algorithm, which outperforms other methods such as RRT, A-Star, and LQ-MPC in terms of reliability and speed. This approach has the potential to facilitate real-time navigation of mobile robots in dynamic environments and ensure a feasible solution for the proposed constrained-optimization problem.

Small insects with flapping wings, such as bees and flies, have flexible wings with veins, and their compliant motion enhances flight efficiency and robustness. This study investigated the effects of integrating wing veins into soft wings for micro-flapping aerial vehicles. Prototypes of soft wings, featuring various wing areas and vein patterns in both the wing-chord and wing-span directions, were fabricated and evaluated to determine the force generated through flapping. The results indicated that the force is not solely dependent upon the wing area and is influenced by the wing vein pattern. Wings incorporating wing-chord veins generally produced more force than those with wing-span veins. In contrast, at a specific wing area, wings with crossed wing veins, comprising both wing-span and wing-chord veins, produced more force. Although wing-chord veins tended to exert more influence on the force generated than wing-span veins, the findings suggest that a combination of wing-span and wing-chord veins may be requisite, depending upon the wing area.

Human-robot teams collaborating to achieve tasks under various conditions, especially in unstructured, dynamic environments will require robots to adapt autonomously to a human teammate’s state. An important element of such adaptation is the robot’s ability to infer the human teammate’s tasks. Environmentally embedded sensors (e.g., motion capture and cameras) are infeasible in such environments for task recognition, but wearable sensors are a viable task recognition alternative. Human-robot teams will perform a wide variety of composite and atomic tasks, involving multiple activity components (i.e., gross motor, fine-grained motor, tactile, visual, cognitive, speech and auditory) that may occur concurrently. A robot’s ability to recognize the human’s composite, concurrent tasks is a key requirement for realizing successful teaming. Over a hundred task recognition algorithms across multiple activity components are evaluated based on six criteria: sensitivity, suitability, generalizability, composite factor, concurrency and anomaly awareness. The majority of the reviewed task recognition algorithms are not viable for human-robot teams in unstructured, dynamic environments, as they only detect tasks from a subset of activity components, incorporate non-wearable sensors, and rarely detect composite, concurrent tasks across multiple activity components.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

NASA’s Curiosity rover recently made its most challenging climb on Mars. Curiosity faced a steep, slippery slope on its journey up Mount Sharp, so rover drivers had to come up with a creative detour.

[ JPL ]

Wheel knees for ANYmal! We should learn more about this at IROS 2023 this fall.

[ RSL ]

Hard vision and manipulation problem? Solve it by making it less hard!

[ Covariant ]

Oh good, drones are learning to open doors now.

[ ASL ]

If you look closely, you’ll see that Sanctuary’s robot has fingernails, a detail that I always appreciate on robotic hands.

[ Sanctuary AI ]

This summer, the University of Mary Washington (UMW) in Fredericksburg, Va. became the official home for Virginia’s SMART Community STEM Camp. The camp hosted over 30 local high school students for a full week to learn about cybersecurity, e-sports, [and] the drone industry—as well as [participating in] a hands-on flying experience.

[ Skydio ]

O_o

[ Pollen Robotics ]

Agility CEO and Co-Founder Damion Shelton talks with Pras Velagapudi, VP of Innovation and Chief Architect, about the best methods for robot control, comparing reinforcement learning to what we can now do using LLMs.

[ Agility Robotics ]

In this episode of The Robot Brains Podcast, Pieter speaks with John Schulman, co-founder of OpenAI.

[ Robot Brains ]

This week, Geordie Rose (CEO) and Suzanne Gildert (CTO) continue the discussion about their co-authored position paper, now that it has been published. Titled “Building and Testing a General Intelligence Embodied in a Humanoid Robot,” the paper touches on metrics of intelligence, robotics, machine learning, and more. They round off by answering more audience questions.

[ Sanctuary AI ]



This study presents a novel method that combines a computational fluid-structure interaction model with an interpretable deep-learning model to explore the fundamental mechanisms of seal whisker sensing. By establishing connections between crucial signal patterns, flow characteristics, and attributes of upstream obstacles, the method has the potential to enhance our understanding of the intricate sensing mechanisms. The effectiveness of the method is demonstrated through its accurate prediction of the location and orientation of a circular plate placed in front of seal whisker arrays. The model also generates temporal and spatial importance values of the signals, enabling the identification of significant temporal-spatial signal patterns crucial for the network’s predictions. These signal patterns are further correlated with flow structures, allowing for the identification of important flow features relevant for accurate prediction. The study provides insights into seal whiskers’ perception of complex underwater environments, inspiring advancements in underwater sensing technologies.
