Feed aggregator

Human-robot teams collaborating to achieve tasks under various conditions, especially in unstructured, dynamic environments, will require robots to adapt autonomously to a human teammate’s state. An important element of such adaptation is the robot’s ability to infer the human teammate’s tasks. Environmentally embedded sensors (e.g., motion capture and cameras) are infeasible for task recognition in such environments, but wearable sensors are a viable alternative. Human-robot teams will perform a wide variety of composite and atomic tasks involving multiple activity components (i.e., gross motor, fine-grained motor, tactile, visual, cognitive, speech, and auditory) that may occur concurrently. A robot’s ability to recognize the human’s composite, concurrent tasks is a key requirement for realizing successful teaming. Over a hundred task recognition algorithms across multiple activity components are evaluated based on six criteria: sensitivity, suitability, generalizability, composite factor, concurrency, and anomaly awareness. The majority of the reviewed task recognition algorithms are not viable for human-robot teams in unstructured, dynamic environments, as they only detect tasks from a subset of activity components, incorporate non-wearable sensors, and rarely detect composite, concurrent tasks across multiple activity components.
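
Screening a large pool of algorithms against six named criteria is easy to picture as a small structured rubric. Below is a minimal Python sketch of that idea; the record layout, example entries, and the all-six-criteria viability rule are our own illustration, not the survey's actual scoring protocol.

    # Hypothetical screening rubric for the six survey criteria; the entries and
    # the "all six must hold" viability rule are illustrative, not the survey's.
    from dataclasses import dataclass

    @dataclass
    class AlgorithmReview:
        name: str
        sensitivity: bool       # responds to subtle activity differences
        suitability: bool       # relies on wearable sensors only
        generalizability: bool  # transfers across users and environments
        composite: bool         # handles composite (multi-part) tasks
        concurrency: bool       # handles concurrent tasks
        anomaly_aware: bool     # flags unknown or anomalous activity

        def viable_for_teams(self) -> bool:
            return all([self.sensitivity, self.suitability, self.generalizability,
                        self.composite, self.concurrency, self.anomaly_aware])

    reviews = [
        AlgorithmReview("imu-cnn (made up)", True, True, False, False, False, False),
        AlgorithmReview("wearable-hmm (made up)", True, True, True, False, True, False),
    ]
    # Per the survey's conclusion, few if any entries pass all six criteria.
    print([r.name for r in reviews if r.viable_for_teams()])  # -> []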



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

NASA’s Curiosity rover recently made its most challenging climb on Mars. Curiosity faced a steep, slippery slope on its journey up Mount Sharp, so rover drivers had to come up with a creative detour.

[ JPL ]

Wheel knees for ANYmal! We should learn more about this at IROS 2023 this fall.

[ RSL ]

Hard vision and manipulation problem? Solve it by making it less hard!

[ Covariant ]

Oh good, drones are learning to open doors now.

[ ASL ]

If you look closely, you’ll see that Sanctuary’s robot has fingernails, a detail that I always appreciate on robotic hands.

[ Sanctuary AI ]

This summer, the University of Mary Washington (UMW) in Fredericksburg, Va., became the official home for Virginia’s SMART Community STEM Camp. The camp hosted over 30 local high school students for a full week to learn about cybersecurity, e-sports, [and] the drone industry—as well as [participating in] a hands-on flying experience.

[ Skydio ]

O_o

[ Pollen Robotics ]

Agility CEO and Co-Founder Damion Shelton talks with Pras Velagapudi, VP of Innovation and Chief Architect, about the best methods for robot control, comparing reinforcement learning to what we can now do using LLMs.

[ Agility Robotics ]

In this episode of The Robot Brains Podcast, Pieter speaks with John Schulman, co-founder of OpenAI.

[ Robot Brains ]

This week, Geordie Rose (CEO) and Suzanne Gildert (CTO) continue the discussion about their co-authored position paper, now that it has been published. Titled “Building and Testing a General Intelligence Embodied in a Humanoid Robot,” the paper touches on metrics of intelligence, robotics, machine learning, and more. They round off by answering more audience questions.

[ Sanctuary AI ]



This study presents a novel method that combines a computational fluid-structure interaction model with an interpretable deep-learning model to explore the fundamental mechanisms of seal whisker sensing. By establishing connections between crucial signal patterns, flow characteristics, and attributes of upstream obstacles, the method has the potential to enhance our understanding of the intricate sensing mechanisms. The effectiveness of the method is demonstrated through its accurate prediction of the location and orientation of a circular plate placed in front of seal whisker arrays. The model also generates temporal and spatial importance values of the signals, enabling the identification of significant temporal-spatial signal patterns crucial for the network’s predictions. These signal patterns are further correlated with flow structures, allowing for the identification of important flow features relevant for accurate prediction. The study provides insights into seal whiskers’ perception of complex underwater environments, inspiring advancements in underwater sensing technologies.
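
The abstract describes temporal and spatial importance values without detailing how they are computed. Purely as a generic illustration (not necessarily the authors' interpretable-model machinery), a perturbation-based sensitivity map over a whiskers-by-time signal array captures the flavor: perturb each sample and see how much the prediction moves. The placeholder model and array sizes below are assumptions.

    # Perturbation-based temporal-spatial importance map: a generic sketch, not
    # necessarily the interpretable deep-learning method used in the paper.
    import numpy as np

    def model(signals: np.ndarray) -> float:
        # Placeholder predictor mapping (whiskers, time) signals to one output;
        # in the paper this would be the trained deep network.
        return float(np.tanh(signals.sum()))

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(16, 200))    # assumed: 16 whiskers x 200 time steps
    base, eps = model(signals), 1e-3

    importance = np.zeros_like(signals)
    for w in range(signals.shape[0]):
        for t in range(signals.shape[1]):
            perturbed = signals.copy()
            perturbed[w, t] += eps          # nudge one whisker at one instant
            importance[w, t] = abs(model(perturbed) - base) / eps

    top = importance.ravel().argsort()[-5:][::-1]
    print([(int(i) // 200, int(i) % 200) for i in top])  # (whisker, time) hotspots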

Origami folding is an ancient art that holds promise for creating compliant and adaptable mechanisms, but it has yet to be extensively studied for granular environments. At the same time, biological systems exploit anisotropic body forces for locomotion, such as the frictional anisotropy of a snake’s skin. In this work, we explore how foldable origami feet can be used to passively induce anisotropic force response in granular media, through varying their resistive plane. We present a reciprocating burrower which transfers pure symmetric linear motion into directed burrowing motion using a pair of deployable origami feet on either end. We also present an application of the reduced-order model granular Resistive Force Theory to inform the design of deformable structures, and compare results with those from experiments and Discrete Element Method simulations. Through a single actuator, and without the use of advanced controllers or sensors, these origami feet enable burrowing locomotion. In this paper, we achieve burrowing translation ratios—net forward motion to overall linear actuation—of over 46% by changing foot design without altering overall foot size. Specifically, anisotropic folding foot parameters should be tuned for optimal performance given a linear actuator’s stroke length.
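
The translation ratio quoted above is simply net forward displacement divided by total linear actuation. A quick worked example (the stroke length and cycle count are invented; only the 46 percent figure comes from the paper):

    # Burrowing translation ratio: net forward motion / overall linear actuation.
    # Stroke length and cycle count are invented for illustration.
    stroke_mm = 50.0                       # assumed actuator stroke per half-cycle
    cycles = 10                            # assumed reciprocation cycles
    actuation_mm = 2 * stroke_mm * cycles  # extend + retract each cycle

    ratio = 0.46                           # reported: over 46%
    net_forward_mm = ratio * actuation_mm
    print(f"{net_forward_mm:.0f} mm net advance per {actuation_mm:.0f} mm of actuation")
    # -> 460 mm net advance per 1000 mm of actuation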



With the aid of crystals known as perovskites, solar cells are increasingly breaking records in how well they convert sunlight to electricity. Now a new automated system could make those records fall even faster. North Carolina State University’s RoboMapper can analyze how well perovskites might perform in solar cells, using roughly one-tenth to one-fiftieth the time, cost, and energy of either manual labor or previous robotic platforms, its inventors say.

The most common solar cells use silicon to convert light to electricity. These devices are rapidly approaching their theoretical conversion efficiency limit of 29.4 percent; modern commercial silicon solar cells now reach efficiencies of more than 24 percent, and the best lab cell has an efficiency of 26.8 percent.

One strategy to boost a solar cell’s efficiency is by stacking two different light-absorbing materials together into one device. This tandem method increases the spectrum of sunlight the solar cell can harvest. A common approach with tandem cells is to use a top cell made of perovskites to absorb higher-energy visible light and a bottom cell made of silicon for lower-energy infrared rays. Last year scientists unveiled the first perovskite-silicon tandem solar cells to pass the 30 percent efficiency threshold, and last month another group reported the same milestone.

Conventional materials research has scientists prepare a sample on a chip and then go through multiple steps to examine it using different instruments. Existing automation efforts “tend to emulate human workflows—we tend to process materials one parameter at a time,” says Aram Amassian, a materials scientist at North Carolina State University, in Raleigh.

RoboMapper’s greatest reduction in environmental impact came from improved energy efficiency during testing.

However, modern genetics and pharmaceutical analysis often achieves high throughput by placing dozens of samples on each plate and examining them all at once. RoboMapper also follows this strategy, using printing techniques to miniaturize the material samples.

“We’ve benefited a lot from hardware interoperability with biology and chemistry, such as in liquid handling,” Amassian says. However, for RoboMapper, Amassian and his team had to develop new protocols for handling perovskite materials and for characterization experiments different from those found in chemistry automation. “One particular development we had to make is to make sure that characterization instruments can handle the high density of materials on a chip with automation. This required a little bit of engineering on both the hardware and software side.”

One key to saving time, energy, material, and money was to shrink the sample size by a factor of 1,000. “The print size is on the order of 50 to 150 [micrometers], while most other tools create samples on the order of centimeters,” Amassian says. “Typically, we print picoliter to nanoliter volumes while other platforms print or coat microliters.”
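
Those length and volume scales are mutually consistent, which is easy to verify: a printed dot on the order of 100 micrometers holds well under a nanoliter, while a centimeter-scale coated film consumes microliters. The hemispherical-droplet geometry in this sanity check is our assumption, not the paper's:

    # Sanity check on sample miniaturization: print-size vs. coated-film volumes.
    # The hemispherical-droplet geometry is assumed for illustration.
    import math

    def droplet_nl(diameter_um: float) -> float:
        r_cm = diameter_um * 1e-4 / 2                 # micrometers -> centimeters
        return (2 / 3) * math.pi * r_cm ** 3 * 1e6    # cm^3 -> nanoliters

    film_ul = 1.0 * 1.0 * (10 * 1e-4) * 1e3           # 1 cm^2 film, 10 um thick -> uL

    print(f"100 um printed dot: {droplet_nl(100):.2f} nL")  # ~0.26 nL, sub-nanoliter
    print(f"1 cm^2 coated film: {film_ul:.1f} uL")          # ~1 uL, thousands of times more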

Perovskite Properties for Pennies

In the first tests of RoboMapper, the scientists analyzed 150 different perovskite compositions. In all, RoboMapper was 12 percent the cost, nine times as fast, and 18 times as energy efficient as other robotic platforms. And it was 2 percent the cost, 14 times as fast, and 26 times as energy efficient as manual labor.

“We set out to build a robot that can generate large material libraries so that we can build datasets for training AI models in the future,” Amassian says. Such an AI could then predict which perovskite structures will perform best.

The researchers focused on perovskites’ stability, which is a major challenge when it comes to tandem cells. Perovskites tend to degrade when exposed to light, losing the properties that made them desirable in the first place, Amassian explains.

The scientists analyzed perovskite structure, electronic properties, and stability in response to intense light using optical microscopy, microphotoluminescence spectroscopy mapping, and synchrotron-based wide-angle X-ray scattering mapping. This experimental data was then used to develop computational models that identified a specific composition that the researchers predicted would have the best combination of attributes.

“These models are now available for others to use,” Amassian says. He notes they are now in talks with leading tandem solar cell research groups.

Unexpectedly, the scientists found that RoboMapper’s greatest reduction in environmental impact came from improved energy efficiency during testing.

“We and others did not realize this, because electricity used by instruments in the lab is unseen, whereas materials and supplies are tangible,” Amassian says. “RoboMapper was designed in part to address this insidious problem by placing dozens of materials in the same measurement tools and significantly reducing the amount of time it needs to be powered on to collect data. We showed that tenfold reduction in carbon footprint and other negative environmental impacts can be achieved.”

In the future, “we will continue to search for newer and better perovskites,” Amassian says. “We’re also actively looking at organic solar-cell materials to find compositions that are stable for solar-energy applications. The ability to test dozens of compositions under intense simulated sunlight helps save tremendous time and energy.”

The scientists detailed their findings online 25 July in the journal Matter.



When Marc Raibert founded Boston Dynamics in 1992, he wasn’t even sure it was going to be a robotics company—he thought it might become a modeling and simulation company instead. Now, of course, Boston Dynamics is the authority in legged robots, with its Atlas biped and Spot quadruped. But as the company focuses more on commercializing its technology, Raibert has become more interested in pursuing the long-term vision of what robotics can be.

To that end, Raibert founded the Boston Dynamics AI Institute in August of 2022. Funded by Hyundai (the company also acquired Boston Dynamics in 2020), the Institute’s first few projects will focus on making robots useful outside the lab by teaching them to better understand the world around them.

Marc Raibert 

Raibert was a professor at Carnegie Mellon and MIT before founding Boston Dynamics in 1992. He now leads the Boston Dynamics AI Institute.

At the 2023 IEEE International Conference on Robotics and Automation (ICRA) in London this past May, Raibert gave a keynote talk that discussed some of his specific goals, with an emphasis on developing practical, helpful capabilities in robots. For example, Raibert hopes to teach robots to watch humans perform tasks, understand what they’re seeing, and then do it themselves—or know when they don’t understand something, and how to ask questions to fill in those gaps. Another of Raibert’s goals is to teach robots to inspect equipment to figure out whether something is working—and if it’s not, to determine what’s wrong with it and make repairs. Raibert showed concept art at ICRA that included robots working in domestic environments such as kitchens, living rooms, and laundry rooms as well as industrial settings. “I look forward to having some demos of something like this happening at ICRA 2028 or 2029,” Raibert quipped.

Following his keynote, IEEE Spectrum spoke with Raibert, and he answered five questions about where he wants robotics to go next.

At the Institute, you’re starting to share your vision for the future of robotics more than you did at Boston Dynamics. Why is that?

Marc Raibert: At Boston Dynamics, I don’t think we talked about the vision. We just did the next thing, saw how it went, and then decided what to do after that. I was taught that when you wrote a paper or gave a presentation, you showed what you had accomplished. All that really mattered was the data in your paper. You could talk about what you want to do, but people talk about all kinds of things that way—the future is so cheap, and so variable. That’s not the same as showing what you did. And I took pride in showing what we actually did at Boston Dynamics.

But if you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision. So I’m starting to be a little more comfortable with doing that. Not to mention that at this point, we don’t have any actual results to show.

Right now, robots must be carefully trained to complete specific tasks. But Marc Raibert wants to give robots the ability to watch a human do a task, understand what’s happening, and then do the task themselves, whether it’s in a factory [top left and bottom] or in your home [top right and bottom]. Boston Dynamics AI Institute

The Institute will be putting a lot of effort into how robots can better manipulate objects. What’s the opportunity there?

Raibert: I think that for 50 years, people have been working on manipulation, and it hasn’t progressed enough. I’m not criticizing anybody, but I think that there’s been so much work on path planning, where path planning means how you move through open space. But that’s not where the action is. The action is when you’re in contact with things—we humans basically juggle with our hands when we’re manipulating, and I’ve seen very few things that look like that. It’s going to be hard, but maybe we can make progress on it. One idea is that going from static robot manipulation to dynamic can advance the field the way that going from static to dynamic advanced legged robots.

How are you going to make your vision happen?

Raibert: I don’t know any of the answers for how we’re going to do any of this! That’s the technical fearlessness—or maybe the technical foolishness. My long-term hope for the Institute is that most of the ideas don’t come from me, and that we succeed in hiring the kind of people who can have ideas that lead the field. We’re looking for people who are good at bracketing a problem, doing a quick pass at it (“quick” being maybe a year), seeing what sticks, and then taking another pass at it. And we’ll give them the resources they need to go after problems that way.

“If you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision.”

Are you concerned about how the public perception of robots, and especially of robots you have developed, is sometimes negative?

Raibert: The media can be over the top with stories about the fear of robots. I think that by and large, people really love robots. Or at least, a lot of people could love them, even though sometimes they’re afraid of them. But I think people just have to get to know robots, and at some point I’d like to open up an outreach center where people could interact with our robots in positive ways. We are actively working on that.

What do you find so interesting about dancing robots?

Raibert: I think there are a lot of opportunities for emotional expression by robots, and there’s a lot to be done that hasn’t been done. Right now, it’s labor-intensive to create these performances, and the robots are not perceiving anything. They’re just playing back the behaviors that we program. They should be listening to the music. They should be seeing who they’re dancing with, and coordinating with them. And I have to say, every time I think about that, I wonder if I’m getting soft because robots don’t have to be emotional, either on the giving side or on the receiving side. But somehow, it’s captivating.

Marc Raibert was a professor at Carnegie Mellon and MIT before founding Boston Dynamics in 1992. He now leads the Boston Dynamics AI Institute.

This article appears in the August 2023 print issue as “5 Questions for Marc Raibert.”



The lateral line system of zebrafish consists of the anterior lateral line, with neuromasts distributed on the head, and the posterior lateral line, with neuromasts distributed on the trunk. The sensory afferent neurons are contained in the anterior and posterior lateral line ganglia, respectively. So far, the vast majority of physiological and developmental studies have focused on the posterior lateral line. However, studies that focus on the anterior lateral line, especially on its physiology, are very rare. The anterior lateral line involves different neuromast patterning processes, specific distribution of synapses, and a unique role in behavior. Here, we report our observations regarding the development of the lateral line and analyze the physiological responses of the anterior lateral line to mechanical and water jet stimuli. Sensing in the fish head may be crucial to avoid obstacles, catch prey, and orient in water current, especially in the absence of visual cues. Alongside the lateral line, the trigeminal system, with its fine nerve endings innervating the skin, could contribute to perceiving mechanosensory stimulation. Therefore, we compare the physiological responses of the lateral line afferent neurons to responses of trigeminal neurons and responsiveness of auditory neurons. We show that anterior lateral line neurons are tuned to the velocity of mechanosensory ramp stimulation, while trigeminal neurons either only respond to mechanical step stimuli or fast ramp and step stimuli. Auditory neurons did not respond to mechanical or water jet stimuli. These results may prove to be essential in designing underwater robots and artificial lateral lines, with respect to the spectra of stimuli that the different mechanosensory systems in the larval head are tuned to, and underline the importance and functionality of the anterior lateral line system in the larval fish head.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Two interesting things about this video: First, the “where’s the button” poke at 2:20, and second, the custom Spot-friendly wrench.

[ Boston Dynamics ]

This is one of the more interesting drone designs that I’ve seen recently, since it’s modular, and you can clip on wings and props. And somehow it just flies.

[ AIR Lab ]

This soft robotic gripper is not only 3D printed in a single print, it also doesn’t need any electronics to work. The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects.

[ UCSD ]

Thanks, Daniel!

Through this powerful collaboration with the U.S. Agency for International Development (USAID), we are proud to donate cutting-edge Skydio drones, complemented by 3D Scan technology and comprehensive professional training. These resources will aid the Office of the Prosecutor General to document the more than 115,000 instances of destroyed civilian infrastructure, and evidence of human rights abuses on frontline communities and liberated territories.

[ Skydio ]

Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting, it has remained an open problem. We present a deep learning pipeline consisting of a shape completion module that is based on a single depth image, and followed by a grasp predictor that is based on the predicted object shape.

[ DLR RM ]

This is a video announcing the opening of the MyoChallenge 23, part of the challenge track of the NeurIPS23 conference. This competition merges physiologically realistic musculoskeletal models and AI with the goal of creating controllers for locomotion and manipulation.

[ MyoChallenge ]

Thanks, Guillaume!

The new DJI Air 3 has a transmission range of 20 kilometers and a flight time of 46 minutes. Consumer drones have made a lot of progress in a pretty short time, haven’t they?

[ DJI ]

With [human driving’s track] record of nearly 43,000 deaths and 2.5 million injuries in the U.S. alone in 2021, we believe autonomous driving technology has the potential to save lives and improve mobility options for millions of people. The data to-date indicates that the Waymo Driver is reducing traffic injuries and fatalities in the places where we operate, and we aim to continue safely designing and deploying our Driver to help more people in more places.

Humans are bad drivers for sure, but according to expert Missy Cummings, as quoted by AP, “autonomous vehicles from Waymo, a spinoff of Google, are four times more likely than humans to crash.”

Watch Tanner Lecturers Fei-Fei Li and Eric Horvitz discuss AI and human values.

[ Stanford HAI ]

Tin Lun Lam writes, “in the last two months, we have organized a Lecture Series on Multi-robot Systems and invited eight world-renowned scholars to share their wisdom to help promote knowledge sharing and technological advancement in this field.” Here are two of the lectures; you can find the other six at the link below.

[ Freeform Robotics ]

Thanks, Tin Lun!



The robust detection of GNSS non-line-of-sight (NLOS) signals is of vital importance for land-based and close-to-land safe navigation applications. The use of GNSS measurements affected by NLOS can lead to large, unbounded positioning errors and a loss of safety. Due to the complex signal conditions in urban environments, machine learning and artificial intelligence techniques have recently been identified as potential tools to classify GNSS LOS/NLOS signals. The design of machine learning algorithms with GNSS features is an emerging field of research that must, however, be tackled carefully to avoid biased estimation results and to guarantee algorithms that generalize across different scenarios, receivers, antennas, and their specific installations and configurations. This work first provides new options to guarantee proper generalization of trained algorithms by means of a pre-normalization of features with models extracted in open-sky (nominal) scenarios. The second main contribution is a branched (or parallel) machine learning process that handles the intermittent presence of GNSS features in certain frequencies. This allows measurements in all available frequencies to be exploited, in contrast to current approaches in the literature that use only a single frequency. Detection by means of logistic regression provides not only a binary LOS/NLOS decision but also an associated probability, which can be used in the future to weight specific measurements. Detection with the proposed branched logistic regression on pre-normalized multi-frequency features has shown better results than state-of-the-art algorithms, reaching 90% detection accuracy in the validation scenarios evaluated.
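
As a rough sketch of the branched idea (the feature names, open-sky statistics, and branch layout below are illustrative, not the paper's exact design): features are first normalized against open-sky reference statistics, then routed to a logistic regression matching the pattern of available frequencies, so measurements are not discarded just because one frequency is missing, and each prediction carries a probability usable as a weight.

    # Branched LOS/NLOS logistic regression: one classifier per pattern of
    # available frequencies, features pre-normalized by open-sky statistics.
    # Feature names, statistics, and training data are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    OPEN_SKY_MEAN = {"cn0_L1": 45.0, "cn0_L5": 42.0}  # dB-Hz, assumed values
    OPEN_SKY_STD = {"cn0_L1": 3.0, "cn0_L5": 3.5}

    def normalize(meas: dict) -> dict:
        return {k: (v - OPEN_SKY_MEAN[k]) / OPEN_SKY_STD[k] for k, v in meas.items()}

    # One branch per availability pattern, trained offline on labeled data.
    branches = {
        ("cn0_L1",): LogisticRegression(),
        ("cn0_L1", "cn0_L5"): LogisticRegression(),
    }
    rng = np.random.default_rng(0)
    for feats, clf in branches.items():        # stand-in training data
        X = rng.normal(size=(200, len(feats)))
        y = (X.sum(axis=1) < 0).astype(int)    # 1 = NLOS (toy labels)
        clf.fit(X, y)

    def nlos_probability(meas: dict) -> float:
        # Route the measurement to the branch matching its available frequencies;
        # the returned probability can later serve as a measurement weight.
        feats = tuple(sorted(meas))
        x = np.array([[normalize(meas)[k] for k in feats]])
        return float(branches[feats].predict_proba(x)[0, 1])

    print(nlos_probability({"cn0_L1": 38.0}))                  # L1-only branch
    print(nlos_probability({"cn0_L1": 38.0, "cn0_L5": 35.0}))  # dual-frequency branch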

Robots currently provide only a limited amount of information about their future movements to human collaborators. In human interaction, communication through gaze can help by intuitively directing attention to specific targets. Whether and how this mechanism could benefit interaction with robots, and what a design of predictive robot eyes should look like in general, is not well understood. In a between-subjects design, four different types of eyes were therefore compared with regard to their attention-directing potential: a pair of arrows, human eyes, and two anthropomorphic robot eye designs. For this purpose, 39 subjects performed a novel, screen-based gaze cueing task in the laboratory. Participants’ attention was measured using manual responses and eye tracking. Information on the perception of the tested cues was provided through additional subjective measures. All eye models were overall easy to read and were able to direct participants’ attention. The anthropomorphic robot eyes were most efficient at shifting participants’ attention, as revealed by faster manual and saccadic reaction times. In addition, a robot equipped with anthropomorphic eyes was perceived as more competent. Abstract anthropomorphic robot eyes therefore seem to trigger a reflexive reallocation of attention, which points to a social and automatic processing of such artificial stimuli.

Deep-sea manganese nodules are abundant in the ocean, with high exploitation potential and commercial value, and have become mineral resources that coastal countries compete to develop. The pipeline-lifting mining system is currently the most promising deep-sea mining system, and the deep-sea mining vehicle is its core equipment; mining quality and efficiency rely on the mining vehicle to a great extent. Based on the topographic and geomorphic characteristics of deep-sea manganese nodule fields, a new deep-sea mining system built around an autonomous manganese nodule mining vehicle is proposed in this paper. A new mining method is proposed according to the seabed operating environment and functional requirements, and global traverse path planning for the autonomous mining vehicle based on this method is carried out. An arc round-trip acquisition path planning method is put forward, and simulation shows that the method effectively addresses the low efficiency of traversing acquisition and obstacle avoidance for mining vehicles.
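
The abstract does not specify the geometry of the arc round-trip pattern. Purely as an illustration of the general idea of arc-based coverage, the sketch below generates back-and-forth arc sweeps offset by the collector width; every parameter and the path shape itself are invented, not taken from the paper.

    # Illustrative arc round-trip coverage path: alternating arc sweeps offset by
    # the collector width. Geometry and parameters are invented, not the paper's.
    import math

    def arc_sweep_path(radius_m=20.0, collector_width_m=2.0, passes=5, pts=30):
        # Each pass traces a shallow arc across the strip; the next pass returns
        # in the opposite direction, offset by one collector width so that
        # adjacent swaths abut without gaps.
        waypoints = []
        for p in range(passes):
            order = range(pts) if p % 2 == 0 else reversed(range(pts))
            for i in order:
                theta = math.pi * i / (pts - 1)   # sweep the half-arc 0..pi
                x = radius_m * math.cos(theta)
                y = p * collector_width_m + 0.5 * collector_width_m * math.sin(theta)
                waypoints.append((x, y))
        return waypoints

    path = arc_sweep_path()
    print(len(path), path[0], path[-1])  # 150 waypoints over 5 abutting swaths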



It’s been just two years since Hangzhou, China–based Unitree introduced the Go1, a US $2,700 quadruped robot. And since then, the Go1 has had a huge influence over the small quadruped-research market, due to its unique combination of performance, accessibility, and being (as legged robots go) supercheap.

Unitree has just announced the Go2, a new version that manages to both be significantly better and super-duper-cheap—it’s faster and more agile and now even includes a lidar, but somehow costs just $1,600.

Okay, yes, some of the word choice in that video is slightly odd. But who cares, because that’s some very impressive, dynamic mobility at a shockingly low cost. The $1,600 base model, the Go2 Air, includes a chin-mounted 360- by 90-degree hemispherical lidar, which has a minimum sensing range of 0.05 meters for intelligent terrain navigation and obstacle avoidance. The Go2 can move at a brisk 2.5 meters per second with a 7-kilogram payload, and operates for up to 2 hours with an 8,000-milliamp-hour (mAh) battery. There’s even a graphical programming interface, if you have no idea what you’re doing but just want to mess around a little bit.
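
Those battery figures imply a fairly modest average draw, which is easy to estimate. The pack voltage in this back-of-the-envelope check is our assumption; the article quotes only capacity and runtime:

    # Rough average power draw of the Go2 Air from the quoted figures.
    # The pack voltage is assumed, not stated in the article.
    capacity_mah = 8000
    runtime_h = 2.0
    pack_voltage_v = 28.8                             # assumed nominal 8S Li-ion pack

    avg_current_a = capacity_mah / 1000 / runtime_h   # 4.0 A average
    avg_power_w = avg_current_a * pack_voltage_v      # ~115 W average
    print(f"~{avg_current_a:.1f} A and ~{avg_power_w:.0f} W at {pack_voltage_v} V")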

Like its predecessor, the Go2 is available in several different models. For $2,800, you get the Go2 Pro, with an additional kilo of payload capacity, an extra meter per second of speed, onboard compute, and 4G connectivity. It also comes with side-following, which is what will let the robot go for a jog alongside you. And if you need even more, the Go2 Edu (which you’ll have to contact Unitree about directly) boasts a peak speed of a blistering 5 m/s, has force sensors on its feet, and will run for up to 4 hours with a 15,000-mAh battery.

“Go2 was a huge project with many difficulties we had to overcome,” Unitree founder and CEO Xingxing Wang told IEEE Spectrum. “We have researched and developed almost every mechanical part and circuit board. Through continuously improving the design, we tried hard to improve its performance and quality as well as reduce costs, which required a lot of work and effort.”

We also asked Wang what has impressed him the most about how other people have used his robots. “We are very happy that many global institutions and companies use our quadruped robot in meaningful and innovative development,” he says. He points to a couple of his favorite examples, including CSIC using a Go1 as a robot guide dog that he hopes will have significant benefits for the visually impaired, and a recent paper in Science Robotics that uses a brain-inspired multimodal hybrid-neural network running on a Go1 for place recognition.

Lastly, we wanted to know whether all of this new footage of Go2 balancing on two legs means that Unitree might be taking an interest in bipeds sometime soon. “I think it’s cool that a quadruped robot can realize bipedal locomotion,” says Wang. “We may try to make a bipedal robot on the basis of a quadruped robot.” Yeah, sign us up for that.



As the automotive industry navigates a new era of self-driving cars, every second matters. Information from sensors and electronics must reach the main CPU as quickly as possible, but faster data rates impact signals. Coping with data loss is imperative for safety. Validating receiver operation in a car’s noisy environment in both ideal and stressed conditions improves in-vehicle network (IVN) performance. Delve into the automotive trends driving focus on receiver testing, understand the implications of not testing, and learn how to prepare for receiver testing and validate its performance at the physical layer in this white paper.

Download this free whitepaper now!



As the responses of chat dialogue systems have become more natural, the empathy skill of dialogue systems has become an important new issue. In text-based chat dialogue systems, the definition of empathy is not precise, and how to design the kind of utterance that improves the user’s impression of receiving empathy is not clear, since the main method used is to imitate utterances and dialogues that humans consider empathetic. In this study, we focus on the necessity of grasping an agent as an experienceable Other, which is considered the most important factor when empathy is performed by an agent, and propose an utterance design that directly conveys, through text, the fact that the agent can experience and feel empathy. Our system has an experience database, including the system’s pseudo-experiences and feelings, used to show empathetic feelings. The system then understands the user’s experiences and empathizes with the user on the basis of its experience database, in line with the dialogue content. As a result of developing and evaluating several systems with different ways of conveying the aforementioned rationale, we found that conveying the rationale as a hearsay experience improved the user’s impression of receiving empathy more than conveying it as the system’s own experience. Moreover, an exhaustive evaluation shows that our empathetic utterance design using hearsay experience is effective in improving the user’s impression of the system’s cognitive empathy.
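
As a minimal sketch of the hearsay-experience design (the database entries, topic matching, and templates below are invented; the paper's system is far more elaborate): the system retrieves a pseudo-experience related to the user's utterance and phrases its empathy as something it has heard about, rather than as its own experience.

    # Minimal sketch of hearsay-style empathetic utterances backed by an
    # experience database. All entries and templates are invented.
    EXPERIENCE_DB = {
        "rain": {"experience": "getting soaked on the way home", "feeling": "miserable"},
        "exam": {"experience": "cramming all night for a test", "feeling": "anxious"},
    }

    def empathize(user_utterance: str, mode: str = "hearsay") -> str:
        topic = next((t for t in EXPERIENCE_DB if t in user_utterance.lower()), None)
        if topic is None:
            return "That sounds tough. Tell me more?"
        e = EXPERIENCE_DB[topic]
        if mode == "hearsay":  # the framing the study found more effective
            return (f"I've heard that {e['experience']} makes people feel "
                    f"{e['feeling']}. That must be hard.")
        return f"When I experienced {e['experience']}, I felt {e['feeling']} too."

    print(empathize("I got caught in the rain today"))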
