
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2023: 13–16 March 2023, STOCKHOLM
Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

Have you ever wanted to tell a robot what to do using just words? Our team at Microsoft is introducing a new paradigm for robotics based on language, powered by ChatGPT. Our recent paper “ChatGPT for Robotics” describes a series of design principles that can be used to guide ChatGPT toward solving robotics tasks. In this video, we present a summary of our ideas, and experimental results from some of the many scenarios that ChatGPT enables in the domain of robotics, such as manipulation, aerial navigation, even full perception-action loops. Our goal is to empower even nontechnical users to harness the full potential of robotics using ChatGPT.

[ Microsoft ]
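The core pattern the paper describes can be sketched in a few lines: expose a small, constrained robot API in the prompt, have the model emit code against only those calls, and execute that code in a sandbox after a human reviews it. Everything below (function names, prompt wording) is an illustrative placeholder, not Microsoft's actual interface.

```python
# Sketch of the "ChatGPT for Robotics" prompting pattern. The API names are
# illustrative placeholders, not the paper's real function library.

ROBOT_API = """You control a drone through these functions only:
  takeoff(), fly_to(x, y, z), land()
Reply with Python code that calls them, nothing else."""

def build_prompt(task):
    """Combine the fixed API description with a natural-language task."""
    return f"{ROBOT_API}\n\nTask: {task}"

def execute_generated_code(code, api_functions):
    """Run model-generated code in a sandbox that exposes only the robot API
    (after a human has reviewed the code)."""
    exec(code, {"__builtins__": {}}, dict(api_functions))
```

Constraining the model to a named function library is what lets nontechnical users stay in the loop: the generated program is short, readable, and auditable before anything moves.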

In which we see the moderate amount of progress that Tesla has made with its humanoid robot (including a questionably cut-together video of some manipulation), followed by Musk saying some stuff.

[ Tesla ]

Identifying hazardous situations at a rescue site in advance is critically important, which is why this emergency rescue robot was developed.

[ Unitree ]

The hairy caterpillar is a pest that poses a serious threat to everyday life in the Maldives, where it is devastating the local Indian almond trees, a critical species at the heart of the islands’ ecosystem. DJI Agriculture’s T30 spraying drones have been deployed in a series of groundbreaking entomological projects and have been shown to be effective at protecting against hairy caterpillars with minimal use of pesticide.

[ DJI ]

Never get tired of watching this.

[ JPL ]

Teleoperation creates a unique opportunity for experts to embody general-purpose robots and offer their services on a global scale.

[ Sanctuary ]

The qualification video for RoboCup 2023 in Bordeaux, France, of the Middle Size League team, Tech United Eindhoven.

[ Tech United ]

New year, new look for our test fleet! We’re rolling out a new design of our test vehicles.

[ Zoox ]

TRI is using AI technology to augment and enhance human capabilities to advance its mission towards achieving carbon neutrality, driving future product innovations, and creating harmonious communities for Toyota’s goal of “happiness for all.”

[ TRI ]

Pranali Desai and Jamie Barnes, software engineers at Torc Robotics, talk about their experiences navigating a male-dominated workplace.

[ Torc Robotics ]

Athena 3D is a leading additive-manufacturing-service bureau located in Tempe, AZ, that prints parts on demand. Printers run according to programmed specifications, and as jobs are completed (often at 3:00 am), the cobot removes the print bed, sets it on a storage rack, and places a clean print bed back into the printer. The application programming interface (API) then communicates with the printer to start the next job.

[ Fanuc ]
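The lights-out workflow described above amounts to a simple state machine: wait for the job to finish, swap beds, trigger the next job. Here is a minimal, purely illustrative Python sketch; none of these classes or method names correspond to Fanuc's or Athena 3D's actual software.

```python
# Illustrative sketch of the unattended print-bed swap cycle. All names here
# are hypothetical, not a real printer or cobot API.

class Printer:
    def __init__(self):
        self.status = "complete"   # current job just finished (often at 3:00 am)
        self.jobs_started = 0

    def start_next_job(self):
        self.status = "printing"
        self.jobs_started += 1

class Cobot:
    def swap_beds(self, printer, rack):
        """Move the finished bed to the storage rack and load a clean one."""
        rack.append("finished bed")

def run_cycle(printer, cobot, rack):
    """One lights-out cycle: only act when the current job is done."""
    if printer.status != "complete":
        return False               # job still running; cobot waits
    cobot.swap_beds(printer, rack)
    printer.start_next_job()       # the API tells the printer to begin the queued job
    return True
```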

Robotic leg prostheses: the future of assistive technology or a step too far? Powered leg prostheses could make walking, climbing stairs, and navigating obstacles easier for users. But despite their promise, robotic legs have remained elusive. Have you got a winning design? Register now to take part in our CYBATHLON Challenges, which will take place in March 2023.

[ Cybathlon ]

A robotic work cell that can pack novel and irregular 3D objects densely, using vision and tactile sensing to adjust for errors encountered during packing. Ninety-nine out of 100 five-item packing attempts were successful in these trials.

[ IML ]

Work on the ANT guidance, navigation, and control system for a future planetary-exploration walking system has finished. The final tests proved the system’s capabilities on unconsolidated, unstructured, and inclined terrains. The visual foothold adaptation relies on high-frequency, drift-free pose estimation and on an up-to-date map, so contact information is added to the map to incorporate changed surface structures. In addition, a load-bearing assessment can be performed to evaluate the stability of the next foothold before relying on it. This makes stable recovery from collapsed rock formations possible, which can increase the safety of future legged exploration missions.

[ DFKI ]
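As a rough illustration of that load-bearing assessment (not DFKI's actual implementation), one can ramp force onto a candidate foothold and reject it if the surface sinks more than a threshold before the full leg load is reached:

```python
# Hypothetical sketch of a foothold load-bearing check: gradually increase the
# contact force and reject the foothold if sinkage exceeds a limit.

def foothold_is_stable(probe_displacement_mm, max_load_n,
                       sink_limit_mm=5.0, step_n=50.0):
    """probe_displacement_mm(force) -> measured sinkage (mm) at that force.
    Returns True if the foothold carries max_load_n within the sinkage limit."""
    force = step_n
    while force <= max_load_n:
        if probe_displacement_mm(force) > sink_limit_mm:
            return False           # surface yields; pick another foothold
        force += step_n
    return True                    # carried full load within the sinkage limit
```

The thresholds and load steps here are invented; in practice they would depend on the robot's mass, leg compliance, and terrain model.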

Hod Lipson is the James and Sally Scapa Professor of Innovation in the department of mechanical engineering at Columbia University. He is a roboticist who works in the areas of artificial intelligence and digital manufacturing. He and his students design and build robots that do what you’d least expect them to do: self-replicate, self-reflect, ask questions, and even be creative.

[ Columbia ]

Auro from Ridecell introduces the Empty Vehicle Automation feature for car sharing, in which an empty vehicle (with no passenger inside it) repositions itself autonomously at an easily accessible location for the next rider.

[ Auro ]

Founded in 2014, Verity delivers fully autonomous indoor drone systems that are trusted in environments where failure is not an option. Based in Zurich, Switzerland, with global operations, Verity’s system is used to complete thousands of fully autonomous inventory checks every day in warehouses everywhere.
We are happy to have been a part of Amazon Web Services (AWS) Swiss Cloud Day! Verity CEO Raffaello D’Andrea shared the stage with other featured speakers.

[ Verity ]

The ALMI project, funded by the Assuring Autonomy International Programme at the University of York and PAL Robotics, focuses on the development of adaptation methods that enable the assistive-care TIAGo robot to cope with the uncertainty and disruptions unavoidable in a home environment. In this solution, the TIAGo robot has used both its speech interaction and its object-manipulation capabilities to help a user with mild motor and cognitive impairments in the daily activity of preparing a meal, including reacting in emergency situations. To achieve this, we used the Gazebo simulator to create an identical representation of the kitchen environment in which the TIAGo robot operates.

[ PAL Robotics ]

This spring 2022 GRASP Industry Talk, from Mujin, is called “We’re All About Robotics.”

Mujin started in 2011 in Tokyo, with founding members from the USA, Japan, and China. Since our humble beginnings from a bike garage (40 square meters), we’ve grown to become Tokyo’s largest robotics lab (14,000 square meters) through innovation and a focus on quality. Mujin helped China’s e-commerce giant JD.com build the world’s first fully automated logistics warehouse in 2017, and is now partnering with firms like Uniqlo, Accenture, and others to automate logistics and manufacturing around the world.

[ Mujin ]




Capturing vertical profiles of the atmosphere and measuring wind conditions can be of significant value for weather forecasting and pollution monitoring; however, collecting such data is limited by current approaches that use balloon-based radiosondes and expensive ground-based sensors. Multirotor vehicles are significantly affected by local wind conditions, and because they are underactuated, their response to the flow is visible as changes in orientation. From these changes in orientation, wind speed and direction can be estimated accurately with no additional sensors. In this work, we expand on and improve this method of wind speed and direction estimation and incorporate corrections for climbing flight to improve estimation during vertical profiling. These corrections were validated against sonic-anemometer data before being used to gather vertical profiles of the wind conditions around Volcán de Fuego in Guatemala at altitudes of up to 3,000 meters above ground level (AGL). The results show that our improved model increases the accuracy of multirotor wind estimation in vertical profiling, and that UAS can overcome some of the practical limitations of radiosondes in this application.
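A simplified version of the underlying idea can be sketched as follows, assuming steady hover and a quadratic drag model with a coefficient fitted against anemometer data; the paper's actual formulation, including the climbing-flight corrections, is more involved.

```python
import math

# Illustrative model (not the paper's exact one): in steady hover a multirotor
# tilts into the wind until aerodynamic drag balances the horizontal component
# of thrust, so wind can be recovered from attitude alone.
# Assumes quadratic drag: k_drag * v**2 = m * g * tan(tilt).

def wind_from_tilt(roll_rad, pitch_rad, mass_kg, k_drag):
    """Estimate wind speed (m/s) and direction (rad) from hover attitude."""
    g = 9.81
    # Combined tilt angle from roll and pitch
    tilt = math.atan(math.hypot(math.tan(roll_rad), math.tan(pitch_rad)))
    speed = math.sqrt(mass_kg * g * math.tan(tilt) / k_drag)
    direction = math.atan2(math.tan(roll_rad), math.tan(pitch_rad))
    return speed, direction
```

The drag coefficient `k_drag` is a placeholder that would be identified per airframe; the point is simply that larger steady-state tilt implies stronger wind, with no extra sensors required.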

HD maps are one of the core components of the self-driving pipeline. Despite the efforts of many companies to develop a completely independent vehicle, many state-of-the-art solutions rely on high-definition maps of the environment for localization and navigation. Nevertheless, the creation process for such maps can be complex and error-prone, or expensive if performed via ad hoc surveys. For this reason, robust automated solutions are required. Traffic lights are one fundamental component of a high-definition map, and traffic-light detection has long been a well-known problem in the autonomous-driving field. Still, the focus has always been on the light state, not on the light’s features (i.e., shape, orientation, pictogram). This work presents a pipeline for traffic-light HD-map creation designed to provide the accurate georeferenced position and description of all traffic lights seen by a camera mounted on a surveying vehicle. Our algorithm considers consecutive detections of the same light and uses Kalman filtering techniques to provide a smoother and more precise position for each target. Our pipeline has been validated for the detection and mapping tasks using the state-of-the-art DriveU Traffic Light Dataset. The results show that our model is robust even with noisy GPS data. Moreover, for the detection task, we highlight how our model can correctly identify even far-away targets that are not labeled in the original dataset.
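For a static landmark such as a traffic light, the Kalman filtering step reduces to a recursive measurement update with no prediction step. A minimal sketch, assuming isotropic noise and a 2-D georeferenced state (the paper's actual filter and noise models are more elaborate):

```python
# Minimal constant-position Kalman filter for fusing repeated detections of a
# static landmark. Isotropic scalar covariance is a simplifying assumption.

class StaticLandmarkKF:
    def __init__(self, first_meas, meas_var):
        self.x = list(first_meas)   # position estimate (east, north)
        self.p = meas_var           # estimate covariance (scalar, isotropic)
        self.r = meas_var           # measurement noise variance

    def update(self, z):
        """Fold in one new detection; the landmark is static, so no prediction."""
        k = self.p / (self.p + self.r)                        # Kalman gain
        self.x = [xi + k * (zi - xi) for xi, zi in zip(self.x, z)]
        self.p = (1.0 - k) * self.p                           # uncertainty shrinks
        return self.x
```

Each additional sighting of the same light pulls the estimate toward the measurements while the covariance shrinks, which is why consecutive detections yield a smoother, more precise georeferenced position even with noisy GPS.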



Today, a robotics startup called Figure is unveiling “the world’s first commercially viable general purpose humanoid robot,” called Figure 01. Shown in the rendering above, Figure 01 does not yet exist, but according to this morning’s press release, it will “have the ability to think, learn, and interact with its environment and is designed for initial deployment into the workforce to address labor shortages and over time lead the way in eliminating the need for unsafe and undesirable jobs.” Which sounds great, when (or if) it happens.

We are generally skeptical of announcements like these, where a company comes out of stealth with ambitious promises and some impressive renderings but little actual hardware to demonstrate along with them. What caught our eye in Figure’s case is its exceptionally qualified robotics team, led by its chief technology officer, Jerry Pratt. Pratt spent 20 years at the Florida Institute for Human and Machine Cognition (IHMC), where he led the team that took second place at the DARPA Robotics Challenge Finals. Working with DRC Atlas, NASA’s Valkyrie, and most recently Nadia, IHMC has established itself as a leader in robot design and control. And if anyone is going to take a useful humanoid robot from an engineering concept to commercial reality, these are the folks to do it.

Figure was founded in 2022 by Brett Adcock, who also founded Archer Aviation, which has successfully built and is currently flight-testing a commercial passenger eVTOL aircraft. Over the past year, the company has hired more than 40 engineers from institutions that include IHMC, Boston Dynamics, Tesla, Waymo, and Google X, most of whom have significant prior experience with humanoid robots or other autonomous systems.

“It’s our view that this is the best humanoid robotics team out there,” Adcock tells IEEE Spectrum. “Collectively, the team has probably built 12 major humanoid robots,” adds CTO Pratt. “We’ll have expertise in just about every part of the thousands of things that you need to do for humanoids.” Pratt says that initially, Figure isn’t expecting to use much in the way of new technology with its robot—it’s not based around some secret actuator technology or anything like that. “It’ll be a new design, with really solid engineering.”

The commercially viable general-purpose humanoid robot that Figure is working toward is going to look something like this:

Obviously, the above video (and all of the robot images in this article) are renderings, and do not show a real robot doing real things. However, these renderings are based on a CAD model of the actual robot that Figure plans to build, so Figure expects that its final hardware will be very similar to what it is showing today. Which, if that’s how it ultimately turns out, will be impressive: it’s a very slim form factor, which does put some limits on its performance. The final robot will be fully electric, 1.6 meters tall, weigh 60 kilograms with a 20 kg payload, and run for 5 hours on a charge.

Figure

“Having a humanoid form—it’s really tough doing the packaging,” explains Pratt. “In general, with technology that’s available today, you can hit somewhere around 50 and 60 percent on most human specs, like degrees of freedom, peak speeds and torques, things like that. It won’t be superhuman; we’ll be focusing on real-world applications and not trying to push the limits of pure performance.” This focus has helped Figure to constrain its design in pursuit of commercial utility: you need a robot to be slim in order to work in spaces designed for humans. With this design philosophy, you’re not going to get a robot that will be able to do backflips, but you are going to get a robot that can be productive in a cramped workspace or walk safely through a crowded warehouse.

This relates back to the reason why Figure is building a humanoid robot in the first place. The added complexity of legs has to be justified somehow, and Figure’s perspective is that building a robot without legs that has the necessary range of motion to do what it needs to do in a human workspace would be complex enough that you might as well just build the robot with legs anyway. And doing so opens up the opportunity (or perhaps the imperative) to generalize. “If you’re making humanoids, you pretty much have to get to general purpose,” says Pratt. “For just one application, there’ll probably always be a dedicated robot that’ll be better.”

“With today’s technology, it’s impossible to get to as good as a human, so I think the strategy of getting as close to a human as you can is a perfectly valid one.” —Jerry Pratt, Figure CTO

Figure, like most other companies working on commercial humanoids, sees warehouses as an obvious entry point. “The warehouse makes it easier on us,” says Adcock. “It’s indoors. There are no customers around. There are already AMRs [autonomous mobile robots] and cobots [collaborative robots] working around humans. And there’s a warehouse-management software system to manage high-level behaviors. Our bet here is that if we can figure out how to get one application that’s big enough and deploy enough robots, we can add new software as we go to do more things and over time manufacture really high volumes and get the robot to be affordable.” Adcock acknowledges that the robot must make financial sense in the market that it’s targeting. That is, if it’s going to take the place of human labor in a warehouse, it must be competitive in cost with human labor, which will be a serious challenge that may (at least initially) rely on some option for human teleoperation to maximize reliability.

Figure believes that it has a realistic shot at being the first company to actually commercialize a general-purpose humanoid robot, although both Adcock and Pratt pointed out that there is so much potential demand that they’re not especially worried about competition. “I think it’s just a question of getting there,” Pratt tells us. “There’s room for several companies to just get there, and I think we can be one of them.”

“I don’t think anybody’s going to dispute that general purpose humanoids will happen. I think it’s just a matter of when they’ll happen, and what that will look like.” —Brett Adcock, Figure founder

Getting there, as Figure makes explicit in its master plan, “will require significant advancements in technology.” Here is what the company believes it will need to make happen, in its own words:

  • System Hardware: Our team is designing a fully electromechanical humanoid, including hands. The goal is to develop hardware with the physical capabilities of a nonexpert human. We are measuring this in terms of range of motion, payload, torque, cost of transport and speed, and will continue to improve through rapid cycles of development, each cycle as part of a continuum.
  • Unit Cost: We’re aiming to reduce individual humanoid unit costs through high-rate volume manufacturing, working towards a sustainable economy of scale. We are measuring our costs through the fully burdened operating cost per hour. At high rates of volume manufacturing, [we are] optimistic unit cost will come down to affordable levels.
  • Safety: It’s essential that our humanoids will be able to interact with humans in the workplace safely. We will design them to be able to adhere to industry standards and corporate requirements.
  • Volume Manufacturing: We foresee not only needing to deliver a high-quality product but also needing to deliver it at an exceptionally high volume. We anticipate a steep learning curve as we exit prototyping and enter volume manufacturing. We are preparing for this by being thoughtful about design for manufacturing, system safety, reliability, quality, and other production planning.
  • Artificial Intelligence: Building an AI system that enables our humanoids to perform everyday tasks autonomously is arguably one of the hardest problems we face long-term. We are tackling this by building intelligent embodied agents that can interact with complex and unstructured real-world environments.

This all sounds very compelling, but it’s important to note that as far as we’re aware, Figure has not done any of it yet. It has goals and aims, it is designing towards those goals and aims, and it can do its best to foresee and anticipate some of the challenges that lie ahead and plan and prepare to the extent that it’s possible to do so. However, at this point it’s premature for us (or anyone) to judge whether or not the company will be successful, since it still has a lot of things to figure out. To be clear, I believe that Figure believes that it can, eventually, do what it says it plans to do. My criticism here is mostly that the company is doing more telling than showing—historically, this has not been a good strategy for robotics, which tends to be vulnerable to the underdelivery of overpromises.

Figure does acknowledge that this is going to be a hard process, and that the company faces “high risk and extremely low chances of success,” which is an eye-catching statement in the midst of what is otherwise rather a lot of uniformly positive hype, for lack of a better phrase. And Adcock understands that the loftier goals (like the “consumer household” and “off-world” applications) will likely take a while, telling us that the company “gets really excited about the potential here over a multidecade long period.”

A rendering of a humanoid robot shows a black faceplate that can also display information.
Figure

So what is the actual state of Figure’s robot right now? “We just finished our alpha build,” Adcock says. “It’s our first full-scale robot. We’re building five of them. We hope it will start to take its first steps within the next 30 days. And now we’ve started on our second-generation hardware and software version that we’ll have completed this summer.” It’s an aggressive timeline, and Figure hopes to be developing a new major version of both hardware and software every 6 months, indefinitely. “We think we’re positioned well,” Adcock continues. “Hopefully we’ll make our big milestones this year, and be in a position to be first to market. We’re going to try. We’re going to move as fast as we possibly can to hit that goal.”



Today, a robotics startup called Figure is unveiling “the world’s first commercially viable general purpose humanoid robot,” called Figure 01. Shown in the rendering above, Figure 01 does not yet exist, but according to this morning’s press release, it will “have the ability to think, learn, and interact with its environment and is designed for initial deployment into the workforce to address labor shortages and over time lead the way in eliminating the need for unsafe and undesirable jobs.” Which sounds great, when (or if) it happens.

We are generally skeptical of announcements like these, where a company comes out of stealth with ambitious promises and some impressive renderings but little actual hardware to demonstrate along with them. What caught our eye in Figure’s case is its exceptionally qualified robotics team, led by its chief technology officer, Jerry Pratt. Pratt spent 20 years at the Florida Institute for Human and Machine Cognition (IHMC), where he led the team that took second place at the DARPA Robotics Challenge Finals. Working with DRC Atlas, NASA’s Valkyrie, and most recently Nadia, IHMC has established itself as a leader in robot design and control. And if anyone is going to take a useful humanoid robot from an engineering concept to commercial reality, these are the folks to do it.

Figure was founded in 2022 by Brett Adcock, who also founded Archer Aviation, which has successfully built and is currently flight-testing a commercial passenger eVTOL aircraft. Over the past year, the company has hired more than 40 engineers from institutions that include IHMC, Boston Dynamics, Tesla, Waymo, and Google X, most of whom have significant prior experience with humanoid robots or other autonomous systems.

“It’s our view that this is the best humanoid robotics team out there,” Adcock tells IEEE Spectrum. “Collectively, the team has probably built 12 major humanoid robots,” adds CTO Pratt. “We’ll have expertise in just about every part of the thousands of things that you need to do for humanoids.” Pratt says that initially, Figure isn’t expecting to use much in the way of new technology with its robot—it’s not based around some secret actuator technology or anything like that. “It’ll be a new design, with really solid engineering.”

The commercially viable general-purpose humanoid robot that Figure is working toward is going to look something like this:

Obviously, the above video (and all of the robot images in this article) are renderings, and do not show a real robot doing real things. However, these renderings are based on a CAD model of the actual robot that Figure plans to build, so Figure expects that its final hardware will be very similar to what they are showing today. Which, if that’s how it ultimately turns out, will be impressive: it’s a very slim form factor, which does put some limits on its performance. The final robot will be fully electric, 1.6 meters tall, weigh 60 kilograms with a 20 kg payload, and run for 5 hours on a charge.

Figure

“Having a humanoid form—it’s really tough doing the packaging,” explains Pratt. “In general, with technology that’s available today, you can hit somewhere around 50 and 60 percent on most human specs, like degrees of freedom, peak speeds and torques, things like that. It won’t be superhuman; we’ll be focusing on real-world applications and not trying to push the limits of pure performance.” This focus has helped Figure to constrain its design in pursuit of commercial utility: you need a robot to be slim in order to work in spaces designed for humans. With this design philosophy, you’re not going to get a robot that will be able to do backflips, but you are going to get a robot that can be productive in a cramped workspace or walk safely through a crowded warehouse.

This relates back to the reason why Figure is building a humanoid robot in the first place. The added complexity of legs has to be justified somehow, and Figure’s perspective is that building a robot without legs that has the necessary range of motion to do what it needs to do in a human workspace would be complex enough that you might as well just build the robot with legs anyway. And doing so opens up the opportunity (or perhaps the imperative) to generalize. “If you’re making humanoids, you pretty much have to get to general purpose,” says Pratt. “For just one application, there’ll probably always be a dedicated robot that’ll be better.”

“With today’s technology, it’s impossible to get to as good as a human, so I think the strategy of getting as close to a human as you can is a perfectly valid one.” —Jerry Pratt, Figure CTO

Figure, like most other companies working on commercial humanoids, sees warehouses as an obvious entry point. “The warehouse makes it easier on us,” says Adcock. “It’s indoors. There are no customers around. There are already AMRs [autonomous mobile robots] and cobots [collaborative robots] working around humans. And there’s a warehouse-management software system to manage high-level behaviors. Our bet here is that if we can figure out how to get one application that’s big enough and deploy enough robots, we can add new software as we go to do more things and over time manufacture really high volumes and get the robot to be affordable.” Adcock acknowledges that the robot must make financial sense in the market that it’s targeting. That is, if it’s going to take the place of human labor in a warehouse, it must be competitive in cost with human labor, which will be a serious challenge that may (at least initially) rely on some option for human teleoperation to maximize reliability.

Figure believes that it has a realistic shot at being the first company to actually commercialize a general-purpose humanoid robot, although both Adcock and Pratt pointed out that there is so much potential demand that they’re not especially worried about competition. “I think it’s just a question of getting there,” Pratt tells us. “There’s room for several companies to just get there, and I think we can be one of them.”

“I don’t think anybody’s going to dispute that general purpose humanoids will happen. I think it’s just a matter of when they’ll happen, and what that will look like.” —Brett Adcock, Figure founder

Getting there, as Figure makes explicit in its master plan, “will require significant advancements in technology.” Here is what the company believes it will need to make happen, in its own words:

  • System Hardware: Our team is designing a fully electromechanical humanoid, including hands. The goal is to develop hardware with the physical capabilities of a nonexpert human. We are measuring this in terms of range of motion, payload, torque, cost of transport and speed, and will continue to improve through rapid cycles of development, each cycle as part of a continuum.
  • Unit Cost: We’re aiming to reduce individual humanoid unit costs through high-rate volume manufacturing, working towards a sustainable economy of scale. We are measuring our costs through the fully burdened operating cost per hour. At high rates of volume manufacturing, [we are] optimistic unit cost will come down to affordable levels.
  • Safety: It’s essential that our humanoids will be able to interact with humans in the workplace safely. We will design them to be able to adhere to industry standards and corporate requirements.
  • Volume Manufacturing: We foresee not only needing to deliver a high-quality product but also needing to deliver it at an exceptionally high volume. We anticipate a steep learning curve as we exit prototyping and enter volume manufacturing. We are preparing for this by being thoughtful about design for manufacturing, system safety, reliability, quality, and other production planning.
  • Artificial Intelligence: Building an AI system that enables our humanoids to perform everyday tasks autonomously is arguably one of the hardest problems we face long-term. We are tackling this by building intelligent embodied agents that can interact with complex and unstructured real-world environments.
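The “cost of transport” metric named in the System Hardware bullet has a standard dimensionless definition: energy spent per unit weight per unit distance. As a minimal sketch of how it is computed (the power, mass, and speed values below are hypothetical placeholders, not Figure’s specs):

```python
# Dimensionless cost of transport (CoT): power divided by weight times speed.
# Lower is better; walking humans are roughly CoT ~ 0.2-0.4, and many
# humanoid robots today are considerably worse.

G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w: float, mass_kg: float, speed_mps: float) -> float:
    """CoT = P / (m * g * v)."""
    return power_w / (mass_kg * G * speed_mps)

# Hypothetical numbers, for illustration only.
cot = cost_of_transport(power_w=500.0, mass_kg=60.0, speed_mps=1.2)
print(f"CoT = {cot:.2f}")
```

The metric is useful precisely because it is dimensionless: it lets a battery-powered humanoid be compared directly against humans, animals, and vehicles of any size.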

This all sounds very compelling, but it’s important to note that as far as we’re aware, Figure has not done any of it yet. It has goals and aims, it is designing towards those goals and aims, and it can do its best to foresee and anticipate some of the challenges that lie ahead and plan and prepare to the extent that it’s possible to do so. However, at this point it’s premature for us (or anyone) to judge whether or not the company will be successful, since it still has a lot of things to figure out. To be clear, I believe that Figure believes that it can, eventually, do what it says it plans to do. My criticism here is mostly that the company is doing more telling than showing—historically, this has not been a good strategy for robotics, which tends to be vulnerable to the underdelivery of overpromises.

Figure does acknowledge that this is going to be a hard process, and that the company faces “high risk and extremely low chances of success,” which is an eye-catching statement in the midst of what is otherwise rather a lot of uniformly positive hype, for lack of a better phrase. And Adcock understands that the loftier goals (like the “consumer household” and “off-world” applications) will likely take a while, telling us that the company “gets really excited about the potential here over a multidecade long period.”

A rendering of a humanoid robot shows a black faceplate that can also display information. Figure

So what is the actual state of Figure’s robot right now? “We just finished our alpha build,” Adcock says. “It’s our first full-scale robot. We’re building five of them. We hope it will start to take its first steps within the next 30 days. And now we’ve started on our second-generation hardware and software version that we’ll have completed this summer.” It’s an aggressive timeline, and Figure hopes to be developing a new major version of both hardware and software every 6 months, indefinitely. “We think we’re positioned well,” Adcock continues. “Hopefully we’ll make our big milestones this year, and be in a position to be first to market. We’re going to try. We’re going to move as fast as we possibly can to hit that goal.”

Introduction: Video-based clinical rating plays an important role in assessing dystonia and monitoring the effect of treatment in dyskinetic cerebral palsy (CP). However, evaluation by clinicians is time-consuming, and the quality of rating is dependent on experience. The aim of the current study is to provide a proof-of-concept for a machine learning approach to automatically assess scoring of dystonia using 2D stick figures extracted from videos. Model performance was compared to human performance.

Methods: A total of 187 video sequences of 34 individuals with dyskinetic CP (8–23 years, all non-ambulatory) were filmed at rest during lying and supported sitting. Videos were scored by three raters according to the Dyskinesia Impairment Scale (DIS) for arm and leg dystonia (normalized scores ranging from 0–1). Pixel coordinates of the left and right wrist, elbow, shoulder, hip, knee, and ankle were extracted using DeepLabCut, an open-source toolbox that builds on a pose estimation algorithm. Within a subset, tracking accuracy was assessed for a pretrained human model and for models trained with an increasing number of manually labeled frames. The mean absolute error (MAE) between DeepLabCut’s predicted body-point positions and the manual labels was calculated. Subsequently, movement and position features were calculated from the extracted body-point coordinates. These features were fed into a Random Forest Regressor to train a model to predict the clinical scores. The performance of the model trained with data from one rater, evaluated by MAE (model versus rater), was compared to inter-rater accuracy.

Results: A tracking accuracy of 4.5 pixels (approximately 1.5 cm) could be achieved by adding 15–20 manually labeled frames per video. The MAEs for the trained models were 0.21 ± 0.15 for arm dystonia and 0.14 ± 0.10 for leg dystonia (normalized DIS scores). The inter-rater MAEs were 0.21 ± 0.22 and 0.16 ± 0.20, respectively.
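The tracking-accuracy metric reported above reduces to a simple average of per-coordinate errors between DeepLabCut’s predictions and the manual labels. A minimal sketch in plain Python, noting that the abstract does not specify whether the error is per axis or Euclidean (this sketch assumes per-axis), and the coordinates are made up:

```python
# Mean absolute error (in pixels) between predicted and manually labeled
# body-point coordinates. Averages the per-axis absolute errors across
# all points; Euclidean distance per point would be an alternative.

def tracking_mae(predicted, labeled):
    """predicted, labeled: lists of (x, y) pixel coordinates."""
    errors = []
    for (px, py), (lx, ly) in zip(predicted, labeled):
        errors.append(abs(px - lx))
        errors.append(abs(py - ly))
    return sum(errors) / len(errors)

# Made-up wrist/elbow predictions vs. manual labels, for illustration.
pred = [(120.0, 240.0), (132.0, 301.0)]
true = [(123.0, 238.0), (130.0, 305.0)]
print(f"MAE = {tracking_mae(pred, true):.2f} px")
```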

Conclusion: This proof-of-concept study shows the potential of using stick figures extracted from common videos in a machine learning approach to automatically assess dystonia. Sufficient tracking accuracy can be reached by manually adding labels within 15–20 frames per video. With a relatively small data set, it is possible to train a model that can automatically assess dystonia with a performance comparable to human scoring.

Introduction: This study was motivated by the development of a social robot capable of speaking in more than one language simultaneously. However, the negative effect of background noise on speech comprehension is well documented in previous work, and this deteriorating effect is more pronounced when the background noise has speech-like properties. Hence, background speech in a simultaneously speaking bilingual robot can severely impair the speech comprehension of each person listening to the robot.

Methods: To improve speech comprehension and consequently, user experience in the intended bilingual robot, the effect of time expansion on speech comprehension in a multi-talker speech scenario was investigated. Sentence recognition, speech comprehension, and subjective evaluation tasks were implemented in the study.

Results: The obtained results suggest that a reduced speech rate, leading to an expansion of the speech time, in addition to increased pause duration in both the target and background speech, can lead to statistically significant improvements in both sentence recognition and speech comprehension. More interestingly, participants scored higher with the time-expanded multi-talker speech than with the standard-speed single-talker speech in both the speech comprehension and the sentence recognition tasks. However, this positive effect cannot be attributed merely to the time expansion, as we could not reproduce it in time-expanded single-talker speech.

Discussion: The results obtained in this study suggest a facilitating effect of the presence of the background speech in a simultaneously speaking bilingual robot provided that both languages are presented in a time-expanded manner. The implications of such a simultaneously speaking robot are discussed.



Secure Comms Shield provides a framework for protecting these distributed networks using a zero-trust network architecture, making it easier to implement secure communication.






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2023: 13–16 March 2023, STOCKHOLM
Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

This video presents the AmphiSAW robot, which relies on a unique wave-producing mechanism driven by a single motor. AmphiSAW is one of the fastest and most energy-efficient amphibious robots. Its bio-inspired mechanism is also bio-friendly: it allows the robot to swim among fish without intimidating them.

A paper on AmphiSAW appears in Bioinspiration and Biomimetics 2023.

[ BGU ]

It’s the whole-body gesturing here that’s the most impressive, I think.

[ Sanctuary AI ]

Some very impressive jumping from Cassie Cal.

[ UC Berkeley ]

I am pretty sure this is a fake robot arm, which means Northrop Grumman gets added to the list of “companies that really should be able to do things with real robot arms but aren’t for some reason.”

[ YouTube ]

This is not a great video, but it’s a really cool idea: Hod Lipson’s Robotics Studio course at Columbia teaches students to design, fabricate, and program robots that walk. Here are 49 of them.

[ Columbia ]

Robots throwing robots.

[ Recon Robotics ]

There are many moments in the Waymo Driver’s day when it finds itself at a crossroads and must decide what to do in a fraction of a second. Watch Software Engineer Daylen Yang break down the challenge of intersections and what we’re doing to build a safe driver—the Waymo Driver—for every road user.

[ Waymo ]

The final episode of NASA’s series on the history of lidar.

[ NASA ]

Kaitlyn Becker was working on her doctorate at Harvard University when she helped develop a soft robotic system that can handle complex objects by using entanglement grasping. She joins to explain how creatures of the sea inspired the robotic gripper and how it might be used in the future.

[ NSF ]

A panel on “Future Challenges and Big Problems” features UPenn GRASP Lab faculty members and is moderated by Vijay Kumar.

[ UPenn ]




Objective: The instrument THERapy-related InterACTion (THER-I-ACT) was developed to document therapeutic interactions comprehensively in the human therapist–patient setting. Here, we investigate whether the instrument can also reliably be used to characterise therapeutic interactions when a digital system with a humanoid robot as a therapeutic assistant is used.

Methods: Participants and therapy: Seventeen stroke survivors received arm rehabilitation (i.e., arm basis training (ABT) for moderate-to-severe arm paresis [n = 9] or arm ability training (AAT) for mild arm paresis [n = 8]) using the digital therapy system E-BRAiN over a course of nine sessions. Analysis of the therapeutic interaction: A total of 34 therapy sessions were videotaped. All therapeutic interactions provided by the humanoid robot during the first and the last (ninth) session of daily training were documented using THER-I-ACT, both in terms of their frequency and the time used for each type of interaction. Any additional therapeutic interaction given spontaneously by the supervising staff or by a human helper providing physical assistance (ABT only) was also documented. All ratings were performed by two trained independent raters.

Statistical analyses: Intraclass correlation coefficients (ICCs) were calculated for the frequency of occurrence and time used for each category of interaction observed.
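The ICC comes in several variants, and the abstract does not state which model was used. As an assumption, the sketch below implements the common two-way random-effects, absolute-agreement, single-rater form, ICC(2,1), in plain Python:

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# data[i][j] = score assigned to target (e.g., therapy session) i by rater j.

def icc_2_1(data):
    n = len(data)        # number of targets
    k = len(data[0])     # number of raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement yield an ICC of 1.0 (illustrative scores).
scores = [[0.25, 0.25], [0.5, 0.5], [1.0, 1.0], [0.75, 0.75]]
print(icc_2_1(scores))
```

In practice a statistics package would be used, but the hand computation makes the logic explicit: agreement is high when the variance between targets dominates the rater and error variance.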

Results: Therapeutic interactions could be documented comprehensively and were observed across the dimensions of information provision, feedback, and bond-related interactions. ICCs for the therapeutic-interaction category assessments from 34 therapy sessions by two independent raters were high (ICC ≥ 0.90) for almost all observed categories, both for occurrence frequency and for time used, and both for the interactions performed by the robot and, even though they were observed much less frequently, for the additional spontaneous interactions by the supervising staff and the helper. The ICC was similarly high (0.87) for an overall subjective rating of the patients’ concentration and engagement.

Conclusion: Therapeutic interactions can comprehensively and reliably be documented by trained raters using the instrument THER-I-ACT not only in the traditional patient–therapist setting, as previously shown, but also in a digital therapy setting with a humanoid robot as the therapeutic agent and for more complex therapeutic settings with more than one therapeutic agent being present.

This paper explores a mixed-assembly architecture trade study for a Built On-orbit Robotically assembled Gigatruss (BORG). Robotic in-space assembly (ISA) and servicing is a crucial field for expanding endeavors in space. Currently, large structures in space are commonly deployable only: they must be efficiently folded and packed into a launch vehicle (LV) and then deployed perfectly for operational status to be achieved. As structures grow ever larger, this scheme becomes less feasible, constrained by LV volume and mass requirements. ISA allows multiple launches to be used to create even larger structures. Common ISA proposals consist of either strut-by-strut or multiple-deployable-module construction methodologies. In this paper, a mixed assembly scheme is explored, and a trade study is conducted on its possible advantages with respect to several phases of a mission: 1) manufacturing, 2) stowage and transport, 3) ISA, and 4) servicing. Finally, a weighted decision matrix was created to help compare the advantages and disadvantages of the different architectural schemes.
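A weighted decision matrix of the kind mentioned at the end of the abstract can be sketched generically. The criteria mirror the four mission phases listed above, but the weights and 1–5 scores below are hypothetical placeholders, not values from the paper:

```python
# Weighted decision matrix: each architecture gets a score per criterion,
# each criterion carries an importance weight, and the weighted sum ranks
# the options.

WEIGHTS = {  # hypothetical importance weights, summing to 1.0
    "manufacturing": 0.2,
    "stowage_and_transport": 0.3,
    "in_space_assembly": 0.3,
    "servicing": 0.2,
}

SCORES = {  # hypothetical 1-5 ratings per architecture
    "strut_by_strut": {"manufacturing": 4, "stowage_and_transport": 5,
                       "in_space_assembly": 2, "servicing": 4},
    "deployable_modules": {"manufacturing": 3, "stowage_and_transport": 2,
                           "in_space_assembly": 5, "servicing": 3},
    "mixed_assembly": {"manufacturing": 3, "stowage_and_transport": 4,
                       "in_space_assembly": 4, "servicing": 4},
}

def weighted_total(scores, weights):
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(SCORES, key=lambda a: weighted_total(SCORES[a], WEIGHTS),
                 reverse=True)
for arch in ranking:
    print(f"{arch}: {weighted_total(SCORES[arch], WEIGHTS):.2f}")
```

The value of the matrix is less the final number than the forced explicitness: every stakeholder can see, and argue about, exactly which criterion and weight drove the ranking.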

During the past decade, we have witnessed an extraordinary flourishing of soft robotics. Rekindled interest in soft robots is partially associated with advances in manufacturing techniques that enable the fabrication of sophisticated multi-material robotic bodies with dimensions spanning multiple length scales. In recent manuscripts, a reader might find peculiar-looking soft robots capable of grasping, walking, or swimming. However, the growth in publication numbers does not always reflect real progress in the field, since many manuscripts employ very similar ideas and merely tweak soft body geometries. Therefore, we unreservedly agree with the sentiment that future research must move beyond “soft for soft’s sake.” Soft robotics is an undoubtedly fascinating field, but it requires a critical assessment of its limitations and challenges, enabling us to spotlight the areas and directions where soft robots will have the best leverage over their traditional counterparts. In this perspective paper, we discuss the current state of robotic research related to such important aspects as energy autonomy, electronics-free logic, and sustainability. The goal is to look critically at the prospects of soft robotics from two opposite points of view provided by early-career researchers and to highlight the most promising future direction: in our opinion, the employment of soft robotic technologies for soft bio-inspired artificial organs.

Motivated by the need for emergency steering to avoid collisions when a vehicle encounters a dangerous scene, and for stability control during collision avoidance, this paper proposes a planning and control framework. A path planner that accounts for the kinematics and dynamics of the vehicle system formulates a safe driving path under emergency conditions. An LQR lateral control algorithm is designed to compute the output steering-wheel angle. On this basis, an adaptive MPC control algorithm and a four-wheel braking-force distribution control algorithm are designed to achieve coordinated control of driving stability and collision-avoidance safety. Simulation results show that the proposed algorithm can complete the steering collision-avoidance task quickly and stably.
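The LQR step of such a framework can be illustrated on a deliberately simplified scalar model: a single lateral-error state, whereas real lateral control uses a multi-state bicycle model. All numbers below are illustrative assumptions, not values from the paper:

```python
# Discrete-time LQR for a scalar system x[t+1] = a*x[t] + b*u[t],
# minimizing the cost sum(q*x^2 + r*u^2). Iterates the Riccati recursion
# to a fixed point, then returns the state-feedback gain K (u = -K*x).

def scalar_lqr(a, b, q, r, iters=1000):
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)  # optimal gain K

# Mildly unstable lateral-error dynamics (a > 1), illustrative values.
a, b, q, r = 1.05, 0.5, 1.0, 0.1
K = scalar_lqr(a, b, q, r)
print(f"K = {K:.3f}, closed-loop pole = {a - b * K:.3f}")
```

The key property the sketch exhibits is the LQR guarantee: the closed-loop pole a − bK lies inside the unit circle, so the lateral error decays even though the open-loop dynamics are unstable.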
