Feed aggregator

Due to the severe consequences of their possible failure, robotic systems must be rigorously verified so as to guarantee that their behavior is correct and safe. Such verification, carried out on a model, needs to cover various behavioral properties (e.g., safety and liveness), but also, given the timing constraints of robotic missions, real-time properties (e.g., schedulability and bounded response). In addition, in order to obtain valid and useful verification results, the model must faithfully represent the underlying robotic system and should therefore take into account all possible behaviors of the robotic software under the actual hardware and OS constraints (e.g., the scheduling policy and the number of cores). These requirements put the rigorous verification of robotic systems at the intersection of at least three communities: the robotics community, the formal methods community, and the real-time systems community. Verifying robotic systems is thus a complex, interdisciplinary task that involves a number of disciplines and techniques (e.g., model checking, schedulability analysis, component-based design) and faces a number of challenges (e.g., formalization, automation, scalability). For instance, the use of formal verification (formal methods community) is hindered by the state-space explosion problem, whereas schedulability analysis (real-time systems community) is not suitable for behavioral properties. Moreover, current real-time implementations of robotic software are limited in terms of predictability and efficiency, leading to, e.g., unnecessary latencies. This is particularly flagrant at the level of locking protocols in robotic software. Such a situation may benefit from major theoretical and practical findings of the real-time systems community. In this paper, we propose an interdisciplinary approach that, by joining the forces of the different communities, provides a scalable and unified means to efficiently implement and rigorously verify real-time robots. First, we propose a scalable two-step verification solution that combines formal methods and schedulability analysis to verify both behavioral and real-time properties. Second, we devise a new multi-resource locking mechanism that is efficient, predictable, and suitable for real-time robots, and we show how it improves the latter’s real-time behavior. In both cases, we use a real drone example to show how our approach compares favorably to existing approaches in the literature. This paper is a major extension of the RTCSA 2020 publication “A Two-Step Hybrid Approach for Verifying Real-Time Robotic Systems.”

This paper reports on a new approach to Signal Temporal Logic (STL) control synthesis that 1) utilizes a navigation function as the basis to construct a Control Barrier Function (CBF), and 2) composes navigation-function-based barrier functions using nonsmooth mappings to encode Boolean operations between the predicates that those barrier functions encode. Because of these two key features, the reported approach 1) covers a larger fragment of STL compared to existing approaches, 2) alleviates the computational cost associated with evaluating the control law in existing STL control barrier function methodologies, and 3) simultaneously relaxes some of the conservativeness of smooth combinations of barrier functions as a means of implementing Boolean operators. The paper demonstrates the efficacy of this new approach with three simulation case studies: one illustrating how complex STL motion-planning specifications can be realized, a second highlighting the reduced conservativeness of the approach in comparison to existing methods, and a third showing how this technology can be brought to bear to push the envelope in the context of human-robot social interaction.
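
One way to picture the nonsmooth composition described above: if each predicate is encoded by a barrier function that is non-negative exactly when the predicate holds, conjunction and disjunction can be taken as pointwise min and max. The sketch below only illustrates that idea on toy predicates (a goal region and an obstacle), and is not the paper's navigation-function construction.

```python
import numpy as np

# Hypothetical predicate barrier functions: b_i(x) >= 0 exactly when
# predicate i holds. These toy regions are placeholders.
def b_goal(x, goal=np.array([4.0, 4.0]), radius=1.0):
    # "be within `radius` of the goal"
    return radius - np.linalg.norm(x - goal)

def b_obstacle(x, center=np.array([2.0, 2.0]), radius=0.5):
    # "stay outside the obstacle"
    return np.linalg.norm(x - center) - radius

# Nonsmooth Boolean composition: conjunction -> pointwise min,
# disjunction -> pointwise max.
def b_and(x, *bs):
    return min(b(x) for b in bs)

def b_or(x, *bs):
    return max(b(x) for b in bs)

x = np.array([3.5, 3.8])
# "reach the goal region AND avoid the obstacle"
print(b_and(x, b_goal, b_obstacle))   # > 0 means both predicates hold at x
```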

Low stiffness, large stroke, and axial force capabilities make Extensile Fluidic Artificial Muscles (EFAMs) feasible soft actuators for continuum soft robots. EFAMs can be used to construct soft actuated structures that feature large deformation and enable soft robots to access large effective workspaces. Although FAM axial properties have been well studied, their bending behavior is not well characterized in the literature. Static and dynamic bending properties of a cantilevered EFAM specimen were investigated over a pressure range of 5–100 psi. The static properties were then estimated using an Euler-Bernoulli beam model and discrete elastic rod models. The experiments provided data for the determination of the bending stiffness, damping ratio, and natural frequency of the tested specimen. The bending stiffness and the damping ratio were found to change fourfold over the pressure range. The experimentally validated bending properties of the EFAM provide insights into structural and control considerations for soft robots. Future work will utilize the data and models obtained in this study to predict the behavior of an EFAM-actuated continuum robot carrying payloads.
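
For intuition on how a bending stiffness can be extracted from cantilever measurements, the snippet below applies the standard Euler-Bernoulli tip-deflection relation delta = F L^3 / (3 EI). It is illustrative only, with placeholder load, deflection, and length values, and is not the authors' beam or discrete elastic rod model.

```python
# Standard Euler-Bernoulli cantilever relation (illustrative only): a tip
# load F on a cantilever of length L with flexural rigidity EI deflects the
# tip by delta = F * L**3 / (3 * EI), so an effective EI can be
# back-calculated from one measured force-deflection pair.
def effective_bending_stiffness(force_n, deflection_m, length_m):
    """Return EI [N*m^2] from a measured tip load and tip deflection."""
    return force_n * length_m**3 / (3.0 * deflection_m)

# Placeholder measurement: a 0.5 N tip load deflects a 0.3 m specimen by 20 mm.
EI = effective_bending_stiffness(0.5, 0.020, 0.3)
print(f"effective EI ~ {EI:.4f} N*m^2")
```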

The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data. Unfortunately, it is prohibitively expensive to generate such data sets on a physical platform. Therefore, state-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive and subsequently transfer the knowledge to the real robot (sim-to-real). Despite becoming increasingly realistic, all simulators are by construction based on models, hence inevitably imperfect. This raises the question of how simulators can be modified to facilitate learning robot control policies and overcome the mismatch between simulation and reality, often called the “reality gap.” We provide a comprehensive review of sim-to-real research for robotics, focusing on a technique named “domain randomization” which is a method for learning from randomized simulations.
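
As a rough illustration of what domain randomization looks like in practice, the sketch below samples a fresh set of simulator parameters at the start of each training episode; the parameter names and ranges are hypothetical and depend on the simulator and task.

```python
import random

# Hypothetical randomization ranges for a simulated robot; the exact
# parameters and bounds depend on the simulator and the task.
RANGES = {
    "friction":    (0.5, 1.5),
    "link_mass":   (0.9, 1.1),   # scale factor on the nominal mass
    "motor_delay": (0.0, 0.02),  # seconds
}

def sample_domain():
    """Draw one randomized set of simulator parameters."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

for episode in range(3):
    params = sample_domain()
    # In a real pipeline: sim.reset(**params), then roll out the policy.
    print(f"episode {episode}: {params}")
```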

Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as “focus on my right-hand instrument.” As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
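
To give a flavor of what such a servoing loop involves, here is a minimal proportional controller that nudges the camera so a detected tooltip stays near the image center; it is only a sketch under assumed gains and image size, not the smoothness-aware controller proposed in the paper.

```python
import numpy as np

def centering_velocity(tooltip_px, image_size=(640, 480), gain=0.002,
                       deadband_px=30.0):
    """Proportional image-based servoing: command a camera velocity that
    drives the detected tooltip toward the image center.

    Illustrative only; the gain and deadband are placeholders.
    """
    center = np.array(image_size, dtype=float) / 2.0
    error = np.asarray(tooltip_px, dtype=float) - center
    if np.linalg.norm(error) < deadband_px:
        return np.zeros(2)       # close enough: keep the camera still
    return -gain * error         # pan/tilt velocity proportional to the error

print(centering_velocity((500.0, 180.0)))
```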

New bionic technologies and robots are becoming increasingly common in workspaces and private spheres. It is thus crucial to understand concerns regarding their use in social and legal terms and the qualities they should possess to be accepted as ‘co-workers’. Previous research in these areas used the Stereotype Content Model to investigate, for example, attributions of Warmth and Competence towards people who use bionic prostheses, cyborgs, and robots. In the present study, we propose to differentiate the Warmth dimension into the dimensions of Sociability and Morality to gain deeper insight into how people with or without bionic prostheses are perceived. In addition, we extend our research to the perception of robots. Since legal aspects need to be considered if robots are expected to be ‘co-workers’, for the first time, we also evaluated current perceptions of robots in terms of legal aspects. We conducted two studies: In Study 1, participants rated visual stimuli of individuals with or without disabilities and low- or high-tech prostheses, and robots of different levels of Anthropomorphism in terms of perceived Competence, Sociability, and Morality. In Study 2, participants rated robots of different levels of Anthropomorphism in terms of perceived Competence, Sociability, and Morality, and additionally, Legal Personality, and Decision-Making Authority. We also controlled for participants’ personality. Results showed that attributions of Competence and Morality varied as a function of the technical sophistication of the prostheses. For robots, Competence attributions were negatively related to Anthropomorphism. Perception of Sociability, Morality, Legal Personality, and Decision-Making Authority varied as functions of Anthropomorphism. Overall, this study contributes to technological design, which aims to ensure high acceptance and minimal undesirable side effects, both with regard to the application of bionic instruments and robotics. Additionally, first insights into whether more anthropomorphized robots will need to be considered differently in terms of legal practice are given.

In this paper, we survey the emerging design space of expandable structures in robotics, with a focus on how such structures may improve human-robot interactions. We detail various implementation considerations for researchers seeking to integrate such structures in their own work and describe how expandable structures may lead to novel forms of interaction for a variety of different robots and applications, including structures that enable robots to alter their form to augment or gain entirely new capabilities, such as enhancing manipulation or navigation, structures that improve robot safety, structures that enable new forms of communication, and structures for robot swarms that enable the swarm to change shape both individually and collectively. To illustrate how these considerations may be operationalized, we also present three case studies from our own research in expandable structure robots, sharing our design process and our findings regarding how such structures enable robots to produce novel behaviors that may capture human attention, convey information, mimic emotion, and provide new types of dynamic affordances.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, Philadelphia
IEEE ARSO 2022: 28–30 May 2022, Long Beach, Calif.
ERF 2022: 28–30 June 2022, Rotterdam, Netherlands
RoboCup 2022: 11–17 July 2022, Bangkok
IEEE CASE 2022: 20–24 August 2022, Mexico City
CLAWAR 2022: 12–14 September 2022, Azores, Portugal

Enjoy today’s videos!

DALL-E 2 is amazing, and also superdumb, in that very specific AI-dumb way. I love it.

[ OpenAI ]

This has to be real. Please. I want it SO BADLY.

[ Agility Robotics ]

I was using my Skydio drone like this even before the software update ::shrug::

[ Skydio ]

How is this an April Fool’s joke? It’s a real skill that works!

[ Energy Robotics ]

This video from the University of Washington is the first look we’ve had at what DARPA RACER might look like: high speed off-road driving without any external references, not even GPS. Fun!

[ UW ]

You may not know this, but Roombas (especially the older ones) are superhackable, meaning that you can turn them into robotic sumo wrestlers:

So cool to see these old Roombas getting a new life of violence!

[ Twitter ]

Turn your friends into robots!

We demonstrate a novel interface concept in which interactive systems directly manipulate the user’s head orientation. We implement this using electrical muscle stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axes. As the first exploration of EMS for head actuation, we characterized which muscles can be robustly actuated.

[ HCIL ]

Measuring water quality throughout river networks with precision, speed and at lower cost than traditional methods is now possible with AquaBOT, an aquatic drone developed by Oak Ridge National Laboratory.

[ ORNL ]

Why robot dogs are better than real dogs: dancing.

[ RPL ]

OTTO Autonomous Mobile Robots (AMRs) automate common material handling tasks, big and small, to help manufacturers tackle labor shortages, scale their business, and outperform the competition.

[ OTTO ]

During “On the DL,” host Pieter Abbeel asks the guest of the week a series of questions unrelated to their professional pursuits. It’s a chance to get to know our interviewees a little better! This week’s guest is OSU’s Engineering School Dean Ayanna Howard.

[ Robot Brains ]

I think that this video from ABB might be trying to communicate something, but I have no idea what.

[ ABB ]

PneuBots is a modular soft robotics construction kit consisting of seven types of self-foldable segments with high tensile strength, three types of pneumatic connectors and splitters, and a custom-designed box. The kit enables various assemblies to be made with the modular pieces, allowing creators to explore the world of soft robotics in playful ways. When combined with a FlowIO device, PneuBots allows seamless programmability of the different assemblies, enabling artists, designers, engineers, and makers to create dynamic, shape-changing, and interactive works that can be used for education, storytelling, dynamic art, or expression of affect and gratitude.

[ PneuBots ]

Nowadays, social robots have become important companions for humans. The anthropomorphic features of robots, which are important in building a natural user experience and a trustworthy human-robot partnership, have attracted increasing attention. Among these features, the eyes draw most of the audience’s attention and are particularly important. This study aims to investigate the influence of robot eye design on users’ perception of trustworthiness.

[ CHI 2022 ]

Flying and ground robots complement each other in terms of their advantages and disadvantages. We propose a collaborative system combining flying and ground robots, using a universal physical coupling interface (PCI) that allows for momentary connections and disconnections between multiple robots/devices.

[ CHI 2022 ]

Here’s an early concept from a startup called Phantom Cybernetics, exploring how augmented reality can be used to control physical robots in useful ways. “What we’re trying to do is to bring this all to market as a mature hardware-agnostic platform others can readily use, deploy and build upon without re-inventing the wheel,” the company tells us, and they’d be happy to hear your thoughts about what they’re working on.

[ Phantom ]

Thanks, Mirek!

Andy Zeng at Google gives a talk on “Retrospectives on Scaling Robot Learning.”

Recent incredible results from models like BERT, GPT-3, and DALL-E make you wonder “what will it take to get to something like that for robots?” While we’ve made lots of progress, robot learning remains hard because scaling data collection is expensive. In this talk, I will discuss two views on how we might be able to work around this: (i) making the most out of our data, and (ii) robot learning from the Internet. I will dive into several projects in the context of learning visuomotor policies from demonstrations, where I will share key takeaways along the way, and conclude with some thoughts on where I think robot learning is headed.

[ MIT ]




An integral part of NASA’s plan to return astronauts to the Moon this decade is the Lunar Gateway, a space station that will be humanity’s first permanent outpost outside of low Earth orbit. Gateway, a partnership between NASA, the Canadian Space Agency (CSA), the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA), is intended to support operations on the lunar surface while also serving as a staging point for exploration to Mars.

Gateway will be significantly smaller than the International Space Station (ISS), initially consisting of just two modules with additional modules to be added over time. The first pieces of the station to reach lunar orbit will be the Power and Propulsion Element (PPE) attached to the Habitation and Logistics Outpost (HALO), scheduled to launch together on a SpaceX Falcon Heavy rocket in November of 2024. The relatively small size of Gateway is possible because the station won’t be crewed most of the time—astronauts may pass through for a few weeks, but the expectation is that Gateway will spend about 11 months out of the year without anyone on board.

This presents some unique challenges for Gateway. On the ISS, astronauts spend a substantial amount of time on station upkeep, but Gateway will have to keep itself functional for extended periods without any direct human assistance.

“The things that the crew does on the International Space Station will need to be handled by Gateway on its own,” explains Julia Badger, Gateway Autonomy System Manager at NASA’s Johnson Space Center. “There’s also a big difference in the operational paradigm. Right now, ISS has a mission control that’s full time. With Gateway, we’re eventually expecting to have just eight hours a week of ground operations.” The hundreds of commands that the ISS receives every day to keep it running will still be necessary on Gateway—they’ll just have to come from Gateway itself, rather than from humans back on Earth.

“It's a new way of thinking compared to ISS. If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.” —Julia Badger, NASA JSC

To make this happen, NASA is developing a Vehicle System Manager, or VSM, that will act like the omnipresent computer system found on virtually every science-fiction starship. The VSM will autonomously manage all of Gateway’s functionality, taking care of any problems that come up to the extent that they can be managed with clever software and occasional input from a distant human. “It's a new way of thinking compared to ISS,” explains Badger. “If something breaks on Gateway, we either have to be able to live with it for a certain amount of time, or we’ve got to have the ability to remotely or autonomously fix it.”

While Gateway itself can be thought of as a robot of sorts, there’s a limited amount that can be reasonably and efficiently done through dedicated automated systems, and NASA had to find a compromise between redundancy on the one hand and complexity and mass on the other. For example, there was some discussion about whether Gateway’s hatches should open and close on their own, and NASA ultimately decided to leave the hatches manually operated. But that doesn’t necessarily mean that Gateway won’t be able to open its hatches without human assistance: it just means that there will be a need for robotic hands rather than human ones.

“I hope eventually we have robots up there that can open the hatches,” Badger tells us. She explains that Gateway is being designed with potential intra-vehicular robots (IVR) in mind, including things like adding visual markers to important locations, placing convenient charging ports around the station interior, and designing the hatches such that the force required to open them is compatible with the capabilities of robotic limbs. Parts of Gateway’s systems may be modular as well, able to be removed and replaced by robots if necessary. “What we’re trying to do,” Badger says, “is make smart choices about Gateway’s design that don’t add a lot of mass but that will make it easier for a robot to work within the station.”

Robonaut at its test station in front of a manipulation taskboard on the ISS. JSC/NASA

NASA already has a substantial amount of experience with IVR. Robonaut 2, a full-sized humanoid robot, spent several years on the International Space Station starting in 2011, learning how to perform tasks that would otherwise have to be done by human astronauts. More recently, a trio of toaster-sized, cubical, free-flying robots called Astrobees have taken up residence on the ISS, where they’ve been experimenting with autonomous sensing and navigation. A NASA project called ISAAC (Integrated System for Autonomous and Adaptive Caretaking) is currently exploring how robots like Astrobee could be used for a variety of tasks on Gateway, from monitoring station health to autonomously transferring cargo, although at least in the near term, in Badger’s opinion, “maintenance of Gateway, like using robots that can switch out broken components, is going to be more important than logistics types of tasks.”

Badger believes that a combination of a generalized mobile manipulator like Robonaut 2 and a free flyer like Astrobee makes for a good team, and this combination is currently the general concept for Gateway IVR. This is not to say that the intra-vehicular robots that end up on Gateway will look like the robots that have been working on the ISS, but they’ll be inspired by them, and will leverage all of the experience that NASA has gained with its robots on ISS so far. It might also be useful to have a limited number of specialized robots, Badger says. “For example, if there was a reason to get behind a rack, you may want a snake-type of robot for that.”

An Astrobee robot (this one is named Bumble) on the ISS. JSC/NASA

While NASA is actively preparing for intra-vehicular robots on Gateway, such robots do not yet exist, and the agency may not be building these robots itself, instead relying on industry partners to deliver designs that meet NASA’s requirements. At launch, and likely for the first several years at least, Gateway will have to take care of itself without internal robotic assistants. However, one of the goals of Gateway is to operate itself completely autonomously for up to three weeks without any contact with Earth at all, mimicking the three-week solar conjunction between Earth and Mars, when the sun blocks any communications between the two planets. “I think that we will get IVR on board,” Badger says. “If we really want Gateway to be able to take care of itself for 21 days, IVR is going to be a very important part of that. And having a robot is absolutely something that I think is going to be necessary as we move on to Mars.”

"Having a robot is absolutely something that I think is going to be necessary as we move on to Mars.”—Julia Badger, NASA JSC

Intra-vehicular robots are just half of the robotic team that will be necessary to keep Gateway running autonomously long-term. Space stations rely on complex external infrastructure for power, propulsion, thermal control, and much more. Since 2001, the ISS has been home to Canadarm2, a 17.6m robotic arm, which is able to move around the station to grasp and manipulate objects while under human control from either inside the station or from the ground.

The Canadian Space Agency (CSA) in partnership with MDA is developing a new robotic arm system for Gateway, called Canadarm3, scheduled to launch in 2027. Canadarm3 will include an 8.5m long arm for grappling spacecraft and moving large objects, as well as a smaller, more dexterous robotic arm that can be used for delicate tasks. The smaller arm can even repair the larger arm if necessary. But what really sets Canadarm3 apart from its predecessors is how it’s controlled, according to Daniel Rey, Gateway Chief Engineer and Systems Manager at CSA. “One of the very novel things about Canadarm3 is its ability to operate autonomously, without any crew required,” Rey says. This capability relies on a new generation of software and hardware that gives the arm a sense of touch as well as an ability to react to its environment without direct human supervision.

"With Canadarm3, we realize that if we want to get ready for Mars, more autonomy will be required."—Daniel Rey, CSA

Even though Gateway will be a thousand times farther away from Earth than the ISS, Rey explains that the added distance (about 400,000 km) isn’t what really necessitates Canadarm3’s added autonomy. “Surprisingly, the location of Gateway in its orbit around the Moon has a time delay to Earth that is not all that different from the time delay in low Earth orbit when you factor in various ground stations that signals have to pass through," says Rey. "With Canadarm3, we realize that if we want to get ready for Mars where that will no longer be the case, more autonomy will be required.”

Canadarm3’s autonomous tasks on Gateway will include external inspection, unloading logistics vehicles, deploying science payloads, and repairing Gateway by swapping damaged components with spares. Rey tells us that there will also be a science logistics airlock, with a moving table that can be used to pass equipment in and out of Gateway. “It'll be possible to deploy external science, or to bring external systems inside for repair, and for future internal robotic systems to cooperate with Canadarm3. I think that'll be a really exciting thing to see.”

Even though it’s going to take a couple of extra years for Gateway’s robotic residents to arrive, the station will be operating mostly autonomously (by necessity) as soon as the Power and Propulsion Element and the Habitation and Logistics Outpost begin their journey to lunar orbit in November of 2024. Several science payloads will be along for the ride, including helium physics and space weather experiments.

Gateway itself, though, is arguably the most important experiment of all. Its autonomous systems, whether embodied in internal and external robots or not, will be undergoing continual testing, and Gateway will need to prove itself before we’re ready to trust its technology to take us into deep space. In addition to being able to operate for 21 days without communications, one of Gateway’s eventual requirements is to be able to function for up to three years without any crew visits. This is the level of autonomy and reliability that we’ll need to be prepared for exploration of Mars, and beyond.




Recent years have seen an increase in the aging population alongside a decrease in the available nursing staff. These two factors combined present a challenging problem for the future and have become a political issue in many countries. Technological advances have made the use of robotics possible in new application fields such as care, and it thus appears to be a viable technological avenue to address the projected nursing labor shortage. The introduction of robots in nursing care creates an active triangular collaboration between the patient, nurse, and robot, which makes this area significantly different from traditional human–robot interaction (HRI) settings. In this review, we identify 133 robotic systems addressing nursing. We classify them according to two schemes: 1) a technical classification extended to include both patient and nurse and 2) a novel data-derived hierarchical classification based on use cases. We then analyze their intersection and build a multidimensional view of the state of technology. With this analytical tool, we describe an observed skew in the distribution of systems and identify gaps for future research. We also describe a link between the novel hierarchical use case classification and the typical phases of nursing care from admission to recovery.

The Free Energy Principle (FEP) postulates that biological agents perceive and interact with their environment in order to minimize a Variational Free Energy (VFE) with respect to a generative model of their environment. The inference of a policy (future control sequence) according to the FEP is known as Active Inference (AIF). The AIF literature describes multiple VFE objectives for policy planning that lead to epistemic (information-seeking) behavior. However, most objectives have limited modeling flexibility. This paper approaches epistemic behavior from a constrained Bethe Free Energy (CBFE) perspective. Crucially, variational optimization of the CBFE can be expressed in terms of message passing on free-form generative models. The key intuition behind the CBFE is that we impose a point-mass constraint on predicted outcomes, which explicitly encodes the assumption that the agent will make observations in the future. We interpret the CBFE objective in terms of its constituent behavioral drives. We then illustrate the resulting behavior of the CBFE by planning and interacting with a simulated T-maze environment. Simulations for the T-maze task illustrate how the CBFE agent exhibits an epistemic drive, and actively plans ahead to account for the impact of predicted outcomes. Compared to an Expected Free Energy (EFE) agent, the CBFE agent incurs expected reward in significantly more environmental scenarios. We conclude that CBFE optimization by message passing suggests a general mechanism for epistemic-aware AIF in free-form generative models.
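
For readers unfamiliar with the FEP terminology, the variational free energy being minimized has the standard textbook form below; the notation is generic and may differ from the paper's CBFE formulation.

```latex
% Standard variational free energy (textbook form from the FEP literature):
%   q(s)    -- variational posterior over hidden states s
%   p(o, s) -- generative model over observations o and hidden states s
\[
  F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\text{inference error}}
        \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
\]
```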

Bioacoustic monitoring has become increasingly popular for studying the behavior and ecology of vocalizing birds. This study aims to verify the practical effectiveness of localization technology for auditory monitoring of the endangered Eurasian bittern (Botaurus stellaris), which inhabits wetlands in remote areas with thick vegetation. Their crepuscular and highly secretive nature, except during the breeding season when they vocalize advertisement calls, makes them difficult to monitor. Because of the increasing rates of habitat loss, surveying accurate population numbers and their habitat needs are both important conservation tasks. We investigated the feasibility of localizing their booming calls, in a low frequency range of 100–200 Hz, using microphone arrays and the robot audition software HARK (Honda Research Institute, Audition for Robots with Kyoto University). We first simulated sound source localization of actual bittern calls for microphone arrays of radii 10 cm, 50 cm, 1 m, and 10 m, under different noise levels. Second, we monitored bitterns in an actual field environment using small microphone arrays (height = 12 cm; width = 8 cm) in the Sarobetsu Mire, Hokkaido Island, Japan. The simulation results showed that the spectral detectability was higher for larger microphone arrays, whereas the temporal detectability was higher for smaller microphone arrays. We also identified false detections in the smaller microphone arrays, coincidentally generated when the calculation came close to the transfer function for the opposite side. Despite technical limitations, we successfully localized booming calls of at least two males in a reverberant wetland surrounded by thick vegetation and riparian trees. This study is the first case of localizing such rare birds using small microphone arrays in the field, thereby presenting how this technology could contribute to auditory surveys of population numbers, behaviors, and microhabitat selection, all of which are difficult to investigate using other observation methods. This methodology is not only useful for a better understanding of bitterns, but can also be extended to investigate other rare nocturnal birds with low-frequency vocalizations, without direct ringing or tagging. Our results also suggest the future necessity of a robust localization system that avoids reverberation and echoing in the field, which result in false detections of the target birds.
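
As a simplified illustration of acoustic localization with a microphone pair (far simpler than HARK's array processing), the sketch below estimates a direction of arrival from the time difference of arrival found by cross-correlation; the sampling rate, microphone spacing, and signals are synthetic placeholders.

```python
import numpy as np

def tdoa_direction(sig_a, sig_b, fs, mic_distance_m, c=343.0):
    """Estimate the direction of arrival (degrees) for one microphone pair
    from the time difference of arrival (TDOA) found by cross-correlation.

    Illustrative only; a real pipeline would use multi-microphone methods.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples (sign = side)
    tdoa = lag / fs                            # seconds
    # Clamp to the physically possible range before taking arcsin.
    ratio = np.clip(tdoa * c / mic_distance_m, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Synthetic test: a 150 Hz "boom" arriving 3 samples earlier at microphone A.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
boom = np.sin(2 * np.pi * 150 * t)
print(tdoa_direction(np.roll(boom, -3), boom, fs, mic_distance_m=0.12))
```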

Robots are more and more present in our lives, particularly in the health sector. In therapeutic centers, some therapists are beginning to explore various tools like video games, Internet exchanges, and robot-assisted therapy. These tools will be at the disposal of these professionals as additional resources that can support them in assisting their patients intuitively and remotely. A humanoid robot can capture young children’s attention, which has in turn attracted the attention of researchers. It can be considered a play partner and can interact with children directly or without the presence of a third party. It can equally perform repetitive tasks that humans cannot achieve in the same way. Moreover, humanoid robots can assist a therapist by allowing them to teleoperate the robot and interact from a distance. In this context, our research focuses on robot-assisted therapy and introduces a humanoid social robot in a pediatric hospital care unit. This is performed by analyzing many aspects of the child’s behavior, such as verbal interactions, gestures, and facial expressions. Consequently, the robot can reproduce consistent experiences and actions for children with restricted communication capacities. This work applies a novel approach based on deep learning and reinforcement learning algorithms, supported by an ontological knowledge base that contains relevant information and knowledge about patients, screening tests, and therapies. In this study, we equipped the humanoid robot NAO to assist a therapist by enabling it: 1) to detect whether a child is autistic or not using a convolutional neural network, 2) to recommend a set of therapies based on a selection algorithm using a correspondence matrix between screening tests and therapies, and 3) to assist and monitor autistic children by executing tasks that require those therapies.



Happy National Robotics Week! It’s a very special NRW this year, because iRobot has seen fit to announce a brand new robot that is guaranteed to leave your floors 0 percent cleaner. And there was much rejoicing, because this is not a mop or a vacuum, but instead a new and updated version of iRobot Create: the Create 3.

Not only is the Create 3 based on a much more modern Roomba platform, it’s also compatible with ROS 2, the unexpectedly mature software that a surprising number of robots are now using to do cool stuff. If this mainstream vote of confidence in ROS 2 by a company like iRobot surprises you, well then maybe a Create 3 should be the next robot in your life.

It’s a little scary to recall that when iRobot last released an update to the Create DIY/educational robot platform, the calendar read 2014 (!). The Create 2 was based on the Roomba 600 series, which (for the record) is somehow still a workhorse in my house after a battery replacement. But Roombas have gotten way smarter over the past (not quite but close to a) decade; the Create 3, which is based on the Roomba i3, takes advantage of that.

Create 3 comes equipped with Wi-Fi, Ethernet-over-USB host, and Bluetooth. Create 3 is also equipped with a suite of intelligent technology including an inertial measurement unit (IMU), optical floor tracking sensor, wheel encoders, and infrared sensors for autonomous localization, navigation, and telepresence applications. Additionally, the robot includes cliff, bump and slip detection, along with LED lights and a speaker.

What's more, Create 3 brings a variety of new functionalities to users, including compatibility with ROS 2, industry-standard software for roboticists worldwide. Robots require many different components, such as actuators, sensors, and control systems; many of them must communicate with each other in order for the machine to work. ROS 2 enables this communication, even allowing amateurs like students to speed up the development of their projects by focusing more on their core application rather than the platform itself. Learning ROS 2 also gives students valuable experience that many companies are seeking when hiring robotics developers.
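
As a concrete taste of the communication model ROS 2 provides, here is a minimal rclpy node that publishes a string once per second; the topic name is arbitrary, and the example is not tied to the Create 3's specific message interfaces.

```python
# Minimal ROS 2 (rclpy) node: publishes a string on a topic once per second.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HelloPublisher(Node):
    def __init__(self):
        super().__init__("hello_publisher")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = "hello from ROS 2"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = HelloPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```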

But wait! There’s even more! Create 3 also supports Python and Ignition Gazebo, and is immediately available for 299 USD and 399 CAD with worldwide availability in the coming months.

A big advantage of using a Roomba i3 as the Create 3's jumping-off point is that it leverages all of the hardware smarts that iRobot has accumulated over the seemingly hundreds of years that they’ve been making different flavors of Roombas. Roombas are incredibly rugged and reliable; I’ve had two of my Roombas throw themselves down a flight of stairs (let’s not get into whose fault that was) and they emerged entirely unscathed. You can expect the Create 3 to take almost whatever you can throw at it—or more importantly, almost whatever can be thrown at it in an educational environment.

As far as the kind of clever capabilities that the Create 3 can be imbued with, well, iRobot has helpfully put together an “iRobot® Create® 3 Hookup Guide,” which seriously confused me for just a second after I read it. But as it turns out, it covers various methods of inserting dongles into the Create 3’s cargo bay, as shown here:

NVIDIA Jetson Xavier NX developer kit (on left), and Raspberry Pi 4 (on right), connected to a Create 3 interface.



Happy National Robotics Week! It’s a very special NRW this year, because iRobot has seen fit to announce a brand new robot that is guaranteed to leave your floors 0 percent cleaner. And there was much rejoicing, because this is not a mop or a vacuum, but instead a new and updated version of iRobot Create: the Create 3.

Not only is the Create 3 based on a much more modern Roomba platform, it’s also compatible with ROS 2, the unexpectedly mature software that a surprising number of robots are now using to do cool stuff. If this mainstream vote of confidence in ROS 2 by a company like iRobot surprises you, well then maybe a Create 3 should be the next robot in your life.

It’s a little scary to recall that when iRobot last released an update to the Create DIY/educational robot platform, the calendar read 2014 (!). The Create 2 was based on the Roomba 600 series, which (for the record) is somehow still a workhorse in my house after a battery replacement. But Roombas have gotten way smarter over the past (not quite but close to a) decade; the Create 3, which is based on the Roomba i3, takes advantage of that.

Create 3 comes equipped with Wi-Fi, Ethernet-over-USB host, and Bluetooth. It also carries a suite of intelligent technology, including an inertial measurement unit (IMU), an optical floor-tracking sensor, wheel encoders, and infrared sensors for autonomous localization, navigation, and telepresence applications. Additionally, the robot includes cliff, bump, and slip detection, along with LED lights and a speaker.

What's more, Create 3 brings a variety of new functionality to users, including compatibility with ROS 2, industry-standard software for roboticists worldwide. Robots require many different components, such as actuators, sensors, and control systems, and many of them must communicate with each other in order for the machine to work. ROS 2 enables this communication, allowing even newcomers like students to speed up development by focusing on their core application rather than on the platform itself. Learning ROS 2 also gives students valuable experience that many companies look for when hiring robotics developers.
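
To make that concrete, here is a minimal sketch of what "speaking ROS 2" to a Create 3-style base can look like: a small Python (rclpy) node that publishes geometry_msgs/Twist velocity commands. The cmd_vel topic name and the 0.1 m/s speed are illustrative assumptions, not details taken from iRobot's documentation, so check the robot's published interface before relying on them.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class DriveForward(Node):
    """Publish a constant forward velocity on an assumed cmd_vel topic."""

    def __init__(self):
        super().__init__('drive_forward')
        # Topic name is an assumption for illustration.
        self.publisher = self.create_publisher(Twist, 'cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.publish_velocity)

    def publish_velocity(self):
        msg = Twist()
        msg.linear.x = 0.1   # a gentle forward crawl, in m/s
        msg.angular.z = 0.0  # no rotation
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = DriveForward()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```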

But wait! There’s even more! Create 3 also supports Python and Ignition Gazebo, and it’s immediately available for $299 USD or $399 CAD, with worldwide availability in the coming months.

A big advantage of using a Roomba i3 as the Create 3's jumping-off point is that it leverages all of the hardware smarts that iRobot has accumulated over the seemingly hundreds of years that they’ve been making different flavors of Roombas. Roombas are incredibly rugged and reliable; I’ve had two of my Roombas throw themselves down a flight of stairs (let’s not get into whose fault that was) and they emerged entirely unscathed. You can expect the Create 3 to take almost whatever you can throw at it—or more importantly, almost whatever can be thrown at it in an educational environment.

As far as the kind of clever capabilities that the Create 3 can be imbued with, well, iRobot has helpfully put together an “iRobot® Create® 3 Hookup Guide,” which seriously confused me for a second when I first read it. But as it turns out, it covers various methods of inserting dongles into the Create 3’s cargo bay, as shown here:

NVIDIA Jetson Xavier NX developer kit (on left), and Raspberry Pi 4 (on right), connected to a Create 3 interface.

Now, we can’t talk about Create 3 without mentioning the upcoming TurtleBot 4. We’re not going to get into TB4 too much right now, because we’ve talked to a bunch of other folks about it and we’ll have a lot more to say very soon. But it’s absolutely true that Create 3 will be an integral part of TB4, in the same way that the Kobuki base was an integral part of the TurtleBot 2.

For more details on the Create 3, we spoke with Charlotte Redman, iRobot's product manager, and Steven Shamlian, principal electrical engineer.

IEEE Spectrum: Why is the Create important to iRobot?

Charlotte Redman: Part of iRobot’s DNA is STEM education. Providing access to others to get into the robotics industry is where the Create robots came from. The original Create helped enable the TurtleBot 1, which drove adoption of ROS. And so, with the Create 3, we’re building on that history of enabling access to the ROS community.

Steven Shamlian: I think it really comes down to iRobot being a group of people who believe strongly that everybody can be builders. That’s where the Create 1 and Create 2 came from, and Create 3 is the next huge step: You go from this basic serial interface to something featuring Ethernet, and USB, and WiFi, and ROS, and other things that we’re hoping to support soon. It’ll make it a lot easier for people to create cool stuff in their labs or their living rooms. That’s what we’re excited about, and that’s why we do it.

iRobot has a lot of new robots with some really cool new sensors and mapping capabilities and stuff, but none of that seems to have made it into the Create platform. Why not?

Shamlian: So, you're asking, why did iRobot base the Create 3 on the i3, and not on s9 or j7? I think there are two reasons. The first reason is cost. It's important for the robot to be accessible; it’s important for people to be able to afford this platform so that they can build their projects on it. It's important for them to be able to iterate as their interest grows. And so, we chose a robot with a set of sensors that we thought would provide the things that people requested the most, and found most interesting about Create 2, which was its rock-solid odometry. The new Create has a downward-facing sensor to do optical flow.

Why didn’t we use a robot with a camera? We could have, and we talked about it, but the fact of the matter is that compared to the things that we see roboticists using for their research projects, they would be very disappointed with what they’d get out of the robot if we gave them the camera that we're using. That's my suspicion. And so we thought, okay, we could package the camera and charge more for this robot, but people would probably be much happier buying the camera or depth sensor that they wanted to use, instead of us burdening them with something that they don't necessarily want.

When you think about who is going to be using the Create 3, is that imagined end user different than who you foresaw using earlier generations of the platform?

Shamlian: We're definitely targeting Create 3 at a higher age level than something like Root. I think our hope is that the robot will be accessible to high school students as well as postdocs.

Redman: Originally, with the earlier Create, iRobot didn’t have Root as a platform. Now that we have Root, that really covers block level coding and the basics of computational thinking for kids from K through 12. You can start with directional learning and you can get all the way up to Python with Root; and now the Create 3 is what’s next. You can program it in ROS 2, with the Python SDK, or even with the iRobot coding app.

What kind of autonomy will Create 3 have out of the box?

Shamlian: Create 2 didn’t really have access to on-robot behaviors. With Create 3, we’re hoping to be able to provide ROS actions for some behaviors where we think we close the loop on the robot well. We also hope to do it for those behaviors that use our sensors in a way that might be difficult for somebody to do offboard. It’s things like wall following, navigating through difficult spaces, and especially docking, getting the robot back to where it can charge. We can take care of that. That’s really the goal of this platform: With Create 3, we’re able to get people past questions like, “How do I make a mobility base that navigates and charges?” and instead help them work on the more interesting problems that appeal to them.
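
To give a sense of what consuming those on-robot behaviors might look like from the user's side, here is a minimal sketch of a ROS 2 action client that asks the robot to dock itself. The irobot_create_msgs package, the Dock action type, and the 'dock' action name are assumptions made for illustration; the actual interface names should be taken from iRobot's published Create 3 documentation.

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node

# Assumed interface package and action type; verify against the robot's docs.
from irobot_create_msgs.action import Dock


class DockClient(Node):
    def __init__(self):
        super().__init__('dock_client')
        # Action name 'dock' is an assumption for illustration.
        self._client = ActionClient(self, Dock, 'dock')

    def send_goal(self):
        # Wait for the on-robot action server, then request docking;
        # the robot closes the loop onboard and drives itself home.
        self._client.wait_for_server()
        return self._client.send_goal_async(Dock.Goal())


def main():
    rclpy.init()
    node = DockClient()
    future = node.send_goal()
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info('Dock goal sent; the robot handles the rest onboard.')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```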

So what’s iRobot’s relationship with ROS now?

Shamlian: I don't know what I can say about what we're using internally, but I can tell you that it wasn't a huge leap to make ROS 2 work on Create 3. I think iRobot believes in ROS 2 becoming more successful, and giving researchers and community members a common language. It only helps iRobot if more people use ROS 2.

iRobot has a really solid education program with Root and now Create 3. What do you think is the next step for folks who learn to code on those platforms?

Redman: Join iRobot!

Robotics and AI-based applications (RAI) are often said to be so technologically advanced that they should be held responsible for their actions, instead of the human who designs or operates them. The paper aims to prove that this thesis (the “exceptionalist claim”), as it stands, is both theoretically incorrect and practically inadequate. Indeed, the paper argues that such a claim is based on a series of misunderstandings about the very notion and functions of “legal responsibility”, which it then seeks to clarify by developing an interdisciplinary conceptual taxonomy. In doing so, it aims to set the premises for a more constructive debate over the feasibility of granting legal standing to robotic applications. After a short Introduction setting the stage of the debate, the paper addresses the ontological claim, distinguishing the philosophical from the legal debate on the notions of i) subjectivity and ii) agency, with their respective implications. The analysis allows us to conclude that the attribution of legal subjectivity and agency is a purely fictional and technical solution intended to facilitate legal interactions, and does not depend upon the intrinsic nature of the RAI. A similar structure is maintained with respect to the notion of responsibility, addressed first from a philosophical and then from a legal perspective, to demonstrate how the latter is often used to pursue both ex ante deterrence and ex post compensation. The focus on the second objective allows us to bridge the analysis towards functional (law-and-economics-based) considerations, and to discuss how even the attribution of legal personhood may be conceived as an attempt to simplify certain legal interactions and relations. Within such a framework, the discussion of whether to attribute legal subjectivity to the machine needs to be kept entirely within the legal domain and grounded in technical (legal) considerations, to be argued through a functional, bottom-up analysis of specific classes of RAI. That does not entail the attribution of animacy or the ascription of a moral status to the entity itself.

Strategic management and generation of internal energy in autonomous robots is a research topic of growing importance, especially for platforms that target long-endurance, long-range missions. It is fundamental for autonomous vehicles to have energy self-generation capability to improve energy autonomy, especially in situations where refueling is not viable, such as an autonomous sailboat crossing an ocean. Hence, the development of energy estimation and management solutions is an important research topic for better exploiting the available energy supply and generation potential. In this work, we revisit the challenges behind the design and construction of two fully autonomous sailboats and propose a methodology based on the Restricted Boltzmann Machine (RBM) to find the best way to manage the supplementary energy generated by solar panels. To verify the approach, we present a case study with the two sailboats we developed, each carrying a planned payload of electrical and electronic components; one of them is also equipped with an electric motor that may assist with the sailboat’s propulsion. Our current results show that it is possible to increase the system’s confidence in both the potential energy that can be harvested from the environment and the energy remaining in storage, optimizing the energy usage of autonomous vehicles and improving their energy robustness.
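
The abstract names the Restricted Boltzmann Machine as its core tool but does not spell out its mechanics, so as generic background only, here is a minimal NumPy sketch of a Bernoulli-Bernoulli RBM trained with one step of contrastive divergence (CD-1). The binary features (battery level, solar input, load) are made-up placeholders, not the paper's actual inputs or model.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def cd1_update(self, v0):
        # Positive phase: hidden activations given the observed data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to visible, then to hidden again.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # Approximate gradient of the log-likelihood and update the parameters.
        self.W += self.lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        self.b_v += self.lr * (v0 - p_v1)
        self.b_h += self.lr * (p_h0 - p_h1)


# Example: binary indicators of battery level, solar input, and load (placeholders).
rbm = RBM(n_visible=3, n_hidden=4)
for sample in np.array([[1, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float):
    rbm.cd1_update(sample)
```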
