


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CSIRO SubT Summit – December 10, 2021 – Online
ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ameca is the world’s most advanced human-shaped robot, representing the forefront of human robotics technology. Designed specifically as a platform for development of future robotics technologies, Ameca is the perfect humanoid robot platform for human-robot interaction.

Apparently, the eventual plan is to get Ameca to walk.

[ Engineered Arts ]

Looks like Flexiv had a tasty and exceptionally safe Thanksgiving!

But also kind of lonely :(

[ Flexiv ]

Thanks, Yunfan!

Cedars-Sinai is now home to a pair of Moxi robots, named Moxi and Moxi. Yeah, they should work on the names. But they've totally nailed the beeps!

[ Diligent Robotics ] via [ Cedars Sinai ]

Somehow we already have a robot holiday video, I don't know whether to be thrilled or horrified.

The Faculty of Electrical Engineering of the CTU in Prague wishes you a Merry Christmas and much success, health and energy in 2022!

[ CTU ]

Carnegie Mellon University's Iris rover is bolted in and ready for its journey to the moon. The tiny rover passed a huge milestone on Wednesday, Dec. 1, when it was secured to one of the payload decks of Astrobotic's Peregrine Lunar Lander, which will deliver it to the moon next year.

[ CMU ]

This robot has some of the absolute best little feetsies I've ever. Seen.

[ SDU ]

Thanks, Poramate!

With the help of artificial intelligence and four collaborative robots, researchers at ETH Zurich are designing and fabricating a 22.5-metre-tall green architectural sculpture.

[ ETH Zurich ]

Cassie Blue autonomously navigates on the second floor of the Ford Robotics Building at the University of Michigan. The total traverse distance is 200 m (656.168 feet).

[ Michigan Robotics ]

Thanks, Bruce!

The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) will be held in the UAE capital, Abu Dhabi, in June 2023, where tech innovators will compete for more than US$3 million in prize money by developing marine safety and security solutions.

[ MBZIRC ]

Madagascar Flying Labs and WeRobotics are using cargo drones to deliver essential medicines to very remote communities in northern Madagascar. This month, they delivered 250 doses of the Janssen COVID-19 vaccine for the first time, with many more such deliveries to come over the next 12 months.

[ WeRobotics ]

It's... Cozmo?

Already way overfunded on Kickstarter.

[ Kickstarter ] via [ RobotStart ]

At USC's Center for Advanced Manufacturing, we have taught the Baxter robot to manipulate fluid food substances to create pancake art from various user-created designs.

[ USC ]

Face-first perching for fixed-wing drones looks kinda painful, honestly.

[ EPFL ]

Video footage from NASA’s Perseverance Mars rover of the Ingenuity Mars Helicopter’s 13th flight on Sept. 4 provides the most detailed look yet of the rotorcraft in action.

During takeoff, Ingenuity kicks up a small plume of dust that the right camera, or “eye,” captures moving to the right of the helicopter during ascent. After its initial climb to its planned maximum altitude of 26 feet (8 meters), the helicopter performs a small pirouette to line up its color camera for scouting. Then Ingenuity pitches over, allowing the rotors’ thrust to begin moving it horizontally through the thin Martian air before moving offscreen. Later, the rotorcraft returns and lands in the vicinity of where it took off. The team targeted a different landing spot–about 39 feet (12 meters) from takeoff–to avoid a ripple of sand it landed on at the completion of Flight 12.

[ JPL ]

I'm not totally sold on the viability of commercial bathroom cleaning robots, but I do appreciate how well the technology seems to work. In the videos, at least.

[ SOMATIC ]

An interdisciplinary team at Harvard University School of Engineering and the Wyss Institute at Harvard University is building soft robots for older adults and people with physical impairments. Examples of these robots are the Assistive Hip Suit and Soft Robotic Glove, both of which have been included in the 2021-2022 Smithsonian Institution exhibit entitled "FUTURES".

[ SI ]

Subterranean robot exploration is difficult, with many mobility, communications, and navigation challenges that require an approach with a diverse set of systems and reliable autonomy. While prior work has demonstrated partial successes in addressing the problem, here we convey a comprehensive approach to the problem of subterranean exploration in a wide range of tunnel, urban, and cave environments. Our approach is driven by the themes of resiliency and modularity, and we show examples of how these themes influence the design of the different modules. In particular, we detail our approach to artifact detection, pose estimation, coordination, planning, control, and autonomy, and discuss our performance in the Final DARPA Subterranean Challenge.

[ CMU ]



Participatory design (PD) has been used to good success in human-robot interaction (HRI) but typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. In this article, we present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for domain expert co-design, automation, and evaluation of social robot behaviour. This method starts with typical PD, working with the domain expert(s) to co-design the interaction specifications and state and action space of the robot. It then replaces the traditional offline programming or WoZ phase by an in situ and online teaching phase where the domain expert can live-program or teach the robot how to behave whilst being embedded in the interaction context. We point out that this live teaching phase can be best achieved by adding a learning component to a WoZ setup, which captures implicit knowledge of experts, as they intuitively respond to the dynamics of the situation. The robot then progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in situ development also means that LEADOR supports a mutual shaping approach to social robotics. We draw on two previously published, foundational works from which this (generalisable) methodology has been derived to demonstrate the feasibility and worth of this approach, provide concrete examples in its application, and identify limitations and opportunities when applying this framework in new environments.
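
The live-teaching phase described above replaces offline programming with a learner that shadows the Wizard-of-Oz operator. Below is a minimal, hypothetical sketch of that idea in Python; the class name, the frequency-based policy, and the confidence thresholds are illustrative assumptions, not the LEADOR implementation.

```python
# Hypothetical sketch of a live-teaching phase: a learner shadows the
# Wizard-of-Oz operator, logging (state, action) pairs, and only proposes to act
# autonomously once its prediction for a state is sufficiently confident.
from collections import Counter, defaultdict

class LiveTeachingPolicy:
    def __init__(self, confidence_threshold=0.8, min_demonstrations=5):
        self.experience = defaultdict(Counter)    # state -> Counter of expert actions
        self.confidence_threshold = confidence_threshold
        self.min_demonstrations = min_demonstrations

    def record_demonstration(self, state, expert_action):
        """Store the domain expert's chosen action for this interaction state."""
        self.experience[state][expert_action] += 1

    def propose_action(self, state):
        """Return (action, autonomous) -- defer to the expert when unsure."""
        counts = self.experience[state]
        total = sum(counts.values())
        if total < self.min_demonstrations:
            return None, False                    # not enough teaching for this state yet
        action, n = counts.most_common(1)[0]
        return action, (n / total) >= self.confidence_threshold

# During the teaching phase every expert response is recorded; once propose_action()
# reports autonomy for the current state, the robot acts and the expert supervises.
policy = LiveTeachingPolicy()
policy.record_demonstration(state="user_disengaged", expert_action="offer_encouragement")
action, autonomous = policy.propose_action("user_disengaged")
```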

In recent years, the demand for remote services has increased with concerns regarding the spread of infectious diseases and employees’ quality of life. Many attempts have been made to enable store staff to provide various services remotely via avatars displayed to on-site customers. However, the workload required on the part of service staff by the emerging new work style of operating avatar robots remains a concern. No study has compared the performance and perceived workload of the same staff working locally versus remotely via an avatar. In this study, we conducted an experiment to identify differences between the performance of in-person services and remote work through an avatar robot in an actual public space. The results showed that there were significant differences in the partial performance between working via an avatar and working locally, and we could not find a significant difference in the overall performance. On the other hand, the perceived workload was significantly lower when the avatar robot was used. We also found that customers reacted differently to the robots and to the in-person participants. In addition, the workload perceived by operators in the robotic task was correlated with their personality and experience. To the best of our knowledge, this study is the first investigation of both performance and workload in remote customer service through robotic avatars, and it has important implications for the implementation of avatar robots in service settings.

In the inspection work involving foodstuffs in food factories, there are cases where people not only visually inspect foodstuffs, but must also physically touch foodstuffs with their hands to find foreign or undesirable objects mixed in the product. To contribute to the automation of the inspection process, this paper proposes a method for detecting foreign objects in food based on differences in hardness using a camera-based tactile image sensor. Because the foreign objects to be detected are often small, the tactile sensor requires a high spatial resolution. In addition, inspection work in food factories requires a sufficient inspection speed. The proposed cylindrical tactile image sensor meets these requirements because it can efficiently acquire high-resolution tactile images with a camera mounted inside while rolling the cylindrical sensor surface over the target object. By analyzing the images obtained from the tactile image sensor, we detected the presence of foreign objects and their locations. By using a reflective membrane-type sensor surface with high sensitivity, small and hard foreign bodies of sub-millimeter size mixed in with soft food were successfully detected. The effectiveness of the proposed method was confirmed through experiments to detect shell fragments left on the surface of raw shrimp and bones left in fish fillets.
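
As a rough illustration of the detection step described above, a hard inclusion pressed against soft food tends to show up as a small, localized high-contrast blob in the tactile image. The sketch below is not the authors' code: the sign convention (hard spots appearing brighter than a no-contact reference), the thresholds, and the function names are all assumptions.

```python
# Illustrative sketch only: flag sub-millimeter hard inclusions as small, bright
# blobs in the tactile image relative to a no-contact reference frame. The
# assumption that hard objects appear brighter depends on the membrane design.
import numpy as np
from scipy import ndimage

def detect_hard_spots(tactile_img, reference_img, intensity_thresh=30, min_area_px=4):
    """Return centroids (row, col) of candidate foreign objects.

    tactile_img, reference_img: 2-D uint8 arrays from the in-cylinder camera.
    intensity_thresh, min_area_px: assumed tuning parameters.
    """
    diff = tactile_img.astype(np.int16) - reference_img.astype(np.int16)
    mask = diff > intensity_thresh                 # local pressure concentration -> brighter pixels
    labels, n_blobs = ndimage.label(mask)
    centroids = []
    for i in range(1, n_blobs + 1):
        blob = labels == i
        if blob.sum() >= min_area_px:              # ignore single-pixel noise
            centroids.append(ndimage.center_of_mass(blob))
    return centroids
```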

In recent years, the governance of robotic technologies has become an important topic in policy-making contexts. The many potential applications and roles of robots in combination with steady advances in their uptake within society are expected to cause various unprecedented issues, which in many cases will increase the demand for new policy measures. One of the major issues is the way in which societies will address potential changes in the moral and legal status of autonomous social robots. Robot standing is an important concept that aims to understand and elaborate on such changes in robots’ status. This paper explores the concept of robot standing as a useful idea that can assist in the anticipatory governance of social robots. However, at the same time, the concept necessarily involves forms of speculative thinking, as it is anticipating a future that has not yet fully arrived. This paper elaborates on how such speculative engagement with the potential of technology represents an important point of discussion in the critical study of technology more generally. The paper then situates social robotics in the context of anticipatory technology governance by emphasizing the idea that robots are currently in the process of becoming constituted as objects of governance. Subsequently, it explains how specifically a speculative concept like robot standing can be of value in this process.



There’s no reliably good way of getting a human to trust a robot. Part of the problem is that robots, generally, just do whatever they’ve been programmed to do, and for a human, there’s typically no feeling that the robot is in the slightest bit interested in making any sort of non-functional connection. From a robot’s perspective, humans are fragile ambulatory meatsacks that are not supposed to be touched and who help with tasks when necessary, nothing more.

Humans come to trust other humans by forming an emotional connection with them, something that robots are notoriously bad at. An emotional connection obviously doesn’t have to mean love, or even like, but it does mean that there’s some level of mutual understanding and communication and predictability, a sense that the other doesn’t just see you as an object (and vice versa). For robots, which are objects, this is a real challenge, and with funding from the National Science Foundation, roboticists from the Georgia Tech Center for Music Technology have partnered with the Kennesaw State University dance department on a “forest” of improvising robot musicians and dancers who interact with humans to explore creative collaboration and the establishment of human-robot trust.

According to the researchers, the FOREST robots and accompanying musical robots are not rigid mimickers of human melody and movement; rather, they exhibit a remarkable level of emotional expression and human-like gesture fluency–what the researchers call “emotional prosody and gesture” to project emotions and build trust.

Looking up what “prosody” means will absolutely take you down a Wikipedia black hole, but the term broadly refers to parts of speech that aren’t defined by the actual words being spoken. For example, you could say “robots are smart” and impart a variety of meanings to it depending on whether you say it ironically or sarcastically or questioningly or while sobbing, as I often do. That’s prosody. You can imagine how this concept can extend to movements and gestures as well, and effective robot-to-human interaction will need to account for this.

Many of the robots in this performance are already well known, including Shimon, one of Gil Weinberg’s most creative performers. Here’s some additional background about how the performance came together:

What I find personally a little strange about all this is the idea of trust, because in some ways, it seems as though robots should be totally trustworthy because they can (in an ideal world) be totally predictable, right? Like, if a robot is programmed to do things X, Y, and Z in that sequence, you don’t have to trust that a robot will do Y after X in the same way that you’d have to trust a human to do so, because strictly speaking the robot has no choice. As robots get more complicated, though, and there’s more expectation that they’ll be able to interact with humans socially, that gap between what is technically predictable (or maybe, predictable after the fact) and what is predictable by the end user can get very, very wide, which is why a more abstract kind of trust becomes increasingly important. Music and dance may not be the way to make that happen for every robot out there, but it’s certainly a useful place to start.



Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators’ situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
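
One concrete element of the architecture described above is detecting task failures from anomalous sensor feedback despite multi-second round-trip delays. A minimal sketch of that idea, with an assumed force/torque residual and threshold (not the paper's implementation), might look like this:

```python
# Hedged sketch: detect task failures by comparing measured feedback against
# what the ground-side model predicts. Threshold and signal choice are assumed.
import numpy as np

def anomaly_detected(predicted_wrench, measured_wrench, threshold=5.0):
    """Return True if the force/torque residual suggests the task is not
    proceeding as the model expects (e.g., a slipped grasp or a jammed tool)."""
    residual = np.linalg.norm(np.asarray(measured_wrench) - np.asarray(predicted_wrench))
    return residual > threshold
```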

A frequent concern for robot manipulators deployed in dangerous and hazardous environments for humans is the reliability of task executions in the event of a joint failure. A redundant robotic manipulator can be used to mitigate the risk and guarantee a post-failure task completion, which is critical for instance for space applications. This paper describes methods to analyze potential risks due to a joint failure, and introduces tools for fault-tolerant task design and path planning for robotic manipulators. The presented methods are based on off-line precomputed workspace models. The methods are general enough to cope with robots with any type of joint (revolute or prismatic) and any number of degrees of freedom, and might include arbitrarily shaped obstacles in the process, without resorting to simplified models. Application examples illustrate the potential of the approach.
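
To make the fault-tolerance question concrete, here is a minimal sketch, assuming a reachability lookup that stands in for the paper's precomputed off-line workspace models: it checks whether every task waypoint stays reachable under any single locked-joint failure.

```python
# Illustrative sketch: given a reachability lookup (standing in for precomputed
# workspace models), check whether every task waypoint remains reachable under
# any single locked-joint failure. The interface below is assumed.

def task_is_fault_tolerant(waypoints, n_joints, locked_samples, reachable):
    """reachable(joint, locked_value, point) -> bool is an assumed lookup into
    the off-line workspace model for the arm with `joint` locked at `locked_value`."""
    for joint in range(n_joints):
        for locked_value in locked_samples[joint]:          # candidate failure angles
            if not all(reachable(joint, locked_value, p) for p in waypoints):
                return False                                 # this failure breaks the task
    return True
```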

Creativity, in one sense, can be seen as an effort or action to bring novelty. Following this, we explore how a robot can be creative by bringing novelty in a human–robot interaction (HRI) scenario. Studies suggest that proactivity is closely linked with creativity. Proactivity can be defined as acting or interacting by anticipating future needs or actions. This study aims to explore the effect of proactive behavior and the relation of such behaviors to the two aspects of creativity: 1) the perceived creativity observed by the user in the robot’s proactive behavior and 2) creativity of the user by assessing how creativity in HRI can be shaped or influenced by proactivity. We do so by conducting an experimental study, where the robot tries to support the user on the completion of the task regardless of the end result being novel or not and does so by exhibiting anticipatory proactive behaviors. In our study, the robot instantiates a set of verbal communications as proactive robot behavior. To our knowledge, the study is among the first to establish and investigate the relationship between creativity and proactivity in the HRI context, based on user studies. The initial results have indicated a relationship between observed proactivity, creativity, and task achievement. It also provides valuable pointers for further investigation in this domain.

During communication, humans express their emotional states using various modalities (e.g., facial expressions and gestures), and they estimate the emotional states of others by paying attention to multimodal signals. To ensure that a communication robot with limited resources can pay attention to such multimodal signals, the main challenge involves selecting the most effective modalities among those expressed. In this study, we propose an active perception method that involves selecting the most informative modalities using a criterion based on energy minimization. This energy-based model can learn the probability of the network state using energy values, whereby a lower energy value represents a higher probability of the state. A multimodal deep belief network, which is an energy-based model, was employed to represent the relationships between the emotional states and multimodal sensory signals. Compared to other active perception methods, the proposed approach demonstrated improved accuracy using limited information in several contexts associated with affective human–robot interaction. We present the differences and advantages of our method compared to other methods through mathematical formulations using, for example, information gain as a criterion. Further, we evaluate performance of our method, as pertains to active inference, which is based on the free energy principle. Consequently, we establish that our method demonstrated superior performance in tasks associated with mutually correlated multimodal information.
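
The selection criterion described above can be sketched very simply: among the modalities not yet attended to, pick the one whose inclusion yields the lowest energy under the energy-based model, i.e., the most probable joint network state. The function and variable names below are assumptions, not the authors' code.

```python
# Minimal sketch of energy-based modality selection. energy_fn stands in for the
# multimodal deep belief network's energy function; lower energy = more probable.

def select_next_modality(candidates, attended, energy_fn):
    best, best_energy = None, float("inf")
    for m in candidates:
        if m in attended:
            continue
        e = energy_fn(attended | {m})     # hypothetical: energy if m were also attended
        if e < best_energy:
            best, best_energy = m, e
    return best

# e.g. select_next_modality({"face", "voice", "gesture"}, attended={"voice"}, energy_fn=dbn_energy)
# where dbn_energy is the (hypothetical) trained model's energy function.
```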

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest of any group surveyed on, and rated the desired level of protection for sentient AI as lower than all groups other than corporations. We further investigated and observed political differences in responses; liberals were more likely to endorse legal protection and personhood for sentient AI than conservatives. Taken together, these results suggest that laypeople are not by-and-large in favor of granting legal protection to AI, and that the ordinary conception of legal status, similar to codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous literature regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration, as well.

Plants have evolved different mechanisms to disperse from parent plants and improve germination to sustain their survival. The study of seed dispersal mechanisms, with the related structural and functional characteristics, is an active research topic for ecology, plant diversity, climate change, as well as for its relevance for material science and engineering. The natural mechanisms of seed dispersal show a rich source of robust, highly adaptive, mass and energy efficient mechanisms for optimized passive flying, landing, crawling and drilling. The secret of seeds mobility is embodied in the structural features and anatomical characteristics of their tissues, which are designed to be selectively responsive to changes in the environmental conditions, and which make seeds one of the most fascinating examples of morphological computation in Nature. Particularly clever for their spatial mobility performance, are those seeds that use their morphology and structural characteristics to be carried by the wind and dispersed over great distances (i.e. “winged” and “parachute” seeds), and seeds able to move and penetrate in soil with a self-burial mechanism driven by their hygromorphic properties and morphological features. By looking at their motion mechanisms, new design principles can be extracted and used as inspiration for smart artificial systems endowed with embodied intelligence. This mini-review systematically collects, for the first time together, the morphological, structural, biomechanical and aerodynamic information from selected plant seeds relevant to take inspiration for engineering design of soft robots, and discusses potential future developments in the field across material science, plant biology, robotics and embodied intelligence.

Robots for minimally invasive surgery introduce many advantages, but still require the surgeon to alternatively control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted by 10 urologists. They carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. The usability of the system was tested by asking participants to fill standard system usability scales. Moreover, the effectiveness of the method was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score overcame the threshold usually identified as the limit to assess good usability (average score = 73.25 > 68). The average total procedure time with the autonomous endoscope navigation was comparable with the classic control (p = 0.85 > 0.05), yet it significantly reduced the time out of the field of view (p = 0.022 < 0.05). Based on our findings, the autonomous endoscope improves the usability of the surgical system, and it has the potential to be an additional and customizable tool for the surgeon that can always take control of the endoscope or leave it to move autonomously.

Dielectric elastomer actuators (DEAs) are a promising actuator technology for soft robotics. As a configuration of this technology, stacked DEAs afford a muscle-like contraction that is useful to build soft robotic systems. In stacked DEAs, dielectric and electrode layers are alternately stacked. Thus, often a dedicated setup with complicated processes or sometimes laborious manual stacking of the layers is required to fabricate stacked actuators. In this study, we propose a method to monolithically fabricate stacked DEAs without alternately stacking the dielectric and electrode layers. In this method, the actuators are fabricated mainly through two steps: 1) molding of an elastomeric matrix containing free-form microfluidic channels and 2) injection of a liquid conductive material that acts as an electrode. The feasibility of our method is investigated via the fabrication and characterization of simple monolithic DEAs with multiple electrodes (2, 4, and 10). The fabricated actuators are characterized in terms of actuation stroke, output force, and frequency response. In the actuators, polydimethylsiloxane (PDMS) and eutectic gallium–indium (EGaIn) are used for the elastomeric matrix and electrode material, respectively. Microfluidic channels are realized by dissolving a three-dimensional printed part suspended in the elastomeric structure. The experimental results show the successful implementation of the proposed method and the good agreement between the measured data and theoretical predication, validating the feasibility of the proposed method.

Stationary motorized cycling assisted by functional electrical stimulation (FES) is a popular therapy for people with movement impairments. Maximizing volitional contributions from the rider of the cycle can lead to long-term benefits like increased muscular strength and cardiovascular endurance. This paper develops a combined motor and FES control system that tasks the rider with maintaining their cadence near a target point using their own volition, while assistance or resistance is applied gradually as their cadence approaches the lower or upper boundary, respectively, of a user-defined safe range. Safety-ensuring barrier functions are used to guarantee that the rider’s cadence is constrained to the safe range, while minimal assistance is provided within the range to maximize effort by the rider. FES stimulation is applied before electric motor assistance to further increase power output from the rider. To account for uncertain dynamics, barrier function methods are combined with robust control tools from Lyapunov theory to develop controllers that guarantee safety in the worst-case. Because of the intermittent nature of FES stimulation, the closed-loop system is modeled as a hybrid system to certify that the set of states for which the cadence is in the safe range is asymptotically stable. The performance of the developed control method is demonstrated experimentally on five participants. The barrier function controller constrained the riders’ cadence in a range of 50 ± 5 RPM with an average cadence standard deviation of 1.4 RPM for a protocol where cadence with minimal variance was prioritized and used minimal assistance from the motor (4.1% of trial duration) in a separate protocol where power output from the rider was prioritized.
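
As a rough sketch of the control idea (not the authors' controller), the motor torque can stay near zero around the 50 RPM target and ramp up as the cadence approaches the edges of the 50 ± 5 RPM safe range, in the spirit of a barrier function. In the paper, FES is recruited before the motor; this simplified sketch omits that allocation, and the gains and margins below are assumed values.

```python
# Simplified, hypothetical sketch: torque stays near zero around the target and
# grows like a reciprocal barrier term as cadence nears the 50 +/- 5 RPM limits.

def assistive_torque(cadence_rpm, lower=45.0, upper=55.0, gain=1.0, margin=2.0):
    """Positive torque assists (cadence too low), negative resists (too high)."""
    if cadence_rpm <= lower or cadence_rpm >= upper:
        raise ValueError("cadence left the certified safe range")
    if cadence_rpm - lower < margin:                  # approaching the lower boundary
        return gain * (1.0 / (cadence_rpm - lower) - 1.0 / margin)
    if upper - cadence_rpm < margin:                  # approaching the upper boundary
        return -gain * (1.0 / (upper - cadence_rpm) - 1.0 / margin)
    return 0.0                                        # inside the range: rider volition only
```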

This article discusses the creative and technical approaches in a performative robot project called “Embodied Musicking Robots” (2018–present). The core approach of this project is human-centered AI (HC-AI) which focuses on the design, development, and deployment of intelligent systems that cooperate with humans in real time in a “deep and meaningful way.”1 This project applies this goal as a central philosophy from which the concepts of creative AI and experiential learning are developed. At the center of this discussion is the articulation of a shift in thinking of what constitutes creative AI and new HC-AI forms of computational learning from inside the flow of the shared experience between robots and humans. The central case study (EMRv1) investigates the technical solutions and artistic potential of AI-driven robots co-creating with an improvising human musician (the author) in real time. This project is ongoing, currently at v4, with limited conclusions; other than this, the approach can be felt to be cooperative but requires further investigation.

Biodegradability is an important property for soft robots that makes them environmentally friendly. Many biodegradable materials have natural origins, and creating robots using these materials ensures sustainability. Hence, researchers have fabricated biodegradable soft actuators of various materials. During microbial degradation, the mechanical properties of biodegradable materials change; these cause changes in the behaviors of the actuators depending on the progression of degradation, where the outputs do not always remain the same against identical inputs. Therefore, to achieve appropriate operation with biodegradable soft actuators and robots, it is necessary to reflect the changes in the material properties in their design and control. However, there is a lack of insight on how biodegradable actuators change their actuation characteristics and how to identify them. In this study, we build and validate a framework that clarifies changes in the mechanical properties of biodegradable materials; further, it allows prediction of the actuation characteristics of degraded soft actuators through simulations incorporating the properties of the materials as functions of the degradation rates. As a biodegradable material, we use a mixture of gelatin and glycerol, which is fabricated in the form of a pneumatic soft actuator. The experimental results show that the actuation performance of the physical actuator reduces with the progression of biodegradation. The experimental data and simulations are in good agreement (R2 value up to 0.997), thus illustrating the applicability of our framework for designing and controlling biodegradable soft actuators and robots.
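
The framework's core idea, expressing material properties as functions of the degradation rate and propagating them through an actuation model, can be sketched as follows. The linear property law and the toy pneumatic model are assumptions for illustration, not the fitted functions reported in the paper.

```python
# Toy illustration: a material property as a function of degradation rate is fed
# into an actuation model, so predicted strokes can be compared with measurements
# as degradation progresses. The specific laws below are assumed, not fitted.

def elastic_modulus(degradation_rate, E0=0.5e6, k=2.0):
    """Assumed property law (Pa); degradation_rate in [0, 1]."""
    return E0 * (1.0 + k * degradation_rate)

def predicted_stroke(pressure_pa, degradation_rate, geometry_factor=1e-3):
    """Toy pneumatic-actuator model: stroke scales with pressure over stiffness."""
    return geometry_factor * pressure_pa / elastic_modulus(degradation_rate)

# Sweeping degradation_rate from 0 to 1 shows how a single property-vs-degradation
# function changes the predicted response to the same input pressure.
strokes = [predicted_stroke(20e3, r / 10) for r in range(11)]
```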



Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, "Everyday Robots." There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future.

That close to the arm seems like a really bad place to put an E-Stop, right?

Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is "everyday robots are (slowly) leaving the lab." It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years.

The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create "a general-purpose learning robot," which sounds moonshot-y enough I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo:

We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

That's a lot of robots, which is awesome, but I have to question what "autonomously" actually means along with what "a range of useful tasks" actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do "useful" is appropriate.

If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.

Introducing Everyday Robots

Everyday Robots

0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?

0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.

1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.

1:35 — This is a red flag for me because we've heard this "it's a platform" thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, "platform" typically means "getting it to do (commercially?) useful stuff is someone else's problem," and I'm not sure that's ever been a successful model for robots.

2:10 — Yeah, okay. This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?

2:30 — I am a big fan of Moravec's Paradox and I wish it would get brought up more when people talk to the public about robots.

The challenge of everyday

Everyday Robots

0:18 — I like the door example, because you can easily imagine how many different ways it can go that would be catastrophic for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.

1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.

1:40 — People really like throwing shade at Boston Dynamics, don't they? But this doesn't seem fair to me, especially for a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, and not be a little miffed because what you're doing is, like, less flashy or whatever.

A robot that learns

Everyday Robots

0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: "low cost" for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.

1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.

1:35 — I'd personally steer clear of using words like "intelligently" when talking about robots trained using reinforcement learning techniques, because most people associate "intelligence" with the kind of fundamental world understanding that robots really do not have.

Training the first task

Everyday Robots

1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just have your system there, where the robots could be stationary and have some control over their environment and do a much better job much more efficiently.

1:15 — Hopefully they'll talk more about this later, but when watching this montage, it's important to ask which of these tasks you would actually want a mobile manipulator to be doing in the real world, and which you would just want automated somehow, because those are very different things.

Building with everyone

Everyday Robots

0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.

1:28 — Robots like these are not going to steal your job. I promise.

2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near- to medium term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.

3:16 — Again, that platform idea...blarg. The whole "someone has written those applications" thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.

Everyday Robots

I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an "Everyday Robot" that we're being asked to buy into is necessarily a realistic one.

The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general purpose mobile manipulators into everyday environments in a way where people (outside of the Google Campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal and does it make sense?

The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that.

But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision—to work on something hard that won't pay off for a long time. And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake.
