Feed aggregator



It’s hard to beat the energy density of chemical fuels. Batteries are quiet and clean and easy to integrate with electrically powered robots, but they’re 20 to 50 times less energy dense than a chemical fuel like methanol or butane. This is fine for most robots that can afford to just carry around a whole bunch of batteries, but as you start looking at robots that are insect-size or smaller, batteries simply don’t scale down very well. And it’s not just the batteries—electric actuators don’t scale down well either, especially if you’re looking for something that can generate a lot of power.
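
As a rough sanity check on that ratio (back-of-envelope figures of my own, not from the researchers): methanol’s heating value is on the order of 20 megajoules per kilogram, while lithium-ion packs typically store around 0.5 to 0.9 MJ/kg.

```python
# Back-of-envelope comparison of specific energy (approximate, illustrative figures).
methanol_mj_per_kg = 20.0            # lower heating value of methanol, roughly 20 MJ/kg
li_ion_mj_per_kg = (0.5, 0.9)        # typical lithium-ion packs, roughly 140-250 Wh/kg

for battery in li_ion_mj_per_kg:
    print(f"methanol vs. Li-ion: ~{methanol_mj_per_kg / battery:.0f}x")
# Prints roughly 40x and 22x -- consistent with the 20-to-50x range above, and that's
# before accounting for how inefficiently a tiny combustion engine converts fuel to work.
```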

In a paper published 14 September in the journal Science, researchers from Cornell have tackled the small-scale actuation problem with what is essentially a very tiny, very soft internal-combustion engine. Methanol vapor and oxygen are injected into a soft combustion chamber, where an itty-bitty li’l spark ignites the mixture. In half a millisecond, the top of the chamber balloons upward like a piston, generating forces of 9.5 newtons through a cycle that can repeat 100 times every second. Put two of these actuators together (driving two legs apiece) and you’ve got an exceptionally powerful soft quadruped robot.

Each of the two actuators powering this robot weighs just 325 milligrams and is about a quarter of the size of a U.S. penny. Part of the reason that they can be so small is that most of the associated components are off-board, including the fuel itself, the system that mixes and delivers the fuel, and the electrical source for the spark generator. But even without all of that stuff, the actuator has a bunch going on that enables it to operate continuously at high cycle frequencies without melting.

A view of the actuator and its component materials, along with a diagram of the combustion actuation cycle. Credit: Science

The biggest issue may be that this actuator has to handle actual explosions, meaning that careful design is required to make sure that it doesn’t torch itself every time it goes off. The small combustion volume helps with this, as does the flame-resistant elastomer material and the integrated flame arrestor. Despite the violence inherent to how this actuator works, it’s actually very durable, and the researchers estimate that it can operate continuously for more than 750,000 cycles (8.5 hours at 50 hertz) without any drop in performance.

“What is interesting is just how powerful small-scale combustion is,” says Robert F. Shepherd, who runs the Organic Robotics Lab at Cornell. We covered some of Shepherd’s work on combustion-powered robots nearly a decade ago, with this weird pink jumping thing at IROS 2014. But going small has both challenges and benefits, Shepherd tells us. “We operate in the lower limit of what volumes of gases are combustible. It’s an interesting place for science, and the engineering outcomes are also useful.”

The first of those engineering outcomes is a little insect-scale quadrupedal robot that utilizes two of these soft combustion actuators to power a pair of legs each. The robot is 29 millimeters long and weighs just 1.6 grams, but it can jump a staggering 59 centimeters straight up and walk while carrying 22 times its own weight. For an insect-scale robot, Shepherd says, this is “near insect level performance, jumping extremely high, very quickly, and carrying large loads.”

Cornell University
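
For a sense of scale, here’s a back-of-envelope estimate of the mechanical energy behind that jump (my own numbers, assuming a simple ballistic hop with no air drag):

```python
# Rough mechanics of the reported jump (illustrative estimate, not from the paper).
mass_kg = 1.6e-3          # 1.6-gram robot
height_m = 0.59           # 59-centimeter vertical jump
g = 9.81                  # gravitational acceleration, m/s^2

jump_energy = mass_kg * g * height_m           # potential energy at the apex
takeoff_speed = (2 * g * height_m) ** 0.5      # speed needed to reach that height

print(f"jump energy ~ {jump_energy * 1e3:.1f} mJ")    # about 9.3 mJ
print(f"takeoff speed ~ {takeoff_speed:.1f} m/s")     # about 3.4 m/s
```

Even a few millijoules, released through millisecond-scale combustion strokes, works out to watts of instantaneous power from a 1.6-gram machine.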

It’s a little bit hard to see how the quadruped actually walks, since the actuators move so fast. Each actuator controls one side of the robot, with its combustion chamber connected to an elastomer-membrane chamber at each foot. An advantage of this actuation system is that because the power source is gas pressure, that pressure can be delivered somewhere other than the combustion chamber itself. Firing both actuators together moves the robot forward, while firing only one side or the other rotates the robot, providing some directional control.
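
To make the steering scheme concrete, here is a minimal sketch of how such differential firing might be sequenced (entirely hypothetical code; the valve-and-spark interface and function names are mine, not the researchers’):

```python
import time

def fire(side: str) -> None:
    """Placeholder for triggering one fuel-injection + spark event on the 'left' or 'right' actuator."""
    print(f"spark -> {side} combustion chamber")

def gait_cycle(command: str, frequency_hz: float = 50.0) -> None:
    """One cycle: fire both sides to go straight, only one side to yaw the body."""
    if command == "forward":
        fire("left"); fire("right")
    elif command == "turn_left":
        fire("right")        # thrust on the right side only rotates the robot left
    elif command == "turn_right":
        fire("left")
    time.sleep(1.0 / frequency_hz)

for _ in range(50):          # walk forward for about a second at 50 Hz
    gait_cycle("forward")
for _ in range(10):          # then yaw to the left
    gait_cycle("turn_left")
```

The real controller is presumably far subtler, but the point stands: steering falls out of choosing which chamber to ignite.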

“It took a lot of care, iterations, and intelligence to come up with this steerable, insect-scale robot,” Shepherd told us. “Does it have to have legs? No. It could be a speedy slug, or a flapping bee. The amplitudes and frequencies possible with this system allow for all of these possibilities. In fact, the real issue we have is making things move slowly.”

Getting these actuators to slow down a bit is one of the things that the researchers are looking at next. By trading speed for force, the idea is to make robots that can walk as well as run and jump. And of course finding a way to untether these systems is a natural next step. Some of the other stuff that they’re thinking about is pretty wild, as Shepherd tells us: “One idea we want to explore in the future is using aggregates of these small and powerful actuators as large, variable recruitment musculature in large robots. Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots.” Personally, I’m having trouble even picturing a robot like that, but that’s what’s exciting about it, right? A large robot with muscles powered by thousands of tiny explosions—wow.

Powerful, soft combustion actuators for insect-scale robots, by Cameron A. Aubin, Ronald H. Heisser, Ofek Peretz, Julia Timko, Jacqueline Lo, E. Farrell Helbling, Sadaf Sobhani, Amir D. Gat, and Robert F. Shepherd from Cornell, is published in Science.



This sponsored article is brought to you by NYU Tandon School of Engineering.

To address today’s health challenges, especially in our aging society, we must become more intelligent in our approaches. Clinicians now have access to a range of advanced technologies designed to assist early diagnosis, evaluate prognosis, and enhance patient health outcomes, including telemedicine, medical robots, powered prosthetics, exoskeletons, and AI-powered smart wearables. However, many of these technologies are still in their infancy.

The belief that advancing technology can improve human health is central to research related to medical device technologies. This forms the heart of the research of Prof. S. Farokh Atashzar, who directs the Medical Robotics and Interactive Intelligent Technologies (MERIIT) Lab at the NYU Tandon School of Engineering.

Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at NYU Tandon. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life.

Atashzar’s work is dedicated to developing intelligent, interactive robotic, and AI-driven assistive machines that can augment human sensorimotor capabilities and allow our healthcare system to go beyond natural competences and overcome physiological and pathological barriers.

Stroke detection and rehabilitation

Stroke is the leading cause of age-related motor disabilities and is becoming more prevalent in younger populations as well. But while there is a burgeoning marketplace for rehabilitation devices that claim to accelerate recovery, including robotic rehabilitation systems, recommendations for how and when to use them are based mostly on subjective evaluation of the sensorimotor capacities of patients in need.

Atashzar is working in collaboration with John-Ross Rizzo, associate professor of Biomedical Engineering at NYU Tandon and Ilse Melamid Associate Professor of rehabilitation medicine at the NYU School of Medicine, and with Dr. Ramin Bighamian from the U.S. Food and Drug Administration to design a regulatory science tool (RST) based on biomarker data, in order to improve the review process for such devices and guide how best to use them. The team is designing and validating a robust recovery biomarker enabling a first-ever stroke rehabilitation RST based on exchanges between regions of the central and peripheral nervous systems.

S. Farokh Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at New York University Tandon School of Engineering. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life, and directs the MERIIT Lab at NYU Tandon. Credit: NYU Tandon

In addition, Atashzar is collaborating with Smita Rao, PT, the inaugural Robert S. Salant Endowed Associate Professor of Physical Therapy. Together, they aim to identify AI-driven computational biomarkers for motor control and musculoskeletal damage and to decode the hidden complex synergistic patterns of degraded muscle activation using data collected from surface electromyography (sEMG) and high-density sEMG. In the past few years, this collaborative effort has been exploring the fascinating world of “Nonlinear Functional Muscle Networks” — a new computational window (rooted in Shannon’s information theory) into human motor control and mobility. This synergistic network orchestrates the “music of mobility,” harmonizing the synchrony between muscles to facilitate fluid movement.
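
To give a flavor of what an information-theoretic muscle network can look like in practice, here is a minimal sketch under my own assumptions (this is not the team’s actual pipeline): estimate pairwise mutual information between sEMG envelopes and treat the result as a weighted graph whose edges reflect how strongly muscles are coordinated.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information (in bits) between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    return mutual_info_score(None, None, contingency=joint) / np.log(2)

def muscle_network(envelopes):
    """envelopes: (n_muscles, n_samples) array of rectified, smoothed sEMG.
    Returns a symmetric adjacency matrix of pairwise MI weights."""
    n = envelopes.shape[0]
    adjacency = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            adjacency[i, j] = adjacency[j, i] = mutual_information(envelopes[i], envelopes[j])
    return adjacency

# Toy example: four "muscles," five seconds of data at 1 kHz, with two channels coupled.
rng = np.random.default_rng(0)
emg = rng.standard_normal((4, 5000))
emg[1] = 0.7 * emg[0] + 0.3 * emg[1]
print(muscle_network(emg).round(2))       # the (0, 1) entry stands out from the rest
```

Graph measures computed on that adjacency matrix (degree, clustering, and so on) are then candidates for the kind of computational biomarkers described above.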

But rehabilitation is only one of the research thrusts at the MERIIT lab. If you can prevent strokes from happening, or from recurring, you can head off the problem before it starts. For Atashzar, a big clue could be where you least expect it: in your retina.

Atashzar, along with NYU Abu Dhabi Assistant Professor Farah Shamout, is working on a project they call “EyeScore,” an AI-powered technology that uses non-invasive scans of the retina to predict the recurrence of stroke in patients. They use optical coherence tomography — a scan of the back of the retina — and track changes over time using advanced deep learning models. The retina, attached directly to the brain through the optic nerve, can serve as a physiological window into changes in the brain itself.

Atashzar and Shamout are currently formulating their hybrid AI model, pinpointing the exact changes that can predict a stroke or its recurrence. The resulting tool will be able to analyze these images and flag potentially troublesome developments. And since the scans are already in use in optometrists’ offices, this life-saving technology could be in the hands of medical professionals sooner than expected.

Preventing downturns

Atashzar is utilizing AI algorithms for uses beyond stroke. Like many researchers, his gaze was drawn to the largest medical event in recent history: COVID-19. In the throes of the COVID-19 pandemic, the very bedrock of global healthcare delivery was shaken. COVID-19 patients, susceptible to swift and severe deterioration, presented a serious problem for caregivers.

Especially in the pandemic’s early days, when our grasp of the virus was tenuous at best, predicting patient outcomes posed a formidable challenge. The merest tweaks in admission protocols held the power to dramatically shift patient fates, underscoring the need for vigilant monitoring. As healthcare systems groaned under the pandemic’s weight and contagion fears loomed, outpatient and nursing center residents were steered toward remote symptom tracking via telemedicine. This cautious approach sought to spare them unnecessary hospital exposure, allowing in-person visits only for those in the throes of grave symptoms.

But while much of the pandemic’s research spotlight fell on diagnosing COVID-19, this study took a different avenue: predicting patient deterioration in the future. Existing studies often juggled an array of data inputs, from complex imaging to lab results, but failed to harness data’s temporal aspects. Enter this research, which prioritized simplicity and scalability, leaning on data easily gathered not only within medical walls but also in the comfort of patients’ homes with the use of simple wearables.

S. Farokh Atashzar and colleagues at NYU Tandon are using deep neural network models to assess COVID data and try to predict patient deterioration in the future.

Atashzar, along with his co-PI on the project, Yao Wang, Professor of Biomedical Engineering and Electrical and Computer Engineering at NYU Tandon, used a novel deep neural network model to assess COVID data, leveraging time-series data on just three vital signs to foresee COVID-19 patient deterioration for some 37,000 patients. The ultimate prize? A streamlined predictive model capable of aiding clinical decision-making for a wide spectrum of patients. Oxygen levels, heart rate, and temperature formed the trio of vital signs under scrutiny, a choice propelled by the ubiquity of wearable tech like smartwatches. Certain signs, like blood pressure, were deliberately excluded because these wearables cannot measure them.

The researchers used real-world data from NYU Langone Health’s archives spanning January 2020 to September 2022. Predicting deterioration within timeframes of 3 to 24 hours, the model analyzed vital-sign data from the preceding 24 hours. This crystal ball aimed to forecast outcomes ranging from in-hospital mortality to intensive care unit admissions or intubations.
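
As a loose illustration of the kind of model this describes (my own toy architecture and tensor shapes, not the published network), a small recurrent net can ingest a day of three vital signs and emit a deterioration probability:

```python
import torch
import torch.nn as nn

class DeteriorationPredictor(nn.Module):
    """Toy model: 24 hourly samples of (oxygen level, heart rate, temperature) -> risk score."""
    def __init__(self, n_vitals: int = 3, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_vitals, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, vitals):                            # vitals: (batch, 24, 3)
        _, last_hidden = self.rnn(vitals)
        return torch.sigmoid(self.head(last_hidden[-1]))  # probability of deterioration

model = DeteriorationPredictor()
batch = torch.randn(8, 24, 3)                             # eight synthetic patients
print(model(batch).shape)                                 # torch.Size([8, 1])
```

In practice, separate output heads or separately trained models could cover each prediction horizon from 3 to 24 hours.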

“In a situation where a hospital is overloaded, getting a CT scan for every single patient would be very difficult or impossible, especially in remote areas when the healthcare system is overstretched,” says Atashzar. “So we are minimizing the need for data, while at the same time, maximizing the accuracy for prediction. And that can help with creating better healthcare access in remote areas and in areas with limited healthcare.”

In addition to addressing the pandemic at the micro level (individuals), Atashzar and his team are also working on algorithmic solutions that can assist the healthcare system at the meso and macro level. In another effort related to COVID-19, Atashzar and his team are developing novel probabilistic models that can better predict the spread of disease when taking into account the effects of vaccination and mutation of the virus. Their efforts go beyond the classic small-scale models that were previously used for small epidemics. They are working on these large-scale complex models in order to help governments better prepare for pandemics and mitigate rapid disease spread. Atashzar is drawing inspiration from his active work with control algorithms used in complex networks of robotic systems. His team is now utilizing similar techniques to develop new algorithmic tools for controlling spread in the networked dynamic models of human society.
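
For readers unfamiliar with compartmental epidemic models, here is a deliberately simple, deterministic sketch of the structure such models build on (the team’s models are probabilistic and far richer, accounting for mutation and network effects; this toy only shows where a vaccination term plugs in):

```python
def seirv_step(state, beta=0.3, sigma=0.2, gamma=0.1, nu=0.005, dt=1.0):
    """One Euler step of a toy SEIR model with a constant vaccination rate nu.
    state = (S, E, I, R) as fractions of the population."""
    S, E, I, R = state
    newly_exposed = beta * S * I * dt        # susceptible people who catch the virus
    newly_infectious = sigma * E * dt        # exposed people who become infectious
    newly_recovered = gamma * I * dt         # infectious people who recover
    vaccinated = nu * S * dt                 # susceptible people removed by vaccination
    return (S - newly_exposed - vaccinated,
            E + newly_exposed - newly_infectious,
            I + newly_infectious - newly_recovered,
            R + newly_recovered + vaccinated)

state = (0.99, 0.0, 0.01, 0.0)               # start with 1 percent of people infectious
for _ in range(180):                          # simulate 180 days
    state = seirv_step(state)
print(f"removed (recovered or vaccinated) after 180 days: {state[3]:.2f}")
```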

A state-of-the-art human-machine interface module with wearable controller is one of many multi-modal technologies tested in S. Farokh Atashzar’s MERIIT Lab at NYU Tandon. Credit: NYU Tandon

Where minds meet machines

These projects represent only a fraction of Atashzar’s work. In the MERIIT lab, he and his students build cyber-physical systems that augment the functionality of next-generation medical robotic systems. They delve into haptics and robotics for a wide range of medical applications. Examples include telesurgery and telerobotic rehabilitation, which are built upon the capabilities of next-generation telecommunications. The team is specifically interested in the application of the 5G-based tactile internet in medical robotics.

Recently, he received a donation from the Intuitive Foundation: a Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another—whether they are in a different city, region, or even continent. While several researchers have investigated this vision in the past decade, Atashzar is specifically concentrating on connecting the power of the surgeon’s mind with the autonomy of surgical robots, promoting discussions on ways to share surgical autonomy between the intelligence of machines and the minds of surgeons. This approach aims to reduce mental fatigue and cognitive load on surgeons while reintroducing the sense of haptics lost in traditional surgical robotic systems.

Atashzar poses with NYU Tandon’s Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another—whether they are in a different city, region, or even continent. Credit: NYU Tandon

In a related line of research, the MERIIT lab is also focusing on cutting-edge human-machine interface technologies that enable neuro-to-device capabilities. These technologies have direct applications in exoskeletal devices, next-generation prosthetics, rehabilitation robots, and possibly the upcoming wave of augmented reality systems in our smart and connected society. One significant challenge common to such systems, and a focus of the team, is predicting the intended actions of human users by processing the signals generated by the functional behavior of motor neurons.

By solving this challenge using advanced AI modules in real time, the team can decode a user’s motor intentions and predict the intended gestures for controlling robots and virtual reality systems in an agile and robust manner. Some practical challenges include ensuring the generalizability, scalability, and robustness of these AI-driven solutions, given the variability of human neurophysiology and the heavy reliance of classic models on data. Powered by such predictive models, the team is advancing the complex control of human-centric machines and robots. They are also crafting algorithms that take into account human physiology and biomechanics. This requires transdisciplinary solutions that bridge AI and nonlinear control theory.
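
As a rough sketch of what windowed, real-time intent decoding can look like (illustrative only; the features, classifier, and window length here are my assumptions, not the MERIIT lab’s methods):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW = 200   # samples per sliding window, e.g. 200 ms of sEMG at 1 kHz

def features(window):
    """Simple time-domain sEMG features per channel: mean absolute value and zero crossings."""
    mav = np.mean(np.abs(window), axis=1)
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)
    return np.concatenate([mav, zc])

# Toy training data: two channels, two fake "gestures" that differ in activation level.
rng = np.random.default_rng(1)
rest = [features(0.1 * rng.standard_normal((2, WINDOW))) for _ in range(50)]
grip = [features(1.0 * rng.standard_normal((2, WINDOW))) for _ in range(50)]
clf = LogisticRegression(max_iter=1000).fit(rest + grip, [0] * 50 + [1] * 50)

# "Real-time" loop: classify each new window as it arrives, with 50 percent overlap.
stream = np.hstack([0.1 * rng.standard_normal((2, 1000)), rng.standard_normal((2, 1000))])
for start in range(0, stream.shape[1] - WINDOW, WINDOW // 2):
    intent = clf.predict([features(stream[:, start:start + WINDOW])])[0]
    # 'intent' would be forwarded to a prosthesis or robot controller here.
```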

Atashzar’s work dovetails perfectly with the work of other researchers at NYU Tandon, which prizes interdisciplinary work without the silos of traditional departments.

“Dr. Atashzar shines brightly in the realm of haptics for telerobotic medical procedures, positioning him as a rising star in his research community,” says Katsuo Kurabayashi, the new chair of the Mechanical and Aerospace Engineering department at NYU Tandon. “His pioneering research carries the exciting potential to revolutionize rehabilitation therapy, facilitate the diagnosis of neuromuscular diseases, and elevate the field of surgery. This holds the key to ushering in a new era of sophisticated remote human-machine interactions and leveraging machine learning-driven sensor signal interpretations.”

This commitment to human health, through the embrace of new advances in biosignals, robotics, and rehabilitation, is at the heart of Atashzar’s enduring work, and his unconventional approaches to age-old problems make him a perfect example of the approach to engineering embraced at NYU Tandon.




This paper focuses on the topic of “everyday life” as it is addressed in Human-Robot Interaction (HRI) research. It starts from the argument that while human daily life with social robots has been increasingly discussed and studied in HRI, the concept of everyday life lacks clarity or systematic analysis, and it plays only a secondary role in supporting the study of the key HRI topics. In order to help conceptualise everyday life as a research theme in HRI in its own right, we provide an overview of the Social Science and Humanities (SSH) perspectives on everyday life and lived experiences, particularly in sociology, and identify the key elements that may serve to further develop and empirically study such a concept in HRI. We propose new angles of analysis that may help better explore unique aspects of human engagement with social robots. We look at the everyday not just as a reality as we know it (i.e., the realm of the “ordinary”) but also as the future that we need to envision and strive to materialise (i.e., the transformation that will take place through the “extraordinary” that comes with social robots). Finally, we argue that HRI research would benefit not only from a systematic conceptualisation of contemporary everyday life with social robots but also from a critique of it. This is how HRI studies could play an important role in challenging current ways of understanding what makes different aspects of the human world “natural” and ultimately help bring about social change towards what we consider a “good life.”

Abdominal palpation is one of the basic but important physical examination methods used by physicians. Visual, auditory, and haptic feedback from the patients are known to be the main sources of feedback they use in diagnosis. However, learning to interpret this feedback and make accurate diagnoses requires several years of training. Many abdominal palpation training simulators have been proposed to date, but very limited attempts have been reported at integrating vocal pain expressions into physical abdominal palpation simulators. Here, we present a vocal pain expression augmentation for a robopatient. The proposed robopatient is capable of providing real-time facial and vocal pain expressions based on the exerted palpation force and position on the abdominal phantom of the robopatient. A pilot study is conducted to test the proposed system, and we show the potential of integrating vocal pain expressions into the robopatient. The platform has also been tested by two clinical experts with prior experience in abdominal palpation. Their evaluations of its functionality and suggestions for improvements are presented. We highlight the advantages of the proposed robopatient with real-time vocal and facial pain expressions as a controllable simulator platform for abdominal palpation training studies. Finally, we discuss the limitations of the proposed approach and suggest several future directions for improvements.

Introduction: Handwriting is a complex task that requires coordination of motor, sensory, cognitive, memory, and linguistic skills to master. The extent to which these processes are involved depends on the complexity of the handwriting task. Evaluating the difficulty of a handwriting task is a challenging problem, since it relies on the subjective judgment of experts.

Methods: In this paper, we propose a machine learning approach for evaluating the difficulty level of handwriting tasks. We propose two convolutional neural network (CNN) models for single- and multilabel classification, where single-label classification is based on the mean of the expert evaluations while multilabel classification predicts the distribution of the experts’ assessments. The models are trained with a dataset containing 117 spatio-temporal features from the stylus and hand kinematics, which are recorded for all letters of the Arabic alphabet.

Results: While the single- and multilabel classification models achieve decent accuracy (96% and 88%, respectively) using all features, the hand kinematics features do not significantly influence the performance of the models.

Discussion: The proposed models are capable of extracting meaningful features from the handwriting samples and predicting their difficulty levels accurately. The proposed approach has the potential to be used to personalize handwriting learning tools and provide automatic evaluation of the quality of handwriting.
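
To make the single- versus multilabel distinction from the Methods above concrete, here is a minimal sketch (my own toy architecture; the 117-feature input comes from the abstract, but the number of difficulty levels and the network layout are assumptions): a single-label head outputs one softmax distribution over difficulty levels, while a multilabel head outputs independent per-level probabilities that can be fit to the spread of expert ratings.

```python
import torch
import torch.nn as nn

N_FEATURES, N_LEVELS = 117, 5   # 117 kinematic features (from the paper); 5 levels is an assumption

class DifficultyNet(nn.Module):
    """Toy 1-D CNN over the feature vector; only the output head differs between settings."""
    def __init__(self, multilabel: bool):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, N_LEVELS)
        self.multilabel = multilabel

    def forward(self, x):                       # x: (batch, 117)
        h = self.conv(x.unsqueeze(1)).squeeze(-1)
        logits = self.head(h)
        # Single-label: one softmax distribution, trained against the mean expert rating.
        # Multilabel: independent sigmoids, trained against the distribution of expert ratings.
        return torch.sigmoid(logits) if self.multilabel else torch.softmax(logits, dim=-1)

print(DifficultyNet(multilabel=True)(torch.randn(4, N_FEATURES)).shape)   # torch.Size([4, 5])
```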

Introduction: Complicated diverticulitis is a common abdominal emergency that often requires a surgical intervention. The systematic review and meta-analysis below compare the benefits and harms of robotic vs. laparoscopic surgery in patients with complicated colonic diverticular disease.

Methods: The following databases were searched before 1 March 2023: Cochrane Library, PubMed, Embase, CINAHL, and ClinicalTrials.gov. The internal validity of the selected non-randomized studies was assessed using the ROBINS-I tool. The meta-analysis and trial sequential analysis were performed using RevMan 5.4 (Cochrane Collaboration, London, United Kingdom) and Copenhagen Trial Unit Trial Sequential Analysis (TSA) software (Copenhagen Trial Unit, Center for Clinical Intervention Research, Rigshospitalet, Copenhagen, Denmark), respectively.

Results: We found no relevant randomized controlled trials in the searched databases. Therefore, we analyzed 5 non-randomized studies with satisfactory internal validity and similar designs comprising a total of 442 patients (184 (41.6%) robotic and 258 (58.4%) laparoscopic interventions). The analysis revealed that robotic surgery for complicated diverticulitis (CD) took longer than laparoscopy (MD = 42 min; 95% CI: [-16, 101]). No statistically significant differences were detected between the groups regarding intraoperative blood loss (MD = -9 mL; 95% CI: [-26, 8]) and the rate of conversion to open surgery (2.17% or 4/184 for robotic surgery vs. 6.59% or 17/258 for laparoscopy; RR = 0.63; 95% CI: [0.10, 4.00]). The type of surgery did not affect the length of in-hospital stay (MD = 0.18; 95% CI: [-0.60, 0.97]) or the rate of postoperative complications (14.1% or 26/184 for robotic surgery vs. 19.8% or 51/258 for laparoscopy; RR = 0.81; 95% CI: [0.52, 1.26]). No deaths were reported in either group.

Discussion: The meta-analysis suggests that robotic surgery is an appropriate option for managing complicated diverticulitis. It is associated with a trend toward a lower rate of conversion to open surgery and fewer postoperative complications; however, this trend does not reach statistical significance. Since no high-quality RCTs were available, this meta-analysis cannot provide a reliable conclusion; it mainly highlights a remarkable lack of proper evidence supporting robotic technology. Further evidence-based trials are needed.



Today, iRobot is announcing the newest, fanciest, and most expensive Roomba yet. The Roomba Combo j9+ trades a dock for what can only be described as a small indoor robot garage, which includes a robot-emptying vacuum system that can hold two months of dry debris along with a water reservoir that can provide up to 30 days of clean water to refill the robot’s mopping tank. Like all of iRobot’s new flagship products, the Combo j9+ is very expensive at just under US $1,400. But if nothing else, it shows us where iRobot is headed—toward a single home robot that can do everything without you having to even think about it. Almost.

The j9+ (I’m going to stop saying “Combo” every time, but that’s the one I’m talking about) is essentially an upgraded version of the j7+, which was introduced a year ago. It’s a Roomba vacuum that includes an integrated tank for clean water, and on hard floors, the robot can rotate a fabric mopping pad from on top of its head to under its butt to mop up water that it squirts onto the floor underneath itself. On carpet, the mopping pad gets rotated back up, ensuring that your carpet doesn’t get all moppy.

The biggest difference with the j9+ is that rather than having to manually fill the robot’s clean-water tank before every mopping session, you can rely on a dock that includes a huge 3-liter clean water tank that can keep the robot topped off for a month. This also means that the robot can mop more effectively, since it can use more water when it needs to and then return to the dock midcycle to replenish if necessary.

This all does turn the dock into a bit of a monster. It’s a dock in the space-dock sense, not the boat-dock sense—it’s basically a garage for your Roomba, nothing like the low-profile charging docks that Roombas started out with. iRobot is obviously aware of this, so they’ve put some effort into making the dock look nice, and all of the guts can now be accessed from the front, making the top a usable surface.

The Combo j9+ comes with a beefy docking system that stores a month’s worth of clean water. Credit: iRobot

iRobot is not the only company offering hybrid vacuuming and mopping robots with beefy docks. But these have come with some pretty significant compromises, like with robots that just lift the mopping pad up when they encounter carpet rather than moving the pad out of the way entirely. This invariably results in a mopping pad dripping dirty water onto your carpet, which is not great. In iRobot’s internal testing, “we’ve seen competitive robots get materially worse,” says iRobot CEO Colin Angle. iRobot is hardly an unbiased party here, but there’s a reason that Roombas tend to be more expensive than their competitors, and iRobot argues that its focus on long-term reliability in the semi-structured environment of the home is what makes its robots worth the money.

Mapping and localization is a good example of this, Angle explains, which is why iRobot relies on vision rather than lasers. “Lasers are the fastest way to create a map, but a geometry-based solution is very brittle to a changing environment. They can’t handle a general rearranging of furniture in a room.” iRobot’s latest Roombas use cameras that look up toward the ceiling of rooms, tracking visual landmarks that don’t change very often: When was the last time you rearranged your ceiling? This allows iRobot to offer map stability that’s generational across robots.

iRobot did experiment with a depth sensor on the 2019 Roomba S9, but that technology hasn’t made it into a Roomba since. “I am currently happy with one single camera,” Angle tells us. “I don’t feel like anything we’re doing on the robot is constrained by not having a 3D sensor. And I don’t yet have arms on the robot; certainly if we’re doing manipulation, depth would become very important, but for the moment in time, I think there’s a lot more you can get out of a monocular camera. 3D is on our road map for when we’re going to do a step-change in functionality that doesn’t exist in the market today.”

So what’s left to automate here? What is the next generation Roomba going to offer that these latest ones don’t, besides maybe arms? The obvious thing is something that other robotic vacuum companies already offer: cleaning the grungy mopping pad by using a pad-washing system within the dock. “It’s a great idea,” says Angle, but iRobot has not been able to come up with a system that can do this to his satisfaction. You have to wash a robot’s mopping pad with something, like some kind of cleaning fluid, and then that used cleaning fluid has to go somewhere. So now you’re talking about yet another fluid reservoir (or two) that the user has to manage plus an even larger dock to hold all of it. “We don’t have a solution,” Angle says, although my assumption is that they’re working hard on something.

The water-related endpoint for floor care robots seems to be plumbing integration: a system that can provide clean water and accept dirty water on demand. A company called SwitchBot is already attempting to do this with a water docking station that links into undersink plumbing. It’s more functional than elegant, because I can promise you that zero people have a house designed around robot-accessible plumbing, but my guess is that new houses are going to start to get increasingly robot-optimized.

In the meantime, dealing with a dirty mopping pad is done the same way as with the previous model, the j7+: You remove the pad, which I promise is super easy to do, drop it in the laundry, and replace it with a clean one. It means that you have to physically interact with the robot at least once for every mopping cycle, which is something that iRobot is trying really hard to get away from, but it’s really not that big of an ask, all things considered.

One issue I foresee as Roombas get more and more hands-off is that it’ll get harder and harder to convince people to do maintenance on them. Roombas are arguably some of the most rugged robots ever made, considering that they live and work in semi-structured environments supervised by untrained users. But their jobs are based around spinning mechanical components in contact with the ground, and if you have pets or live with someone with long hair, you know what the underbelly of a Roomba can turn into. A happy Roomba is a Roomba that gets its bearings cleaned out from time to time, and back when we all had to empty our Roomba’s dustbin after every cleaning cycle, it was easy to just flip the robot over and do a quick hair extraction. But with Roombas now running unsupervised for weeks or months at a time, I worry for the health of their innards.

While it’s tempting to just focus on the new hardware here, what makes robots actually useful is increasingly dependent on software. The j9+ does things that are obvious in retrospect, like prioritizing what rooms to clean based on historical dirt measurements, doing a deeper cleaning of bathroom floors relative to hardwood floors, and using a new back-and-forth “scrubbing” trajectory. That last thing is actually not new at all; Evolution Robotics’ Mint was mopping that way back in 2010. But since iRobot acquired that company in 2012, we’ll give it a pass. And of course, the j9+ can still recognize and react to 80-something household objects, from wayward socks to electrical cords.

Mopping pad up! Credit: iRobot

I asked Angle what frustrates him the most about how people perceive robot vacuums:

“I wish customers didn’t have a honeymoon period where the robot’s ability to live up to expectations wasn’t ignored,” he told me. Angle explains that when consumers first get a robot, for at least the first few weeks, they cut it plenty of slack, frequently taking the blame for the robot getting lost or stuck. “During the early days of living with a robot, people think the robot is so much smarter than it actually is. In fact, if I’m talking to somebody about what they think their robot knows about every room in their house, it’s like, I’m sorry, but even with unlimited resources and time I have no idea how I’d get a Roomba to learn those things.” The problem with this, Angle continues, is that the industry is increasingly focused on optimizing for this honeymoon period, meaning that gimmicky features are given more weight than long-term reliability. For iRobot, which is playing the long game (just ask my Roomba 560 from 2010!), this may put the company at a disadvantage in the trigger-happy consumer market.

The Roomba Combo j9+ is available to ship 1 October for $1,399.99. There’s also a noncombo Roomba j9+, which includes a mopping function in the form of a swappable bin with a mopping pad attached and comes with a much smaller dock, for $899.99. If those prices seem excessive, that’s totally reasonable, because again, iRobot’s latest and greatest robots are always at a premium—but all of these new features will eventually trickle down into Roombas that are affordable for the rest of us.






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

We leverage tensegrity structures as wheels for a mobile robot that can actively change its shape by expanding or collapsing the wheels. Besides the shape-changing capability, using tensegrity as wheels offers several advantages over traditional wheels of similar size, such as shock-absorbing capability without added mass since tensegrity wheels are both lightweight and highly compliant. The robot can also jump onto obstacles up to 300 millimeters high with a bistable mechanism that can gradually store but quickly release energy.

[ Adaptive Robotics Lab ]

Meet GE Aerospace’s Sensiworm (Soft ElectroNics Skin-Innervated Robotic Worm), a highly intelligent, acutely sensitive soft robot that could serve as extra sets of eyes and ears for Aerospace service operators inside the engine. Deploying self-propelling, compliant robots like Sensiworm would give operators virtually unfettered access in the future to perform inspections without having to disassemble the engine.

[ GE ]

Why not Zoidberg?

[ Boston Dynamics ]

Traditional AI methods need hours, days, or even weeks to let a walking robot learn to walk, which quickly becomes impractical. This study overcomes the problem by introducing a novel bio-inspired integrative approach to develop neural locomotion control that enables a stick insect-like walking robot to learn how to walk within 20 seconds! The study not only proposes a solution for neural locomotion control but also offers insights into the neural equipment of the biological template. It also provides guidance for further developing advanced bio-inspired theory and simulations.

[ VISTEC ]

Thanks, Poramate!

At Hello Robotics, we are redefining the way humans and robots interact. Our latest creation, MAKI Pro, embodies our belief in empathic design—a principle that prioritizes the emotional and social dimensions of technology. MAKI Pro offers unique features such as animatronic eyes for enhanced eye contact, an embedded PC, and 17 points of articulation. Its speech capabilities are also powered by ChatGPT, adding an element of interaction that’s more natural. The compact design allows for easy placement on a desktop.

[ Hello Robotics ]

Thanks, Tim!

During the RoboNav project, autonomous driving tests were conducted in the Seetaler Alps in Austria. The tracked Mattro Rovo3 robot autonomously navigates to the selected goal within the operational area, considering alternative paths and making real-time decisions to avoid obstacles.

[ RoboNav ] via [ ARTI ]

Thanks, Lena!

NASA’s Moon rover prototype completed lunar lander egress tests.

[ NASA ]

In the early days of Hello Robot, Aaron Edsinger and Charlie Kemp created several prototype robots and tested them. This video from November 24, 2017 was taken as Charlie remotely operated a prototype robot in his unoccupied Atlanta home from rural Tennessee to take care of his family’s cat. Charlie remotely operated the robot on November 23, 24, 25, and 26. He successfully set out fresh food and water for the cat, put dirty dishes in the sink, threw away empty cat food cans, and checked the kitty litter.

[ Hello Robot ]

For a robot that looks nothing at all like a bug, this robot really does remind me of a bug.

[ Zarrouk Lab ]

Teaching quadrupedal robots to shove stuff, which actually seems like it might be more useful than it sounds.

[ RaiLab Kaist ]

The KUKA Innovation Award has been held annually since 2014 and is aimed at developers, graduates, and research teams from universities or companies. For this year’s award, applicants were asked to use the open interfaces of KUKA’s newly introduced robot operating system, iiQKA, and to add their own hardware and software components. Team JARVIS, from the Merlin Laboratory at Politecnico di Milano, ultimately emerged as the winner, convincing the jury with its Plug & Play method for programming collaborative robotics applications, which is fully integrated into the iiQKA ecosystem.

[ Kuka ]

Once a year, the FZI Research Center for Information Technology (FZI Forschungszentrum Informatik) offers a practical course for students at the Karlsruhe Institute of Technology (KIT) to learn about Biologically Motivated Robots. During the practical course, student teams develop solutions for a hide-and-seek challenge in which mobile robots (Boston Dynamics Spot, ANYbotics ANYmal, Clearpath Robotics Husky) must autonomously hide and find each other.

[ FZI ]

A couple of IROS 35th Anniversary plenary talks from Kyoto last year, featuring Marc Raibert and Roland Siegwart.

[ IROS ]

Are robots on the verge of becoming human-like and taking over most jobs? When will self-driving cars be cost-effective? What challenges in robotics will be solved by Large Language Models and generative AI?
Although renowned roboticist Ruzena Bajcsy recently retired from Berkeley, she will return to discuss her insights on how robotics research has evolved over the past half-century with five senior colleagues who have combined research experience of over 200 years.

[ Berkeley ]



How do you land on an asteroid? A lot of very talented engineers have thought about it. Putting a robotic spacecraft down safely on a moon or planet is hard enough, with the pull of gravity to keep you humble. But when it comes to an asteroid, where gravity may be a few millionths of what it is on Earth, is “landing” even the right word?

NASA’s OSIRIS-REx mission is due back on Earth on 24 September after a seven-year voyage to sample the regolith of the asteroid 101955 Bennu—and in that case, mission managers decided not even to risk touching down on Bennu’s rocky crust. “We don’t want to deal with the uncertainty of the actual contact with the surface any longer than necessary,” said Mike Moreau, the deputy mission manager, back in 2020. They devised a scheme to poke the asteroid with a long sampling arm; the ship spent more than two years orbiting Bennu and all of 16 seconds touching it.

Maybe landing is a job for a softbot—a shape-shifting articulated spacecraft of the sort that Jay McMahon and colleagues at the University of Colorado in Boulder have been working on for more than six years. You can call them AoES—short for Area-of-Effect Softbots. The renderings of one resemble a water lily.

That’s not entirely by accident. A bit like a floating lily, the softbot has a lot of surface area relative to its mass. So, if there isn’t much gravity to work with, it can maneuver using much smaller forces—such as electro-adhesion, solar radiation, and van der Waals attraction between molecules. (If you’re not familiar with van der Waals forces, think of a gecko sticking to a wall.)

“There are electrostatic forces that will act and are not insignificant in the asteroid environment,” says McMahon. “It’s just a weird place, where gravity is so weak that those forces that exist on Earth, which we basically ignore because they’re so insignificant—you can take advantage of them in interesting ways.”
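A quick back-of-the-envelope comparison shows why. In the sketch below, the asteroid’s mass and radius are roughly Bennu-like, but the softbot’s mass, petal contact area, and adhesion pressure are illustrative guesses rather than numbers from the Colorado team:

```python
# Rough comparison of gravitational vs. adhesive forces on a small asteroid.
# Asteroid parameters are approximately those of Bennu; the softbot mass,
# petal contact area, and adhesion pressure are illustrative assumptions.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
asteroid_mass = 7.3e10  # kg (roughly Bennu)
asteroid_radius = 245   # m (roughly Bennu)

softbot_mass = 10.0         # kg, assumed
petal_area = 1.0            # m^2 of contact area, assumed
adhesion_pressure = 100.0   # Pa from electro-adhesion / van der Waals, assumed

g_surface = G * asteroid_mass / asteroid_radius**2
weight = softbot_mass * g_surface                # force pinning the robot down by gravity
adhesion_force = adhesion_pressure * petal_area  # force from the petals gripping the surface

print(f"surface gravity: {g_surface:.1e} m/s^2 (~{g_surface/9.81:.1e} of Earth's)")
print(f"gravitational weight: {weight*1000:.2f} mN")
print(f"adhesive grip: {adhesion_force:.0f} N ({adhesion_force/weight:.0f}x the weight)")
```

Even with a very modest assumed adhesion pressure, the grip from the petals dwarfs the robot’s roughly millinewton gravitational weight by several orders of magnitude, which is the whole point of trading thrusters for surface forces.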

It’s important to say, before we go further, that space softbots are a long-term idea, on the back burner for now. McMahon’s team got some funding in 2017 from NIAC, the NASA Innovative Advanced Concepts program; more recently they’ve been researching whether they can apply some of their technology to on-orbit servicing of satellites or removal of space junk. McMahon has also been a scientist on other missions, including OSIRIS-REx and DART, which famously crashed into a small asteroid last year to change the asteroid’s orbital path.

A problem with small asteroids is that many of them—perhaps most—aren’t solid boulders. If they are less than 10 kilometers in diameter, the chances are high that they are so-called rubble piles—agglomerations of rock, metal, and perhaps ice that are held together, in part, by the same weak forces AoES probes would use to explore them. Rubble piles are risky for spacecraft: When OSIRIS-REx gently bumped the surface of Bennu with its sampling arm, scientists were surprised to see that it broke right through with minimal resistance, sending a shower of rock and dirt in all directions.

An even softer approach may be in order if you want to set a robot down on a rubble-pile asteroid. If you’re going to explore the asteroid, or perhaps mine it, you need a way to approach it and then settle on the surface without making a mess of it. Early missions tried harpoons and thrusters, and had a rough time.

In this rendering, the softbot spreads its limbs to stick to the asteroid while it digs up debris.The University of Colorado Boulder

“You need to find a way to hold yourself down, but you also need to find a way to not sink in if it’s too soft,” says McMahon. “And so that’s where this big-area idea came from.”

McMahon and his team wrote in a 2018 report for NASA that they can envision a softbot, or a fleet of them, flown into orbit around an asteroid by a mother ship. The petals might be made partly of silicone elastomers, flexible materials that have been used on previous spacecraft. In early renditions the petals formed a single large disc; the flower design turned out to be more efficient. When they’re spread out straight (perhaps extending a few meters), they could act as a solar sail of sorts, slowly guiding the softbot to the surface and curling up to cushion the landing if necessary. Then they could change shape to conform to the asteroid’s own, perhaps attracting themselves to it naturally with van der Waals forces, supplemented with a small electrical charge.

The charge need not be very strong; more important is that the petals be large enough that, when spread out over the surface, they cumulatively create a good grip. McMahon and his colleagues suggest the charge could be turned on and off with HASEL (short for Hydraulically Amplified Self-Healing Electrostatic) actuators, perhaps only affecting one part of a petal at a time.

What about actually digging into the asteroid or kicking up rock for the mother ship to recover? The limbs should hold the spacecraft down while a sampling tool does its work. What if your spacecraft lands in a bad place, or you want to move on to another part of the asteroid? Bend the petals and the softbot can crawl along, a little like a caterpillar. If necessary, the spacecraft can slowly “hop” from one spot to another, straightening its petals again as solar sails to steer. Importantly, they operate without using much fuel, which is heavy, limited in quantity, and probably not something you want contaminating the asteroid.
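If you want to picture how that petal-by-petal crawl might be sequenced, here’s an entirely hypothetical control-loop sketch. The grip/release calls stand in for switchable adhesion (for example, HASEL-driven electro-adhesion), and none of the function names or timing come from the Colorado team’s design:

```python
# Hypothetical caterpillar-style crawl cycle for a petal-based softbot.
# grip()/release() stand in for switchable electro-adhesion (e.g. HASEL-driven),
# bend()/straighten() for petal actuation. All names and timings are assumptions.
import time

FRONT, REAR = "front petals", "rear petals"

def grip(petals):       print(f"adhesion ON   -> {petals}")
def release(petals):    print(f"adhesion OFF  -> {petals}")
def bend(petals):       print(f"bending       -> {petals} (pulls body forward)")
def straighten(petals): print(f"straightening -> {petals} (extends reach)")

def crawl_step(pause=0.1):
    """One caterpillar-like step: anchor the rear, reach with the front, then swap."""
    grip(REAR); release(FRONT)
    straighten(FRONT)        # reach forward while the rear holds the surface
    time.sleep(pause)
    grip(FRONT); release(REAR)
    bend(FRONT)              # contract, dragging the rear of the body forward
    time.sleep(pause)

for _ in range(3):           # crawl three steps
    crawl_step()
```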

Though the Colorado team was very thorough in designing its softbot concept, there are obviously countless details still to be worked out—issues of guidance, navigation, power, mass, and many others, to say nothing of the economics and political maneuvering needed to launch a new technology. AoES vehicles as currently designed may never fly, but ideas from them may find their way into spacecraft of the future. In McMahon’s words, “This concept elegantly overcomes many of the difficulties.”



Chemical Artificial Intelligence (CAI) is a brand-new research line that exploits molecular, supramolecular, and systems chemistry in wetware (i.e., in fluid solutions) to imitate some capabilities of human intelligence and to promote unconventional robotics based on molecular assemblies, which act in the microscopic world that humans otherwise find hard to access. It is undoubtedly worth spreading the news that AI researchers can rely on the help of chemists and biotechnologists to reach the ambitious goals of building intelligent systems from scratch. This article reports the first attempt at building a Chemical Artificial Intelligence knowledge map and describes the basic intelligent functions that can be implemented through molecular and supramolecular chemistry. Chemical Artificial Intelligence provides new tools and concepts for mimicking human intelligence because it shares the same principles and materials as biological intelligence, enabling peculiar dynamics that may not be accessible in the software and hardware domains. Moreover, the development of Chemical Artificial Intelligence will contribute to a deeper understanding of the strict link between intelligence and life, two of the most remarkable emergent properties of the complex systems we call biological organisms.

Introduction: Using anthropomorphic features in industrial robots is a prevalent strategy aimed at enhancing their perception as collaborative team partners and promoting increased tolerance for failures. Nevertheless, recent research highlights potential drawbacks of this approach. It is still largely unknown how anthropomorphic framing influences the dynamics of trust, especially in the context of different failure experiences.

Method: The current laboratory study aimed to close this research gap. Fifty-one participants interacted with a robot that was framed either anthropomorphically or technically. In addition, each robot produced either a comprehensible or an incomprehensible failure.

Results: The analysis revealed no differences in general trust toward the technically and the anthropomorphically framed robots. Nevertheless, the anthropomorphic robot was perceived as more transparent than the technical robot. Furthermore, the robot’s purpose was perceived more positively after participants experienced a comprehensible failure.

Discussion: The higher perceived transparency of anthropomorphically framed robots might be a double-edged sword, as the actual transparency did not differ between the two conditions. In general, the results show that it is essential to consider trust multidimensionally, since a unidimensional approach focused mainly on performance might overshadow important facets of trust such as transparency and purpose.



Yesterday, Clearpath Robotics of Kitchener, Ontario, Canada (and Clearpath’s mobile logistics robot division, OTTO Motors) announced that it was being acquired by Milwaukee-based Rockwell Automation for an undisclosed amount.

The press release (which comes from Rockwell, not Clearpath) focuses exclusively on robotics for industrial applications. That is, on OTTO Motors’ Autonomous Mobile Robots (AMRs) in the context of production logistics. If you take a look at what Rockwell does, this makes sense: as an automation company, it isn’t typically doing what most of us would think of as “robotics,” in the sense that the mechanical systems it automates don’t do the kind of dynamic decision making that (in my opinion) distinguishes robots from machines. So the OTTO Motors AMRs (and the people at OTTO who get them to autonomously behave themselves) provide an important and forward-looking addition to what Rockwell can offer in an industrial context.

That’s all fine and dandy as far as OTTO Motors goes. What worries me, though, is that there’s zero mention of Clearpath’s well-known and much loved family of yellow and black research robots. This includes the Husky UGV, arguably the standard platform for mobile robotics research and development, as well as the slightly less yellow but just as impactful Turtlebot 4, announced barely a year ago in partnership with iRobot and Open Robotics.

With iRobot, Open Robotics, and now Clearpath all getting partially or wholly subsumed (or consumed?) by other companies that have their own priorities, it’s hard not to be concerned about what’s going to happen to these hardware and software platforms (including Turtlebot and ROS) that have provided the foundation for so much robotics research and education. Clearpath in particular has been a pillar of the ROS community since there’s been a ROS community, and it’s unclear how things are going to change going forward.

We’ve reached out to Clearpath to hopefully get a little bit of clarity on all this stuff, and we’ll have an update as soon as we can.
