Feed aggregator

Soft robotics and wearable devices are promising technologies due to their flexibility. As human-soft robot interaction technologies advance, interest in stretchable sensor devices has increased. Currently, the main challenge in developing stretchable sensors is producing high-quality sensors via a simple and cost-effective method. This study introduces a do-it-yourself (DIY) approach to fabricating a carbon nanotube (CNT) powder-based stretchable sensor. The fabrication strategy uses an automatic brushing machine to pattern CNT powder on an elastomer. The elastomer ingredients are optimized to increase the elastomer's compatibility with the brushing method. We found that polydimethylsiloxane-polyethyleneimine (PDMS-PEIE) is 50% more stretchable and 63% stickier than the previously reported PDMS 30-1. With these improved elastomer characteristics, the PDMS-PEIE/multiwalled CNT (PDMS-PEIE/MWCNT-1) strain sensor realizes a gauge factor of 6.2-8.2 and a response time as fast as 25 ms. To make the powder-based stretchable sensor more suitable for wearable devices, the sensor is laminated with a thin Ecoflex membrane. Additionally, system integration of the stretchable sensors is demonstrated by embedding them into a cotton glove, together with a microcontroller, to control a virtual hand. This cost-effective DIY approach is expected to contribute greatly to the development of wearable devices, since the technology is simple, economical, and reliable.
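
For context, the gauge factor quoted in the abstract is the ratio of relative resistance change to applied strain. A minimal sketch of the calculation, using made-up resistance readings rather than the paper's data:

```python
# Gauge factor of a strain sensor: GF = (delta_R / R0) / strain.
# The resistance values below are hypothetical, not data from the paper.

def gauge_factor(r0: float, r: float, strain: float) -> float:
    """r0: unstrained resistance (ohms), r: strained resistance (ohms),
    strain: elongation as a fraction of original length."""
    return ((r - r0) / r0) / strain

r0 = 10_000.0   # resistance at rest (assumed)
r = 16_200.0    # resistance at 10% strain (assumed)
print(f"GF = {gauge_factor(r0, r, strain=0.10):.1f}")  # -> GF = 6.2
```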



It turns out that you don't need a lot of hardware to make a flying robot. Flying robots are usually way, way, way over-engineered, with ridiculously over-the-top components like two whole wings or an obviously ludicrous four separate motors. Maybe that kind of stuff works for people with more funding than they know what to do with, but for anyone trying to keep to a reasonable budget, all it actually takes to make a flying robot is a single airfoil plus an attached fixed-pitch propeller. And if you make that airfoil flexible, you can even fold the entire thing up into a sort of flying robotic Swiss roll.

This type of drone is called a monocopter, and the design is very generally based on samara seeds, which are those single-wing seed pods that spin down from maple trees. The ability to spin slows the seeds' descent to the ground, allowing them to spread farther from the tree. It's an inherently stable design, meaning that it'll spin all by itself and do so in a stable and predictable way, which is a nice feature for a drone to have—if everything completely dies, it'll just spin itself gently down to a landing by default.

The monocopter we're looking at here, called F-SAM, comes from the Singapore University of Technology & Design, and we've written about some of their flying robots in the past, including this transformable hovering rotorcraft. F-SAM stands for Foldable Single Actuator Monocopter, and as you might expect, it's a monocopter that can fold up and uses just one single actuator for control.

There may not be a lot going on here hardware-wise, but that's part of the charm of this design. The one actuator gives complete flight control: increasing the throttle increases the RPM of the aircraft, causing it to gain altitude, which is pretty straightforward. Directional control is trickier, but not much trickier: it requires pulsing the motor once per revolution, at the point in the aircraft's spin when the wing is pointed in the direction you want it to go. F-SAM is operating in a motion-capture environment in the video to explore its potential for precision autonomy, but it's not restricted to that environment and doesn't require external sensing for control.
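
To sketch what that once-per-revolution pulsing might look like in software (the throttle values, window width, and function names below are illustrative assumptions, not details from the F-SAM paper):

```python
def throttle_command(heading_deg: float, target_deg: float,
                     base_throttle: float, pulse: float,
                     window_deg: float = 30.0) -> float:
    """Cyclic control for a monocopter: add a brief thrust pulse once
    per revolution, while the wing points toward the target heading.
    The numbers here are illustrative, not from the F-SAM paper."""
    # Smallest signed angle between current heading and target.
    error = (heading_deg - target_deg + 180.0) % 360.0 - 180.0
    if abs(error) < window_deg / 2.0:
        return base_throttle + pulse  # pulse -> net motion toward target
    return base_throttle              # otherwise hold hover throttle

# Example: spinning at 5 rev/s, sampled every 10 ms.
for t_ms in range(0, 200, 10):
    heading = (t_ms / 1000.0) * 5.0 * 360.0 % 360.0
    print(t_ms, throttle_command(heading, 90.0, 0.60, 0.15))
```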

While F-SAM's control board was custom designed and the wing requires some fabrication, the rest of the parts are cheap and off the shelf. The total weight of F-SAM is just 69 g, of which nearly 40% is battery, yielding a flight time of about 16 minutes. If you look closely, you'll also see a teeny little carbon fiber leg of sorts that keeps the prop clear of the ground, enabling F-SAM to take off from the ground on its own.

You can find the entire F-SAM paper open access here, but we also asked the authors a couple of extra questions.

IEEE Spectrum: It looks like you explored different materials and combinations of materials for the flexible wing structure. Why did you end up with this mix of balsa wood and plastic?

Shane Kyi Hla Win: The wing structure of a monocopter requires rigidity in order to be controllable in flight. Although it is possible for the monocopter to fly with the more flexible materials we tested, such as flexible plastic or polyimide flex, they allow the wing to twist freely mid-flight, making the cyclic control effort from the motor less effective. The balsa laminated with plastic provides enough rigidity for effective control, while still allowing the wing to fold along a pre-determined triangular fold.

Can F-SAM fly outdoors? What is required to fly it outside of a motion capture environment?

Yes, it can fly outdoors. It is passively stable, so it does not require closed-loop control to fly. The motion-capture environment provides its absolute position for station-holding and waypoint flights when indoors. For outdoor flight, an electronic compass provides the relative heading for basic cyclic control. We are working on a prototype with an integrated GPS for outdoor autonomous flights.

Would you be able to add a camera or other sensors to F-SAM?

A camera can be added (we have done this before), but because the aircraft spins, the captured images can come out blurry. 360-degree cameras are becoming lighter and smaller, and we may try putting one on F-SAM or on our other monocopters. Other possible sensors include LiDAR or time-of-flight (ToF) sensors. With LiDAR, the platform has an advantage because it is already spinning at a known RPM: a conventional LiDAR system requires a dedicated actuator to create the spinning motion, whereas F-SAM already possesses the natural spinning dynamics, making LiDAR integration lighter and more efficient.

Your paper says that "in the future, we may look into possible launching of F-SAM directly from the container, without the need for human intervention." Can you describe how this would happen?

Currently, F-SAM can be folded into a compact form and stored inside a container. However, it still requires a human to unfold it and either hand-launch it or put it on the floor to fly off. In the future, we envision that F-SAM could be put inside a container with a mechanism (such as pressurized gas) to catapult the folded unit into the air, where it can begin unfolding immediately thanks to the elastic materials used. The motor can then initiate the spin, which allows the wing to straighten out under centrifugal force.

Do you think F-SAM would make a good consumer drone?

F-SAM could be a good toy, but it may not be a good alternative to quadcopters if the objective is conventional aerial photography or videography. However, it can be a good contender for single-use GPS-guided reconnaissance missions. As it uses only one actuator for flight, it can be made relatively cheaply. It is also very quiet in flight and easily camouflaged once landed. Various lightweight sensors can be integrated into the platform for different types of missions, such as climate monitoring. F-SAM units can be deployed from the air, since they can autorotate on the way down while also flying under power at certain periods for extended meteorological data collection in the air.

What are you working on next?

We have a few exciting projects on hand, most of which focus on a 'do more with less' theme. This means our projects aim to achieve multiple missions and flight modes while using as few actuators as possible. Like F-SAM, which uses only one actuator to achieve controllable flight, another project we are working on is a fully autorotating version, named Samara Autorotating Wing (SAW). This platform, published earlier this year in IEEE Transactions on Robotics, is able to achieve two flight modes (autorotation and diving) with just one actuator. It is ideal for deploying single-use sensors to remote locations. For example, we can use the platform to deploy sensors for forest monitoring or a wildfire alert system. The sensors can land on tree canopies, and once landed, the wing provides the area needed for capturing solar energy for persistent operation over several years. Another interesting scenario is using the autorotating platform to guide radiosondes back to a collection point once their journey upward is complete. Currently, many radiosondes are sent up with hydrogen balloons from weather stations all across the world (more than 20,000 annually from Australia alone), and once a balloon reaches high altitude and bursts, the sensors drop back to earth and no effort is spent retrieving them. By guiding these sensors back to a collection point, millions of dollars can be saved every year, and also [it helps] save the environment by polluting less.



Late last year, Japanese robotics startup GITAI sent their S1 robotic arm up to the International Space Station as part of a commercial airlock extension module to test out some useful space-based autonomy. Everything moves pretty slowly on the ISS, so it wasn't until last month that NASA astronauts installed the S1 arm and GITAI was able to put the system through its paces—or rather, sit in comfy chairs on Earth and watch the arm do most of its tasks by itself, because that's the dream, right?

The good news is that everything went well, and the arm did everything GITAI was hoping it would do. So what's next for commercial autonomous robotics in space? GITAI's CEO tells us what they're working on.

In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).

One of the advantages of working in space is that it's a highly structured environment. Microgravity can be somewhat unpredictable, but you have a very good idea of the characteristics of objects (and even of lighting) because everything that's up there is excessively well defined. So, stuff like using a two-finger gripper for relatively high precision tasks is totally possible, because the variation that the system has to deal with is low. Of course, things can always go wrong, so GITAI also tested teleop procedures from Houston to make sure that having humans in the loop was also an effective way of completing tasks.

Since full autonomy is vastly more difficult than almost full autonomy, occasional teleop is probably going to be critical for space robots of all kinds. We spoke with GITAI CEO Sho Nakanose to learn more about their approach.

IEEE Spectrum: What do you think is the right amount of autonomy for robots working inside of the ISS?

Sho Nakanose: We believe that a combination of 95% autonomous control and 5% remote judgment and remote operation is the most efficient way to work. In this ISS demonstration, all the work was performed with 99% autonomous control and 1% remote decision making. However, in actual operations on the ISS, irregular tasks will occur that cannot be handled by autonomous control, and such irregular tasks should be handled by remote control from the ground, so we believe that a final ratio of about 5% remote judgment and remote operation will be the most efficient.

GITAI will apply the general-purpose autonomous space robotics technology, know-how, and experience acquired through this tech demo to develop extra-vehicular robotics (EVR) that can execute docking, repair, and maintenance tasks for On-Orbit Servicing (OOS) or conduct various activities for lunar exploration and lunar base construction. -Sho Nakanose

I'm sure you did many tests with the system on the ground before sending it to the ISS. How was operating the robot on the ISS different from the testing you had done on Earth?

The biggest difference between experiments on the ground and on the ISS is the microgravity environment, but that was not that difficult to cope with. However, experiments on the ISS, an environment we had never been to before, are subject to a variety of unexpected situations that were extremely difficult to deal with; for example, an unexpected communication breakdown occurred due to a failed thruster-firing experiment on the Russian module. However, we were able to solve all the problems because the development team had carefully prepared for such irregularities in advance.

It looked like the robot was performing many tasks using equipment designed for humans. Do you think it would be better to design things like screws and control panels to make them easier for robots to see and operate?

Yes, I think so. Unlike the ISS, which was built in the past, the lunar-orbiting space station Gateway and the lunar bases that will be built in the future are expected to be places where humans and robots cooperate and work together. Therefore, it is necessary to devise and implement interfaces that are easy for both humans and robots to use. In 2019, GITAI received an order from JAXA to develop guidelines for such interfaces on the ISS and Gateway.

What are you working on next?

We are planning to conduct an on-orbit extra-vehicular demonstration in 2023 and a lunar demonstration in 2025. We are also working on space robot development projects for several customers for which we have already received orders.



Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which had not yet been examined through the lens of the general public. We present two studies (N = 3,559) that obtain people's views of electronic legal personhood vis-à-vis existing liability models. Our studies reveal people's desire to punish automated agents even though these entities are not recognized as having any mental state. Furthermore, people did not believe that punishing automated agents would fulfill deterrence or retribution, and they were unwilling to grant such agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

I don't know how much this little quadruped from DeepRobotics costs, but the video makes it look scarily close to a consumer product.

Jueying Lite2 is an intelligent quadruped robot independently developed by DeepRobotics. Based on advanced control algorithms, it has multiple motion modes such as walking, sliding, jumping, running, and back somersaults. It supports freely stackable intelligent modules, enabling autonomous positioning and navigation, real-time obstacle avoidance, and visual recognition. Its user-oriented design includes functions such as voice interaction, sound-source localization, and safety and collision avoidance, giving users a better interactive experience and safety assurance.

[ DeepRobotics ]

We hope that this video can assist the community in explaining what ROS is, who uses it, and why it is important to those unfamiliar with ROS.

https://vimeo.com/639235111/9aa251fdb6

[ ROS.org ]

Boston Dynamics should know better than to post new videos on Fridays (as opposed to Thursday nights, when I put this post together every week), but if you missed this last week, here you go.

Robot choreography by Boston Dynamics and Monica Thomas.

[ Boston Dynamics ]

DeKonBot 2: for when you want things really, really, really, slowly clean.

[ Fraunhofer ]

Who needs Digit when Cassie is still hard at work!

[ Michigan Robotics ]

I am not making any sort of joke about sausage handling.

[ Soft Robotics ]

A squad of mini rovers traversed the simulated lunar soils of NASA Glenn's SLOPE (Simulated Lunar Operations) lab recently. The shoebox-sized rovers were tested to see if they could navigate the conditions of hard-to-reach places such as craters and caves on the Moon.

[ NASA Glenn ]

This little cyclocopter is cute, but I'm more excited for the teaser at the end of the video.

[ TAMU ]

Fourteen years ago, a team of engineering experts and Virginia Tech students competed in the 2007 DARPA Urban Challenge and propelled Torc to success. We look forward to many more milestones as we work to commercialize autonomous trucks.

[ Torc ]

Blarg not more of this...

Show me the robot prepping those eggs and doing the plating, please.

[ Moley Robotics ]

ETH Zurich's unique non-profit project continues! From 25 to 27 October 2024, the third edition of the CYBATHLON will take place in a global format. To the original six disciplines, two more are added: a race using smart visual assistive technologies and a race using assistive robots. As a platform, CYBATHLON challenges teams from around the world to develop everyday assistive technologies for, and in collaboration with, people with disabilities.

[ Cybathlon ]

Will drone deliveries be a practical part of our future? We visit the test facilities of Wing to check out how their engineers and aircraft designers have developed a drone and drone fleet control system that is actually in operation today in parts of the world.

[ Tested ]

In our third Self-Driven Women event, Waymo engineering leads Allison Thackston, Shilpa Gulati, and Congcong Li talk about some of the toughest and most interesting problems in ML and robotics and how solving them enables building a scalable autonomous driving tech stack. They also discuss their respective career journeys and answer live questions from the virtual audience.

[ Waymo ]

The Robotics and Automation Society Student Activities Committee (RAS SAC) is proud to present “Transition to a Career in Academia,” a panel with robotics thought leaders. This panel is intended for robotics students and engineers interested in learning more about careers in academia after earning their degree. The panel will be moderated by RAS SAC Co-Chair, Marwa ElDinwiny.

[ IEEE RAS ]

This week's CMU RI Seminar is from Siddharth Srivastava at Arizona State, on The Unusual Effectiveness of Abstractions for Assistive AI.

[ CMU RI ]



Improvisation is a hallmark of human creativity and serves a functional purpose in completing everyday tasks with novel resources. This is particularly exhibited in tool-using tasks: When the expected tool for a task is unavailable, humans often are able to replace the expected tool with an atypical one. As robots become more commonplace in human society, we will also expect them to become more skilled at using tools in order to accommodate unexpected variations of tool-using tasks. In order for robots to creatively adapt their use of tools to task variations in a manner similar to humans, they must identify tools that fulfill a set of task constraints that are essential to completing the task successfully yet are initially unknown to the robot. In this paper, we present a high-level process for tool improvisation (tool identification, evaluation, and adaptation), highlight the importance of tooltips in considering tool-task pairings, and describe a method of learning by correction in which the robot learns the constraints from feedback from a human teacher. We demonstrate the efficacy of the learning by correction method for both within-task and across-task transfer on a physical robot.
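
To make the "learning by correction" idea concrete, here is a minimal sketch of one way a robot could prune candidate task constraints each time a human teacher approves a tool choice; the tools, the property sets, and the intersection-style update are illustrative assumptions, not the paper's actual representation or algorithm:

```python
# Sketch of constraint learning from tool corrections (hypothetical
# representation: each tool is a set of physical properties, and task
# constraints are the properties a suitable tool must have).

TOOL_PROPERTIES = {
    "spatula": {"rigid", "flat_tip", "long_handle"},
    "sponge":  {"soft", "flat_tip"},
    "hammer":  {"rigid", "long_handle", "heavy"},
}

def refine_constraints(candidates: set, accepted_tool: str) -> set:
    """Version-space-style update: keep only the candidate constraints
    that the teacher-approved tool actually satisfies."""
    return candidates & TOOL_PROPERTIES[accepted_tool]

# Start by assuming every observed property might be a constraint,
# then shrink the set as the teacher corrects tool choices.
candidates = set().union(*TOOL_PROPERTIES.values())
candidates = refine_constraints(candidates, accepted_tool="spatula")
print(candidates)  # {'rigid', 'flat_tip', 'long_handle'}
```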

This paper presents a framework for programming in-contact tasks using learning by demonstration. The framework is demonstrated on an industrial gluing task, showing that a high-quality robot behavior can be programmed from a single demonstration. A unified controller structure is proposed for the demonstration and execution of in-contact tasks that eases the transition from an admittance controller for demonstration to parallel force/position control for execution. The proposed controller is adapted according to the geometry of the task constraints, which is estimated online during the demonstration. In addition, the controller gains are adapted to the human's behavior during demonstration to improve the quality of the demonstration. The considered gluing task requires the robot to alternate between free motion and in-contact motion; hence, an approach for minimizing contact forces when switching between the two situations is presented. We evaluate the proposed system in a series of experiments, where we show that we are able to estimate the geometry of a curved surface, that our adaptive controller for demonstration allows users to achieve higher accuracy in a shorter demonstration time compared to an off-the-shelf teaching controller implemented on a collaborative robot, and that our execution controller is able to reduce impact forces and apply a constant process force while adapting to the surface geometry.
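
As background on the controller structures mentioned above, here is a minimal single-axis admittance-control loop of the kind commonly used for kinesthetic demonstration; the virtual mass, damping, and time step are illustrative values, not the paper's parameters:

```python
# Single-axis admittance controller: the robot renders a virtual
# mass-damper so a human can push the tool around during demonstration.
# M, B, and DT are illustrative values, not from the paper.
M, B, DT = 2.0, 25.0, 0.001   # virtual mass (kg), damping (Ns/m), step (s)

def admittance_step(f_human: float, v: float) -> float:
    """Map measured human force to a commanded velocity update by
    integrating M * dv/dt + B * v = f_human one time step."""
    dv = (f_human - B * v) / M
    return v + dv * DT

v = 0.0
for _ in range(1000):          # simulate 1 s of a constant 5 N push
    v = admittance_step(5.0, v)
print(f"steady-state velocity ~ {v:.3f} m/s (expect 5/25 = 0.2)")
```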

Communication apprehension (CA), defined as anxiety in oral communication, and anxiety in eye contact (AEC), defined as the discomfort felt in communication while being stared at by others, limit communication effectiveness. In this study, we examined whether using a teleoperated robot avatar in a video teleconference provides communication support to people with CA and AEC. We propose a robotic telecommunication system in which a user has two options for producing utterances in an online interaction with an interviewer: either through a robot avatar that faces the interviewer, or by speaking directly. Two imagination-based experiments were conducted in which a total of 400 participants (200 per experiment) watched videos of interview scenes with or without the proposed system. The participants then evaluated their impressions by imagining that they were the interviewee. In the first experiment, a video conference with the proposed system was compared with an ordinary video conference in which the interviewer and interviewee faced each other. In the second experiment, it was compared with an ordinary video conference in which the interviewer's attentional focus was directed away from the interviewee. A significant decrease in the expected CA and AEC of participants with the proposed system was observed in both experiments, whereas a significant increase in the expected sense of being attended (SoBA) was observed in the second experiment. This study contributes to the literature by examining the expected impact of using a teleoperated robot avatar for better video conferences, especially for supporting individuals with CA and AEC.

Assessment of minimally invasive surgical skills is a non-trivial task, usually requiring the presence and time of expert observers, involving subjectivity, and requiring special and expensive equipment and software. Although there are virtual simulators that provide self-assessment features, they are limited in that the trainee loses the immediate feedback of realistic physical interaction. Physical training boxes, on the other hand, preserve the immediate physical feedback but lack automated self-assessment facilities. This study develops an algorithm for real-time tracking of laparoscopy instruments in the video feed of a standard physical laparoscopy training box with a single fisheye camera. The developed visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. With such a system, the extracted instrument trajectories can be digitally processed and automated self-assessment feedback can be provided. In this way, the physical interaction feedback is preserved while the need for an expert observer is eliminated. Real-time instrument tracking with a suitable assessment criterion would constitute a significant step toward providing real-time (immediate) feedback that corrects trainee actions and shows how an action should be performed. This study is a step toward achieving this with a low-cost, automated, and widely applicable laparoscopy training and assessment system using a standard physical training box equipped with a fisheye camera.
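
As a rough sketch of the marker-based tracking step (2D only; the real system also undistorts the fisheye image and recovers 3D tip positions, which this sketch omits, and the HSV thresholds and camera index are invented for illustration):

```python
import cv2
import numpy as np

# 2D tracking of a colored tape marker on an instrument shaft.
# HSV bounds below are invented for a green marker; tune for real tape.
LOWER = np.array([40, 80, 80])
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)          # training-box camera (index assumed)
for _ in range(300):               # process ~300 frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)   # keep only marker pixels
    m = cv2.moments(mask)
    if m["m00"] > 0:                        # marker visible this frame?
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"marker centroid: ({cx:.0f}, {cy:.0f}) px")
cap.release()
```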

Principles from human-human physical interaction may be necessary to design more intuitive and seamless robotic devices to aid human movement. Previous studies have shown that light touch can aid balance and that haptic communication can improve performance of physical tasks, but the effects of touch between two humans on walking balance had not been previously characterized. This study examines physical interaction between two persons when one person aids another in performing a beam-walking task. Twelve pairs of healthy young adults held a force sensor with one hand while one person walked on a narrow balance beam (2 cm wide × 3.7 m long) and the other person walked overground by their side. We compare balance performance during partnered vs. solo beam-walking to examine the effects of haptic interaction, and we compare hand interaction mechanics during partnered beam-walking vs. overground walking to examine how the interaction aided balance. While holding the hand of a partner, participants were able to walk farther on the beam without falling, reduce lateral sway, and decrease angular momentum in the frontal plane. We measured small hand force magnitudes (mean of 2.2 N laterally and 3.4 N vertically) that created opposing torque components about the beam axis, and we calculated the interaction torque: the overlapping opposing torque that does not contribute to motion of the beam-walker's body. We found higher interaction torque magnitudes during partnered beam-walking vs. partnered overground walking, and a correlation between interaction torque magnitude and reductions in lateral sway. To gain insight into feasible controller designs that emulate human-human physical interaction for aiding walking balance, we modeled the relationship between each torque component and the motion of the beam-walker's body as a mass-spring-damper system. Our model results show opposite types of mechanical elements (active vs. passive) for the two torque components. Our results demonstrate that hand interactions aid balance during partnered beam-walking by creating opposing torques that primarily serve haptic communication, and our model of the torques suggests control parameters for implementing human-human balance aid in human-robot interactions.
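
Written out explicitly, the mass-spring-damper fit described above takes the generic second-order form below; the symbols are generic, with one such model fitted per torque component:

```latex
% Second-order model relating a hand-torque component \tau_i(t) to the
% beam-walker's lateral sway x(t); m, b, k are the fitted mass-,
% damper-, and spring-like parameters.
\tau_i(t) = m\,\ddot{x}(t) + b\,\dot{x}(t) + k\,x(t)
```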

This paper presents an observer architecture that can estimate a set of configuration space variables, their rates of change, and the contact forces of a fabric-reinforced inflatable soft robot. We discretized the continuum robot into a sequence of discs connected by inextensible threads; this allows great flexibility when describing the robot's behavior. First, the system dynamics are described by a linear parameter-varying (LPV) model that includes a set of subsystems, each of which corresponds to a particular range of chamber pressure. A real-world challenge we confront is that the physical robot prototype exhibits a hysteresis loop whose direction depends on whether the chamber is inflating or deflating. In this paper we transform the hysteresis model into a semilinear model to avoid backward-in-time definitions, making it suitable for observer and controller design. The final model describing the soft robot, including the discretized continuum and hysteresis behavior, is called the semilinear parameter-varying (SPV) model. The semilinear parameter-varying observer architecture includes a set of sub-observers corresponding to the subsystems for each chamber-pressure range in the SPV model. The proposed observer is evaluated through simulations and experiments. Simulation results show that the observer can estimate the configuration space variables and their rates of change with no steady-state error. In addition, experimental results display fast convergence of the generalized contact force estimates and good tracking of the robot's configuration relative to ground-truth motion capture data.
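
To illustrate the sub-observer idea, here is a minimal discrete-time Luenberger-style observer whose gain is scheduled on chamber pressure; the two-state matrices, gains, and pressure threshold are invented stand-ins, not the paper's SPV model:

```python
import numpy as np

# Parameter-scheduled observer sketch: pick the (A, L) pair for the
# current chamber-pressure range, then apply a standard Luenberger
# update. All matrices and the threshold are illustrative stand-ins.
MODELS = {
    "low_pressure":  (np.array([[1.0, 0.01], [-0.5, 0.98]]),
                      np.array([[0.4], [0.2]])),
    "high_pressure": (np.array([[1.0, 0.01], [-0.9, 0.95]]),
                      np.array([[0.6], [0.3]])),
}
C = np.array([[1.0, 0.0]])  # only the first state is measured

def observer_step(x_hat, y, pressure):
    """x_hat[k+1] = A x_hat[k] + L * (y[k] - C x_hat[k]),
    with (A, L) switched on the scheduling parameter (pressure)."""
    A, L = MODELS["low_pressure" if pressure < 50.0 else "high_pressure"]
    innovation = y - (C @ x_hat).item()
    return A @ x_hat + L * innovation

x_hat = np.zeros((2, 1))
for y in (0.10, 0.12, 0.15):  # fake position measurements
    x_hat = observer_step(x_hat, y, pressure=42.0)
print(x_hat.ravel())
```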

A hybrid exoskeleton comprising a powered exoskeleton and functional electrical stimulation (FES) is a promising technology for restoring standing and walking functions after a neurological injury. Its shared control remains challenging due to the need to optimally distribute joint torques between FES and the powered exoskeleton while compensating for FES-induced muscle fatigue and ensuring performance despite highly nonlinear and uncertain skeletal muscle behavior. This study develops a bi-level hierarchical control design for shared control of a powered exoskeleton and FES to overcome these challenges. A higher-level neural network-based iterative learning controller (NNILC) is derived to generate the torques needed to drive the hybrid system. Then, a low-level model predictive control (MPC)-based allocation strategy optimally distributes the torque contributions between FES and the exoskeleton's knee motors based on the muscle fatigue and recovery characteristics of a participant's quadriceps muscles. A Lyapunov-like stability analysis proves global asymptotic tracking of state-dependent desired joint trajectories. Experimental results on four non-disabled participants validate the effectiveness of the proposed NNILC-MPC framework. The root mean square error (RMSE) of the knee and hip joint trajectories was reduced by 71.96% and 74.57%, respectively, in the fourth iteration compared to the first sit-to-stand iteration.
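
For readers unfamiliar with iterative learning control, the generic ILC update below captures the idea behind the higher-level controller: reuse the previous iteration's input and correct it with the previous tracking error. This is the textbook form, not necessarily the exact neural-network law derived in the paper:

```latex
% Generic ILC update: u is the feedforward input, e the tracking error,
% L a learning gain, q_d the desired and q_k the measured trajectory.
u_{k+1}(t) = u_k(t) + L\,e_k(t), \qquad e_k(t) = q_d(t) - q_k(t)
```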

Rapid developments in evolutionary computation, robotics, 3D printing, and materials science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question of how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination, and we suggest solutions for meaningful human control. Such concerns may seem far-fetched now; however, we posit that awareness must be created before the technology becomes mature.

The shape information of flexible endoscopes and other continuum structures, e.g., intravascular catheters, is needed for accurate navigation, motion compensation, and haptic feedback in robotic surgical systems. Existing methods rely on optical fiber sensors, electromagnetic sensors, or expensive medical imaging modalities such as X-ray fluoroscopy, magnetic resonance imaging, and ultrasound to obtain the shape of these flexible medical devices. Here, we propose to estimate the shape/curvature of a continuum structure by measuring the force required to insert a flexible shaft into the internal channel/pathway of the continuum. We found that there is a consistent correlation between the measured insertion force and the curvature of a planar continuum pathway. A testbed was built to insert a flexible shaft into a planar continuum pathway with adjustable shapes. The insertion forces, insertion displacement, and shapes of the pathway were recorded. A neural network model was developed to capture this correlation based on training data collected on the testbed. The trained model, evaluated on held-out testing data, can accurately estimate the curvature magnitudes and the accumulated bending angles of the pathway based solely on the insertion force measured at the proximal end of the shaft. The approach may be used to estimate the curvature magnitudes and accumulated bending angles of flexible endoscopic surgical robots or catheters for accurate motion compensation, haptic force feedback, localization, or navigation. The advantage of this approach is that the proximal force can be easily measured outside the pathway or continuum structure, without any sensor embedded in the continuum structure. Future work is needed to further investigate the correlation between insertion forces and the pathway and to extend the model to estimating more complex shapes, e.g., spatial shapes with multiple bends.
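
As a sketch of the learning setup described above, here is how such a force-to-curvature regressor might be fitted with an off-the-shelf MLP; the synthetic force profiles and network size are assumptions for illustration, not the paper's dataset or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: map an insertion-force profile (force sampled at N insertion
# depths) to the pathway's curvature magnitude. Synthetic data only.
rng = np.random.default_rng(0)
N_SAMPLES, N_DEPTHS = 500, 50

curvature = rng.uniform(0.0, 20.0, N_SAMPLES)   # 1/m (assumed range)
depths = np.linspace(0.0, 1.0, N_DEPTHS)
# Toy physics: friction force grows with curvature and depth, plus noise.
forces = (curvature[:, None] * depths[None, :] * 0.3
          + rng.normal(0.0, 0.05, (N_SAMPLES, N_DEPTHS)))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0)
model.fit(forces[:400], curvature[:400])        # train split
print("R^2 on held-out profiles:",
      model.score(forces[400:], curvature[400:]))
```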
