Feed aggregator

As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.

The economic stakes are high: the Commission predicts that European public and private investment in AI will reach €20 billion a year this decade, and that was before up to an additional €134 billion was earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission presumes will fund AI, too. Add to that investments in AI made outside the EU but targeting EU residents, since these rules will apply to any use of AI in the EU, not just to EU-based companies or governments.

Things aren’t going to change overnight: the EU’s AI rules proposal is the result of three years of work by bureaucrats, industry experts, and public consultations and must go through the European Parliament—which requested it—before it can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes. 

The proposal defines four tiers of AI-related activity, with a different level of oversight for each. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with specific exceptions granted by national laws and subject to stricter logging and human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e. facial recognition. The proposal also bans subliminal behavior modification and social scoring applications. The proposal suggests fines of up to 6 percent of commercial violators’ global annual revenue.

The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and account for human oversight. The EU would create a central registry of such systems under the proposed rules and require approval before deployment.

Limited-risk activities, such as the use of chatbots or deepfakes on a website, would face less oversight but would require a warning label, allowing users to opt in or out. Finally, there is a tier for applications judged to present minimal risk.
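To make the four-tier structure concrete, here is a toy sketch in Python. The tier names follow the proposal, but the example use cases and the one-line obligation summaries are our paraphrase for illustration, not legal text; the actual proposal assigns tiers by a system's purpose and the potential and probability of harm, not by keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright in public spaces
    HIGH = "high"                  # registered and approved before deployment
    LIMITED = "limited"            # transparency duties (warning label)
    MINIMAL = "minimal"            # no new obligations

# Illustrative examples only, paraphrasing the proposal's own examples.
EXAMPLE_TIERS = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "job recruiting": RiskTier.HIGH,
    "credit checks": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited, with narrow legally defined exceptions",
    RiskTier.HIGH: "quality datasets, traceability, human oversight, registration",
    RiskTier.LIMITED: "disclosure so users can opt in or out",
    RiskTier.MINIMAL: "no additional requirements",
}

def obligations(use_case: str) -> str:
    # Unknown uses default to minimal risk here purely for the sketch.
    return OBLIGATIONS[EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)]
```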

As often happens when governments propose dense new rulebooks (this one is 108 pages), the initial reactions from industry and civil society groups seem to be more about the existence and reach of industry oversight than the specific content of the rules. One tech-funded think tank told the Wall Street Journal that it could become “infeasible to build AI in Europe.” In turn, privacy-focused civil society groups such as European Digital Rights (EDRi) said in a statement that the “regulation allows too wide a scope for self-regulation by companies.”

“I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,” says Lisa-Maria Neudert, AI governance researcher at the University of Oxford, England, and the Weizenbaum Institut in Berlin, Germany. A 2019 Lloyds Register Foundation poll found that the global public is about evenly split between fear and excitement about AI. 

“I can imagine it might help if you have an experienced large legal team,” to help with compliance, Neudert says, and it may be “a difficult balance to strike” between rules that remain startup-friendly and succeed in reining in mega-corporations.

AI researchers Mona Sloane and Andrea Renda write in VentureBeat that the rules are weaker on monitoring of how AI plays out after approval and launch, neglecting “a crucial feature of AI-related risk: that it is pervasive, and it is emergent, often evolving in unpredictable ways after it has been developed and deployed.”

Europe has already been learning from the impact its sweeping 2018 General Data Protection Regulation (GDPR) had on global tech and privacy. Yes, some outside websites still serve Europeans a page telling them the website owners can’t be bothered to comply with GDPR, so Europeans can’t see any content. But most have found a way to adapt in order to reach this unified market of 448 million people.

“I don’t think we should generalize [from GDPR to the proposed AI rules], but it’s fair to assume that such a big piece of legislation will have effects beyond the EU,” Neudert says. It will be easier for legislators in other places to follow a template than to replicate the EU’s heavy investment in research, community engagement, and rule-writing.

While tech companies and their industry groups may grumble about the need to comply with the incipient AI rules, Register columnist Rupert Goodwin suggests they’d be better off focusing on forming the industry groups that will shape the implementation and enforcement of the rules in the future: “You may already be in one of the industry organizations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.”

This paper adds to ongoing efforts to provide more autonomy to space robots and introduces the concept of programming by demonstration, or imitation learning, for trajectory planning of manipulators on free-floating spacecraft. A redundant 7-DoF robotic arm is mounted on a small spacecraft dedicated to debris removal, on-orbit servicing and assembly, and autonomous rendezvous and docking. The motion of the manipulator arm induces reaction forces on the spacecraft, changing its attitude and prompting the Attitude Determination and Control System (ADCS) to take large corrective action. The method introduced here finds the trajectory that minimizes these attitude changes, thereby reducing the load on the ADCS. Power consumption is another critical element in spacecraft trajectory planning and control. The approach introduced in this work carries out trajectory learning offline by collecting data from demonstrations and encoding it as a probabilistic distribution of trajectories. The learned trajectory distribution can then be used for planning in previously unseen situations by conditioning the distribution, so almost no power is required for computation after deployment. Sampling from a conditioned distribution provides several possible trajectories between the same start and goal states. To determine the trajectory that minimizes attitude changes, a cost term is defined, and the trajectory that minimizes this cost is considered optimal.
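The condition-then-sample-then-rank pipeline the abstract describes can be sketched with a ProMP-style encoding, where trajectories are weights of basis functions with a Gaussian distribution learned from demonstrations. Everything below (the RBF basis, the placeholder mean and covariance, and the path-variation stand-in for the attitude-disturbance cost) is our assumption, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                       # number of radial basis functions per joint
mu = rng.normal(size=D)     # stand-in for a demonstration-learned mean
Sigma = np.eye(D) * 0.1     # stand-in for a learned covariance

def basis(t, D=D):
    """Normalized RBF features over phase t in [0, 1]."""
    centers = np.linspace(0, 1, D)
    phi = np.exp(-((t - centers) ** 2) / (2 * 0.05))
    return phi / phi.sum()

def condition(mu, Sigma, t, y, noise=1e-6):
    """Condition the weight distribution on passing through y at phase t."""
    phi = basis(t)
    k = Sigma @ phi / (phi @ Sigma @ phi + noise)   # Kalman-style gain
    mu_new = mu + k * (y - phi @ mu)
    Sigma_new = Sigma - np.outer(k, phi @ Sigma)
    return mu_new, Sigma_new

# Condition on a new start and goal state, then sample candidate trajectories.
mu_c, Sigma_c = condition(mu, Sigma, 0.0, 0.2)      # start
mu_c, Sigma_c = condition(mu_c, Sigma_c, 1.0, 1.5)  # goal

T = np.linspace(0, 1, 50)
samples = rng.multivariate_normal(mu_c, Sigma_c, size=20)
trajs = np.array([[basis(t) @ w for t in T] for w in samples])

# Rank samples by a cost and keep the cheapest; total path variation stands in
# for the paper's attitude-disturbance cost, which involves reaction dynamics.
costs = np.abs(np.diff(trajs, axis=1)).sum(axis=1)
best = trajs[np.argmin(costs)]
```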

Model-based optimal control of soft robots may enable compliant, underdamped platforms to operate in a repeatable fashion and effectively accomplish tasks that are otherwise impossible for soft robots. Unfortunately, developing accurate analytical dynamic models for soft robots is time-consuming, difficult, and error-prone. Deep learning presents an alternative modeling approach that requires only a time history of system inputs and system states, which can be easily measured or estimated. However, fully relying on empirical or learned models involves collecting large amounts of representative data from a soft robot in order to model the complex state space, a task that may not be feasible in many situations. Furthermore, the exclusive use of empirical models for model-based control can be dangerous if the model does not generalize well. To address these challenges, we propose a hybrid modeling approach that combines machine learning methods with an existing first-principles model in order to improve overall performance for a sampling-based non-linear model predictive controller. We validate this approach on a soft robot platform and demonstrate that performance improves by 52% on average when employing the combined model.
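The hybrid composition described above, a first-principles prediction plus a learned correction of its residual, can be sketched minimally. The toy linear "physics" model, the linear least-squares "learned" part (the paper uses a deep network), and the fabricated demo dynamics are all our assumptions; only the composition x_next = f_physics + f_learned reflects the abstract.

```python
import numpy as np

def physics_model(x, u, dt=0.01):
    """Crude damped linear model standing in for an analytical soft-robot model."""
    A = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.1 * dt]])
    B = np.array([0.0, dt])
    return A @ x + B * u

class HybridModel:
    """Physics prediction plus a correction fit to its residual error."""
    def __init__(self):
        self.W = None

    def fit(self, X, U, X_next):
        # Fit the correction to whatever the physics model gets wrong.
        phys = np.array([physics_model(x, u) for x, u in zip(X, U)])
        features = np.hstack([X, U[:, None]])               # (N, 3)
        self.W, *_ = np.linalg.lstsq(features, X_next - phys, rcond=None)

    def predict(self, x, u):
        correction = np.hstack([x, u]) @ self.W if self.W is not None else 0.0
        return physics_model(x, u) + correction

# Demo: the "true" system has an unmodeled stiffness term the physics model misses.
def true_model(x, u):
    return physics_model(x, u) + np.array([0.0, 0.05 * x[0]])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
U = rng.uniform(-1, 1, size=200)
X_next = np.array([true_model(x, u) for x, u in zip(X, U)])

model = HybridModel()
model.fit(X, U, X_next)   # hybrid model now corrects the physics error
```

Inside a sampling-based MPC loop, `model.predict` would simply replace the pure physics rollout when scoring candidate control sequences.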

The autonomous vehicle (AV) is one of the first commercialized AI-embedded robots to make autonomous decisions. Despite technological advancements, unavoidable AV accidents that result in life-and-death consequences cannot be completely eliminated. The emerging social concern of how an AV should make ethical decisions during unavoidable accidents is referred to as the moral dilemma of AV, which has prompted heated discussions among various stakeholders. However, research gaps remain in explainable AV ethical decision-making processes that can predict which AV moral behaviors are acceptable from the AV users’ perspectives. This study addresses the key question: What factors affect ethical behavioral intentions in the AV moral dilemma? To answer this question, this study draws theories from multidisciplinary research fields to propose the “Integrative ethical decision-making framework for the AV moral dilemma.” The framework includes four interdependent ethical decision-making stages: AV moral dilemma issue framing, intuitive moral reasoning, rational moral reasoning, and ethical behavioral intention making. Further, the framework includes variables (e.g., perceived moral intensity, individual factors, and personal moral philosophies) that influence the ethical decision-making process. For instance, the framework explains that AV users from Eastern cultures will tend to endorse a situationist ethics position (high idealism and high relativism), which views ethical decisions as relative to context, compared to AV users from Western cultures. This proposition is derived from the link between individual factors and personal moral philosophy. Moreover, the framework proposes a dual-process theory, which explains that both intuitive and rational moral reasoning are integral processes of ethical decision-making during the AV moral dilemma. 
Further, this framework describes that ethical behavioral intentions that lead to decisions in the AV moral dilemma are not fixed, but are based on how an individual perceives the seriousness of the situation, which is shaped by their personal moral philosophy. This framework provides a step-by-step explanation of how pluralistic ethical decision-making occurs, reducing the abstractness of AV moral reasoning processes.

The Ingenuity Mars Helicopter has been doing an amazing job flying on Mars. Over the last several weeks it has far surpassed its original goal of proving that flight on Mars was simply possible, and is now showing how such flights are not only practical but also useful.

To that end, NASA has decided that the little helicopter deserves to not freeze to death quite so soon, and the agency has extended its mission for at least another month, giving it the opportunity to scout a new landing site to keep up with Perseverance as the rover starts its own science mission.

Some quick context: the Mars Helicopter mission was originally scheduled to last 30 days, and we’re currently a few weeks into that. The helicopter has flown successfully four times; the most recent flight was on April 30, and was a 266 meter round-trip at 5 meters altitude that took 117 seconds. Everything has worked nearly flawlessly, with (as far as we know) the only hiccup being a minor software bug that has a small chance of preventing the helicopter from entering flight mode. This bug has kicked in once, but JPL just tried doing the flight again, and then everything was fine. 

In a press conference last week, NASA characterized Ingenuity’s technical performance as “exceeding all expectations,” and the helicopter met all of its technical goals (and then some) earlier than anyone expected. Originally, that wouldn’t have made a difference, and Perseverance would have driven off and left Ingenuity behind no matter how well it was performing. But some things have changed, allowing Ingenuity to transition from a tech demo into an extended operational demo, as Jennifer Trosper, Perseverance deputy project manager, explained:

“We had not originally planned to do this operational demo with the helicopter, but two things have happened that have enabled us to do it. The first thing is that originally, we thought that we’d be driving away from the location that we landed at, but the [Perseverance] science team is actually really interested in getting initial samples from this region that we’re in right now. Another thing that happened is that the helicopter is operating in a fantastic way. The communications link is overperforming, and even if we move farther away, we believe that the rover and the helicopter will still have strong communications, and we’ll be able to continue the operational demo.”

The communications link was one of the original reasons why Ingenuity’s mission was going to be capped at 30 days. It’s a little bit counter-intuitive, but it turns out that the helicopter simply cannot keep up with the rover, which Ingenuity relies on for communication with Earth. Ingenuity is obviously faster in flight, but once you factor in recharge time, if the rover is driving a substantial distance, the helicopter would not be able to stay within communications range.

And there’s another issue with the communications link: as a tech demo, Ingenuity’s communication system wasn’t tested to make sure that it can’t be disrupted by electronic interference generated by other bits and pieces of the Perseverance rover. Consequently, Ingenuity’s 30-day mission was planned such that when the helicopter was in the air, Perseverance was perfectly stationary. This is why we don’t have video where Perseverance pans its cameras to follow the helicopter—using those actuators might have disrupted the communications link.

Going forward, Perseverance will be the priority, not Ingenuity. The helicopter will have to do its best to stay in contact with the rover as it starts focusing on its own science mission. Ingenuity will have to stay in range (within a kilometer or so) and communicate when it can, even if the rover is busy doing other stuff. This extended mission will initially last 30 more days, and if it turns out that Ingenuity can’t do what it needs to do without needing more from Perseverance, well, that’ll be the end of the Mars helicopter mission. Even best case, it sounds like we won’t be getting any more pictures of Ingenuity in flight, since planning that kind of stuff took up a lot of the rover’s time. 

With all that in mind, here’s what NASA says we should be expecting:

“With short drives expected for Perseverance in the near term, Ingenuity may execute flights that land near the rover’s current location or its next anticipated parking spot. The helicopter can use these opportunities to perform aerial observations of rover science targets, potential rover routes, and inaccessible features while also capturing stereo images for digital elevation maps. The lessons learned from these efforts will provide significant benefit to future mission planners. These scouting flights are a bonus and not a requirement for Perseverance to complete its science mission.

The cadence of flights during Ingenuity’s operations demonstration phase will slow from once every few days to about once every two or three weeks, and the forays will be scheduled to avoid interfering with Perseverance’s science operations. The team will assess flight operations after 30 sols and will complete flight operations no later than the end of August.”

Specifically, Ingenuity spent its recent Flight 4 scouting for a new airfield to land at, and Flight 5 will be the first flight of this new operations phase: the helicopter will attempt to land at that new airfield, a spot about 60 meters south of its current position where it has never touched down before. NASA expects that there might be one or two flights after this, but nobody’s quite sure how it’s going to go, and NASA wasn’t willing to speculate about what’ll happen longer term.

It’s important to remember that all of this is happening in the context of Ingenuity being a 30-day tech demo. The hardware on the helicopter was designed with that length of time in mind, not a multi-month mission. NASA said during their press conference that the landing gear is probably good for at least 100 landings, and the solar panel and sun angle will be able to meet energy requirements for at least a few months. The expectation is that with enough day/night thermal cycles, a solder joint will snap, rendering Ingenuity inoperable in some way. Nobody knows when that’ll happen, but again, this is a piece of hardware designed to function for 30 days, and despite JPL’s legacy of ridiculously long-lived robotic explorers, we should adjust our expectations accordingly. MiMi Aung, Mars Helicopter Project Manager, has it exactly right when she says that “we will be celebrating each day that Ingenuity survives and operates beyond that original window.” We’re just glad that there will be more to celebrate going forward. 

In this paper, we design and develop a novel robotic bronchoscope for sampling of the distal lung in mechanically-ventilated (MV) patients in critical care units. Despite the high cost and an attributable morbidity and mortality approaching 40% among MV patients with pneumonia, sampling of the distal lung in MV patients suffering from a range of lung diseases such as Covid-19 is not standardised, lacks reproducibility, and requires expert operators. We propose a robotic bronchoscope that enables repeatable sampling and guidance to distal lung pathologies by overcoming significant challenges that are encountered whilst performing bronchoscopy in MV patients, namely, limited dexterity, large size of the bronchoscope obstructing ventilation, and poor anatomical registration. We have developed a robotic bronchoscope with 7 Degrees of Freedom (DoFs), an outer diameter of 4.5 mm and an inner working channel of 2 mm. The prototype is a push/pull actuated continuum robot capable of dexterous manipulation inside the lung and visualisation/sampling of the distal airways. A prototype of the robot is engineered and a mechanics-based model of the robotic bronchoscope is developed. Furthermore, we develop a novel numerical solver that improves the computational efficiency of the model and facilitates the deployment of the robot. Experiments are performed to verify the design and evaluate the accuracy and computational cost of the model. Results demonstrate that the model can predict the shape of the robot in <0.011 s with a mean error of 1.76 cm, enabling the future deployment of a robotic bronchoscope in MV patients.
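The abstract does not give the mechanics-based model itself. For intuition, here is a sketch of the simplest common shape model for continuum robots like this bronchoscope, piecewise constant curvature (PCC), which maps per-segment curvatures to a 2D backbone curve. PCC is our simplifying assumption; the paper's model and solver are more detailed.

```python
import numpy as np

def pcc_backbone(curvatures, seg_len=0.05, n_per_seg=20):
    """Backbone points of a planar continuum robot under the PCC assumption.

    Each segment is a circular arc of constant curvature kappa; the base
    frame (position and tangent angle) is advanced segment by segment.
    """
    pts = []
    base = np.zeros(2)
    theta = 0.0
    for kappa in curvatures:
        c, s_ = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s_], [s_, c]])   # rotate into the segment frame
        for s in np.linspace(0.0, seg_len, n_per_seg):
            if abs(kappa) < 1e-9:
                local = np.array([s, 0.0])  # straight-segment limit
            else:
                local = np.array([np.sin(kappa * s) / kappa,
                                  (1.0 - np.cos(kappa * s)) / kappa])
            pts.append(base + R @ local)
        base = pts[-1]                      # advance base to segment tip
        theta += kappa * seg_len
    return np.array(pts)
```

Fitting such a model's predicted backbone against a measured shape is one way the reported mean shape error could be evaluated.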

A unified method for designing the motion of a snake robot negotiating complicated pipe structures is presented. Such robots moving inside pipes must deal with various “obstacles,” such as junctions, bends, diameter changes, shears, and blockages. To surmount these obstacles, we propose a method that enables the robot to adapt to multiple pipe structures in a unified way. This method also applies to motion that is necessary to pass between the inside and the outside of a pipe. We designed the target form of the snake robot using two helices connected by an arbitrary shape. This method can be applied to various obstacles by designing a part of the target form specifically for given obstacles. The robot negotiates obstacles under shift control by employing a rolling motion. Considering the slip between the robot and the pipe, the model expands the method to cover cases where two helices have different properties. We demonstrated the effectiveness of the proposed method in various experiments.
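The "two helices connected by an arbitrary shape" target form can be sketched geometrically. The helix parameterization is standard; the straight-line bridge between the two helices is our simplification, whereas the paper designs that connecting shape per obstacle (junction, bend, diameter change, and so on).

```python
import numpy as np

def helix(radius, pitch, turns, n=200, phase=0.0):
    """Points along a helix, e.g. a snake robot bracing inside a pipe."""
    t = np.linspace(0, 2 * np.pi * turns, n)
    return np.stack([radius * np.cos(t + phase),
                     radius * np.sin(t + phase),
                     pitch * t / (2 * np.pi)], axis=1)

def target_form(r1, r2, pitch, turns, n_bridge=50):
    """Two helices (possibly different radii, as across a diameter change)
    joined by a linearly interpolated bridge segment."""
    h1 = helix(r1, pitch, turns)
    h2 = helix(r2, pitch, turns)
    # Translate the second helix so it continues above the first.
    h2 = h2 + (h1[-1] - h2[0]) + np.array([0.0, 0.0, pitch])
    # Bridge: interpolate between the end of h1 and the start of h2.
    s = np.linspace(0, 1, n_bridge)[:, None]
    bridge = (1 - s) * h1[-1] + s * h2[0]
    return np.vstack([h1, bridge, h2])
```

Shifting the robot's body along a form like this while adding a rolling motion is the kind of shift control the method uses to carry the robot past an obstacle.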

Soft materials are inherently flexible and make suitable candidates for soft robots intended for specific tasks that would otherwise not be achievable (e.g., smart grips capable of picking up objects without prior knowledge of their stiffness). Moreover, soft robots exploit the mechanics of their fundamental building blocks and aim to provide targeted functionality without the use of electronics or wiring. Despite recent progress, locomotion in soft robotics applications has remained a relatively young field with open challenges yet to overcome. Accordingly, harnessing structural instabilities and utilizing bistable actuators have gained importance as a solution. This report focuses on substrate-free reconfigurable structures composed of multistable unit cells with a nonconvex strain energy potential, which can exhibit structural transitions and produce strongly nonlinear transition waves. The energy released during the transition, if sufficient, balances the dissipation and kinetic energy of the system and forms a wave front that travels through the structure to effect its permanent or reversible reconfiguration. We exploit a triangular unit cell’s design space and provide general guidelines for unit cell selection. Using a continuum description, we predict and map the resulting structure’s behavior for various geometric and material properties. The structural motion created by these strongly nonlinear metamaterials has potential applications in propulsion in soft robotics, morphing surfaces, reconfigurable devices, mechanical logic, and controlled energy absorption.
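The key ingredient, a nonconvex strain-energy potential whose snap-through releases energy, can be illustrated with a minimal double-well polynomial. The quartic form and its coefficients below are generic illustrations, not the paper's triangular-cell energy.

```python
import numpy as np

# Double-well (nonconvex) strain energy in the cell's deformation coordinate x.
A, B, C = 1.0, 2.2, 1.0   # illustrative coefficients

def strain_energy(x):
    """V(x) = A x^2/2 - B x^3/3 + C x^4/4, with two stable wells."""
    return A * x**2 / 2 - B * x**3 / 3 + C * x**4 / 4

def force(x):
    """Restoring force -dV/dx."""
    return -(A * x - B * x**2 + C * x**3)

# Equilibria: force = 0 at x = 0 and at the roots of C x^2 - B x + A = 0.
disc = np.sqrt(B**2 - 4 * A * C)
x_barrier = (B - disc) / (2 * C)   # local maximum (energy barrier)
x_second = (B + disc) / (2 * C)    # second stable well

# Energy released by snapping from the first well (x = 0) to the second; if it
# exceeds dissipation, it can sustain a transition wave through the structure.
released = strain_energy(0.0) - strain_energy(x_second)
```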

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30 – June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.

Each one of these is custom-built, and you can pre-order one now.

[ Roam Robotics ]

Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.

[ NASA ]

This right here, the future of remote work.

The robot will run you about $3,000 USD.

[ VStone ] via [ Robotstart ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.

Each one of these is custom-built, and you can pre-order one now.

[ Roam Robotics ]

Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.

[ NASA ]

This right here, the future of remote work.

The robot will run you about $3,000 USD.

[ VStone ] via [ Robotstart ]

Texas-based aerospace robotics company Wilder Systems has enhanced its existing automation capabilities to aid in the fight against COVID-19. Its recently developed robotic testing system both increases capacity for COVID-19 testing and delivers results to individuals faster. The system runs saliva-based PCR tests, considered the gold standard for COVID-19 testing. Because it follows a protocol developed by Yale and authorized by the FDA, the system needs no additional approvals. The flexible, modular system can run up to 2,000 test samples per day and can be deployed anywhere standard electric power is available.

[ ARM Institute ]

Tests show that people do not like being nearly hit by drones.

But seriously, this research has yielded some useful lessons for deploying drones in areas where they are likely to interact with humans.

[ Paper ]

The Ingenuity helicopter made history on April 19, 2021, with the first powered, controlled flight of an aircraft on another planet. How do engineers talk to a helicopter all the way out on Mars? We’ll hear about it from Nacer Chahat of NASA’s Jet Propulsion Laboratory, who worked on the helicopter’s antenna and telecommunication system.

[ NASA ]

A team of scientists from the Max Planck Institute for Intelligent Systems has developed a system for fabricating miniature robots building block by building block, so that each robot functions exactly as required.

[ Max Planck Institute ]

Well, this was inevitable, wasn't it?

The pilot regained control and the drone was fine, though.

[ PetaPixel ]

NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on April 25, 2021, by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. As expected, the helicopter flew out of the camera's field of view while completing a flight plan that took it 164 feet (50 meters) downrange of the landing spot. Keep watching: the helicopter returns to stick the landing. Top speed for today's flight was about 2 meters per second, or about 4.5 miles per hour.

[ NASA ]

U.S. Naval Research Laboratory engineers recently demonstrated Hybrid Tiger, an electric unmanned aerial vehicle (UAV) with multi-day endurance flight capability, at Aberdeen Proving Ground, Maryland.

[ NRL ]

This week's CMU RI Seminar is by Avik De from Ghost Robotics, on “Design and control of insect-scale bees and dog-scale quadrupeds.”

Did you watch the Q&A? If not, you should watch the Q&A.

[ CMU ]

Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. What does it take to make drones navigate as well as, or even better than, human pilots?

[ GRASP Lab ]

With the current pandemic accelerating the AI revolution in healthcare, where is the industry heading in the next 5-10 years? What are the key challenges and most exciting opportunities? HAI Co-Director Fei-Fei Li and DeepLearning.AI founder Andrew Ng answer these questions in this virtual fireside chat.

[ Stanford HAI ]

Autonomous robots have the potential to serve as versatile caregivers that improve quality of life for millions of people with disabilities worldwide. Yet, physical robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and a lack of tools for benchmarking and training physically assistive robots. In this talk, I will present techniques towards addressing each of these core challenges in robotic caregiving.

[ GRASP Lab ]

What does it take to empower persons with disabilities, and why is educating ourselves on this topic the first step toward better inclusion? Why is developing assistive technologies for people with disabilities important for their integration into society? How do we implement the policies and actions required to enable everyone to live their lives fully? ETH Zurich and the Global Shapers Zurich Hub hosted an online dialogue on the topic “For a World without Barriers: Removing Obstacles in Daily Life for People with Disabilities.”

[ Cybathlon ]

We propose a fault-tolerant estimation technique for the six-DoF pose of a tendon-driven continuum mechanism using machine learning. In contrast to previous estimation techniques, no deformation model is required; the pose is instead predicted with polynomial regression. Because only a few data points are required for the regression, several estimators are trained with structured occlusions of the available sensor information and clustered into ensembles based on the available sensors. By computing the variance within one ensemble, the uncertainty in the prediction is monitored, and if the variance exceeds a threshold, sensor loss is detected and handled. Experiments on the humanoid neck of the DLR robot DAVID demonstrate that the accuracy of the predicted pose is significantly improved, and that a reliable prediction can still be performed using only 3 of the 8 sensors.
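The core idea, an ensemble of occlusion-trained regressors whose disagreement flags sensor loss, can be sketched as follows. This is a toy illustration on synthetic data with leave-one-sensor-out degree-2 polynomial regressors and a single pose coordinate; the feature degree, occlusion structure, threshold, and all data below are assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 sensor channels, one pose coordinate
# (the paper predicts the full six-DoF pose; one axis keeps this short).
n_samples, n_sensors = 200, 8
coef = np.linspace(0.5, 1.2, n_sensors)          # invented ground truth
X = rng.uniform(-1.0, 1.0, (n_samples, n_sensors))
y = X @ coef + 0.1 * (X ** 2).sum(axis=1)

def poly_features(x):
    # Degree-2 polynomial features: bias, linear, and squared terms.
    return np.hstack([np.ones((x.shape[0], 1)), x, x ** 2])

def fit(x, t):
    # Least-squares fit of the polynomial regression weights.
    return np.linalg.lstsq(poly_features(x), t, rcond=None)[0]

# One estimator per structured occlusion: estimator i never sees
# sensor i, so it keeps working if that sensor fails.
subsets = [np.delete(np.arange(n_sensors), i) for i in range(n_sensors)]
weights = [fit(X[:, s], y) for s in subsets]

def ensemble_predict(x_row):
    # One prediction per estimator; their spread is the uncertainty.
    return np.array([(poly_features(x_row[s][None, :]) @ w)[0]
                     for s, w in zip(subsets, weights)])

TAU = 3.0  # assumed variance threshold for declaring a sensor fault

x_healthy = X[0]
preds_healthy = ensemble_predict(x_healthy)

x_faulty = X[0].copy()
x_faulty[3] = 50.0        # sensor 3 fails with an absurd reading
preds_faulty = ensemble_predict(x_faulty)

# The ensemble spread jumps when a sensor fails, while the estimator
# trained without sensor 3 still predicts correctly.
fault_detected = preds_faulty.std() > TAU
```

Note that `preds_faulty[3]` (from the estimator that never sees sensor 3) is identical to `preds_healthy[3]`, which is what makes the handling of sensor loss possible after detection.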

Acting, stand-up, and dancing are creative, embodied performances that nonetheless follow a script. Unless a performance is experimental or improvised, its performers draw their movements from much the same stock of embodied schemas. Slavishly following the script leaves no room for creativity, but actively interpreting it does: it is the choices one makes, of words and actions, that make a performance creative. In this theory and hypothesis article, we present a framework for performance and interpretation within robotic storytelling. The performance framework is built upon movement theory and defines a taxonomy of basic schematic movements and the most important gesture types. For the interpretation framework, we hypothesise that emotionally grounded choices can inform acts of metaphor and blending, elevating a scripted performance into a creative one. Theory and hypothesis are each grounded in empirical research and aim to provide resources for other robotic studies of the creative use of movement and gestures.

Composite materials have long been developed to improve mechanical properties such as strength and toughness. Most composites are non-stretchable, which hinders their application in soft robotics. Recent papers have reported a new design of unidirectional soft composite with superior stretchability and toughness. This paper presents an analytical model of the toughening mechanism of such a composite. We use the Gent model to characterize the large deformation of the composite's hard and soft phases. We analyze how stress transfer between the phases deconcentrates stress at the crack tip and enhances toughness. We identify two failure modes: rupture of the hard phase and interfacial debonding. We calculate the average toughness of the composite for different physical and geometric parameters. Experimental results in the literature agree well with our theoretical predictions.
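For readers unfamiliar with it, the Gent model mentioned above caps the first strain invariant. A common statement of its strain-energy density (a standard textbook form, not taken from this paper) is:

```latex
W(I_1) = -\frac{\mu J_m}{2}\,\ln\!\left(1 - \frac{I_1 - 3}{J_m}\right),
\qquad I_1 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2,
```

where \(\mu\) is the small-strain shear modulus, \(\lambda_i\) are the principal stretches, and \(J_m\) is the limiting value of \(I_1 - 3\). As \(I_1 - 3\) approaches \(J_m\), the stress diverges, which captures the strain-stiffening of a phase at large stretch.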

As the elderly population increases, the caregiver’s role has become ever more important to the quality of life of the elderly. To achieve effective feedback in care and nursing education, it is important to design a robot that can express emotions or feel pain like an actual human through visual feedback. This study proposes a care training assistant robot (CaTARo) system with 3D facial pain expression that simulates an elderly person, for improving the skills of elderly-care workers. First, to develop an accurate and efficient system for elderly care training, this study introduces a fuzzy logic–based care training evaluation method that calculates the robot's pain level as feedback. Elderly caregivers and trainees performed a range-of-motion exercise using the proposed CaTARo. We obtained quantitative data from CaTARo and calculated the pain level by combining four key parameters using the fuzzy logic method. Second, we developed a 3D facial avatar for CaTARo that is capable of expressing pain, based on the UNBC-McMaster Shoulder Pain Archive, and generated four pain groups with respect to pain level. To mimic the conditions of care training with actual humans, we designed the system to provide pain feedback based on the opinions of experts. The pain feedback was expressed in real time using a projector and a 3D facial mask during care training. The results confirmed the feasibility of a care training robot with pain expression for elderly care training, and we conclude that the proposed approach may, with further research, improve caregiving and nursing skills.

Due to the decentralized, loosely coupled nature of a swarm and to the lack of a general design methodology, the development of control software for robot swarms is typically an iterative process. Control software is generally modified and refined repeatedly, either manually or automatically, until satisfactory results are obtained. In this paper, we propose a technique based on off-policy evaluation to estimate how the performance of an instance of control software—implemented as a probabilistic finite-state machine—would be impacted by modifying the structure and the value of the parameters. The proposed technique is particularly appealing when coupled with automatic design methods belonging to the AutoMoDe family, as it can exploit the data generated during the design process. The technique can be used either to reduce the complexity of the control software generated, improving therefore its readability, or to evaluate perturbations of the parameters, which could help in prioritizing the exploration of the neighborhood of the current solution within an iterative improvement algorithm. To evaluate the technique, we apply it to control software generated with an AutoMoDe method, Chocolate−6S . In a first experiment, we use the proposed technique to estimate the impact of removing a state from a probabilistic finite-state machine. In a second experiment, we use it to predict the impact of changing the value of the parameters. The results show that the technique is promising and significantly better than a naive estimation. We discuss the limitations of the current implementation of the technique, and we sketch possible improvements, extensions, and generalizations.
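As a generic illustration of the off-policy idea, estimating how a parameter change would perform using only data logged under the current controller, here is a self-normalized importance-sampling sketch on a two-state toy controller. The dynamics, rewards, policies, and numbers are all invented; the paper's estimator for probabilistic finite-state machines is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy probabilistic controller: in state s the robot takes action 1 with
# probability policy[s]; the action becomes the next state, and reward 1
# is earned when the action matches the state. All of this structure is
# invented -- AutoMoDe's PFSMs and swarm tasks are far richer.

def run_episode(policy, steps=8):
    s, total, log = 0, 0.0, []
    for _ in range(steps):
        a = int(rng.random() < policy[s])
        total += 1.0 if a == s else 0.0
        log.append((s, a))
        s = a
    return total, log

def prob(policy, s, a):
    return policy[s] if a == 1 else 1.0 - policy[s]

def off_policy_estimate(episodes, behavior, target):
    # Self-normalized importance sampling: reweight returns logged under
    # `behavior` to estimate the expected return of the modified `target`
    # parameters, without ever running `target`.
    num = den = 0.0
    for ret, log in episodes:
        w = 1.0
        for s, a in log:
            w *= prob(target, s, a) / prob(behavior, s, a)
        num += w * ret
        den += w
    return num / den

behavior = {0: 0.5, 1: 0.5}   # controller that generated the design data
target = {0: 0.3, 1: 0.7}     # perturbed parameters we want to evaluate
episodes = [run_episode(behavior) for _ in range(2000)]

is_estimate = off_policy_estimate(episodes, behavior, target)
# Ground truth for comparison only; the whole point is to avoid this run.
direct = float(np.mean([run_episode(target)[0] for _ in range(2000)]))
```

The estimate tracks the directly measured return of the perturbed controller, which is how such a technique can prioritize which neighboring parameter settings are worth actually evaluating.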

Robot grasping in unstructured and dynamic environments is heavily dependent on the object attributes. Although Deep Learning approaches have delivered exceptional performance in robot perception, human perception and reasoning are still superior in processing novel object classes. Furthermore, training such models requires large, difficult to obtain datasets. This work combines crowdsourcing and gamification to leverage human intelligence, enhancing the object recognition and attribute estimation processes of robot grasping. The framework employs an attribute matching system that encodes visual information into an online puzzle game, utilizing the collective intelligence of players to expand the attribute database and react to real-time perception conflicts. The framework is deployed and evaluated in two proof-of-concept applications: enhancing the control of a robotic exoskeleton glove and improving object identification for autonomous robot grasping. In addition, a model for estimating the framework response time is proposed. The obtained results demonstrate that the framework is capable of rapid adaptation to novel object classes, based purely on visual information and human experience.

In this study, we discovered a phenomenon in which a quadruped robot with no sensors or microprocessor autonomously generates various animal gait patterns using its actuator characteristics alone, and selects gaits according to speed. The robot has one DC motor on each limb, with a slider-crank mechanism connected to the motor shaft. Since each motor is directly connected to a power supply, the robot simply moves each foot along an elliptical trajectory under a constant voltage. Although the robot has no computational equipment such as sensors or microprocessors, when we applied a voltage to the motors, each limb began to adjust its gait autonomously and finally converged to a steady gait pattern. Furthermore, raising the input voltage changed the gait from a pace to a half-bound according to the speed, and we also observed other gait patterns, such as a bound and a rotary gallop. We investigated the convergence of the gaits for several initial states and input voltages, and we describe detailed experimental results for each gait observed.
