


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its own height; traverse rocky, slippery, uneven, steep, and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

[ CMU ]

This robot is designed as a preliminary platform for humanoid robot research. The platform will be further extended with soles as well as upper limbs. In this video, the current lower-limb version of the platform demonstrates its ability to traverse uneven terrain without any active or passive ankle joint. This underactuated nature of the robot system is well addressed by our locomotion control framework, which also provides a new perspective on leg design for bipedal robots.

[ CLEAR Lab ]

Thanks, Zejun!

Inbiodroid is a startup "dedicated to the development of fully immersive telepresence technologies that create a deeper connection between people and their environment." Hot off the ANA Avatar XPRIZE competition, they're doing a Kickstarter to fund the next generation of telepresence robots.

[ Kickstarter ] via [ Inbiodroid ]

Thanks, Alejandro!

A robot that can feel what a therapist feels when treating a patient, that can adjust the intensity of rehabilitation exercises at any time according to the patient's abilities and needs, and that can thus go on for hours without getting tired: it seems like fiction, and yet researchers from the Vrije Universiteit Brussel and imec have now finished a prototype that unites all these skills in one robot.

[ VUB ]

Thanks, Bram!

Self-driving bikes present some special challenges, as this excellent video graphically demonstrates.

[ Paper ]

Pickle robots unload trucks. This is a short overview of the Pickle Robot Unload System in action at the end of October 2022—autonomously picking floor-loaded freight to unload a trailer. As a robotic system built on AI and advanced sensors, it gets better and faster all the time.

[ Pickle ]

Learning agile skills can be challenging with reward shaping. Imitation learning provides an alternative solution by assuming access to decent expert references. However, such experts are not always available. We propose Wasserstein Adversarial Skill Imitation (WASABI), which acquires agile behaviors from partial and potentially physically incompatible demonstrations. In our work, Solo, a quadruped robot, learns highly dynamic skills (e.g., backflips) from only hand-held human demonstrations.
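The core mechanic behind Wasserstein-style adversarial imitation can be sketched in a few lines: a critic is trained to score expert transitions higher than the agent's, and the critic's score is handed to the agent as a style reward. The sketch below is an illustrative assumption, not WASABI's actual implementation — the critic here is a fixed linear function, whereas the paper uses a trained neural network with a Lipschitz constraint.

```python
import numpy as np

# Hedged sketch of the Wasserstein adversarial imitation idea.
# A critic f scores (state, next_state) transitions; the agent is
# rewarded with the critic's score, and the critic is trained to
# widen the score gap between expert and agent transitions.

def style_reward(critic_w, transition):
    """Critic score of a transition, used as the agent's style reward."""
    return float(np.dot(critic_w, transition))

def critic_objective(critic_w, expert_batch, agent_batch):
    """Wasserstein-style objective: mean expert score minus mean agent score."""
    expert_score = np.mean([style_reward(critic_w, t) for t in expert_batch])
    agent_score = np.mean([style_reward(critic_w, t) for t in agent_batch])
    return expert_score - agent_score

# Toy 2-D transition features (purely illustrative).
w = np.array([1.0, -0.5])
expert = [np.array([1.0, 0.0]), np.array([0.8, 0.1])]
agent = [np.array([0.2, 0.4]), np.array([0.1, 0.5])]
gap = critic_objective(w, expert, agent)  # positive: critic separates the two
```

Maximizing this gap over the critic (subject to a Lipschitz constraint) approximates a Wasserstein distance between the expert and agent transition distributions, which is what makes learning from physically incompatible demonstrations tractable.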

WASABI!

[ WASABI ]

NASA and the European Space Agency are developing plans for one of the most ambitious campaigns ever attempted in space: bringing the first samples of Mars material safely back to Earth for detailed study. The diverse set of scientifically curated samples now being collected by NASA’s Mars Perseverance rover could help scientists answer the question of whether ancient life ever arose on the Red Planet.

I thought I was promised some helicopters?

[ NASA ]

A Sanctuary general-purpose robot picks up and sorts medicine pills.

Remotely controlled, if that wasn't clear.

[ Sanctuary ]

I don't know what's going on here, but it scares me.

[ KIMLAB ]

The Canadian Space Agency plans to send a rover to the Moon as early as 2026 to explore a polar region. The mission will demonstrate key technologies and accomplish meaningful science. Its objectives are to gather imagery, measurements, and data on the surface of the Moon, as well as to have the rover survive an entire night on the Moon. Lunar nights, which last about 14 Earth days, are extremely cold and dark, posing a significant technological challenge.

[ CSA ]

Covariant Robotic Induction automates previously manual induction processes. This video shows the Covariant Robotic Induction solution picking a wide range of item types from totes, scanning barcodes, and inducting items onto a unit sorter. Note the robot’s ability to effectively handle items that are traditionally difficult to pick, such as transparent polybagged apparel and oddly shaped, small health and beauty items, and place them precisely onto individual trays.

[ Covariant ]

The solution will integrate Boston Dynamics' Spot® robot, the ExynPak™ powered by ExynAI™ and the Trimble® X7 total station. It will enable fully autonomous missions inside complex and dynamic construction environments, which can result in consistent and precise reality capture for production and quality control workflows.

[ Exyn ]

Our most advanced programmable robot yet is back and better than ever. Sphero RVR+ includes an advanced gearbox to improve torque and payload capacity, enhanced sensors including an improved color sensor, and an improved rechargeable and swappable battery.

$279.

[ Sphero ]

I'm glad Starship is taking this seriously, although it's hard to know from this video how well the robots behave when conditions are less favorable.

[ Starship ]

Complexity, cost, and power requirements for the actuation of individual robots can play a large role in limiting the size of robotic swarms. Here we present PCBot, a minimalist robot that can precisely move on an orbital shake table using a bi-stable solenoid actuator built directly into its PCB. This allows the actuator to be built as part of the automated PCB manufacturing process, greatly reducing its impact on manual assembly.

[ Paper ]

Drone racing world champion Thomas Bitmatta designed an indoor drone racing track for ETH Zurich's autonomous high speed racing drones, and in something like half an hour, the autonomous drones were able to master the track at superhuman speeds (with the aid of a motion capture system).

[ ETH RSL ] via [ BMS Racing ]

Thanks, Paul!

Moravec's paradox is the observation that many things that are difficult for robots to do come easily to humans, and vice versa. Stanford University professor Chelsea Finn was tasked with explaining this concept to five different people: a child, a teen, a college student, a grad student, and an expert.

[ Wired ]

Roberto Calandra from Meta AI gives a talk about “Perceiving, Understanding, and Interacting through Touch.”

[ UPenn ]

AI advancements have been motivated and inspired by human intelligence for decades. How can we use AI to expand our knowledge and understanding of the world and ourselves? How can we leverage AI to enrich our lives? In his Tanner Lecture, Eric Horvitz, Chief Science Officer at Microsoft, will explore these questions and more, tracing the arc of intelligence from its origins and evolution in humans, to its manifestations and prospects in the tools we create and use.

[ UMich ]

Despite the fact that mixed-cultural backgrounds are of increasing importance in our daily life, the representation of multiple cultural backgrounds in one entity is still rare in socially interactive agents (SIAs). This paper’s contribution is twofold. First, it provides a survey of research on mixed-cultural SIAs. Second, it presents a study investigating how mixed-cultural speech (in this case, non-native accent) influences how a virtual robot is perceived in terms of personality, warmth, competence, and credibility. Participants whose first language was either English or German watched a video of a virtual robot speaking in either standard English or German-accented English. It was expected that the German-accented speech would be rated more positively by native German participants, as well as elicit the German stereotypes of credibility and conscientiousness for both German and English participants. Contrary to expectations, German participants rated the virtual robot lower in terms of competence and credibility when it spoke with a German accent, whereas English participants perceived the virtual robot with a German accent as more credible compared to the version without an accent. Both the native English and native German listeners classified the virtual robot with a German accent as significantly more neurotic than the virtual robot speaking standard English. This work shows that solely by implementing a non-native accent in a virtual robot, stereotypes are partly transferred. It also shows that the implementation of a non-native accent leads to differences in the perception of the virtual robot.

Industrial robots are versatile machines that can be used to implement numerous tasks. They have been successful in applications where–after integration and commissioning–a more or less static and repetitive behaviour in conjunction with closed work cells is sufficient. In aerospace manufacturing, robots still struggle to compete against either specialized machines or manual labour. This can be attributed to complex or custom parts and/or small batch sizes. Here, the applicability of robots can be improved by enabling collaborative use cases. When fixed protective fences are not desired due to handling problems with the large parts involved, sensor-based approaches like speed and separation monitoring (SSM) are required. This contribution addresses how to construct dynamic volumes of space around a robot as well as around a person such that their combination satisfies the required separation distance between robot and person. The goal was to minimize said distance by calculating volumes both adaptively and as precisely as possible given the available information. We used a voxel-based method to compute the robot safety space that includes worst-case braking behaviour. We focused on providing a worst-case representation considering all possible braking variations. Our approach to generating the person safety space is based on an outlook for 2D-camera, AI-based workspace surveillance.



This is a sponsored article brought to you by UL Solutions.

Invest in building your team’s excellence with functional safety training and certification services from UL Solutions, a global safety science leader.

Our UL Certified Functional Safety Certification programs provide your team opportunities to learn about — or deepen their existing knowledge and skills in — functional safety to achieve professional credentials in this space.

We offer personnel certification at both the professional and expert levels in automotive, autonomous vehicles, electronics and semiconductors, machinery, industrial automation, and cybersecurity.

You can now register for any of the offerings listed below. All our instructor-led, virtual courses provide a deep dive into key functional safety standards.

IEC 61511

UL Certified Functional Safety Professional in IEC 61511 Class with Exam - Virtual

This three-day course provides a comprehensive overview of the IEC 61511:2016 and ANSI/ISA 61511:2018 standards for the process industry. Participants who complete all three days of training can take a two-hour certification exam on the morning of the fourth day. Those who pass the exam earn individual certification as a UL Certified Functional Safety Professional in IEC 61511 or UL-CFSP.

Purchase training→

IEC 61508

Functional Safety Overview and Designing Safety-Related Electronic Control Systems in Accordance with IEC 61508 Standard Class with Exam - Virtual (English)

This three-day course helps engineers, developers and managers successfully apply IEC 61508 to their safety-related electrical systems. IEC 61508 serves as the base functional safety standard for various industries, including process, nuclear and machinery, among others. This course includes a one-hour follow-up Q&A session (scheduled at a later date) with one of UL Solutions’ functional safety experts.

Purchase training→

UL 4600

UL Certified Autonomy Safety Professional Training in UL 4600 2nd Edition Class with Exam - Virtual (English)

This 2.5-day course highlights modern-day autonomous robotics, industrial automation, sensors and semi-automated technologies and how they can apply to safety. The course focuses on UL 4600, the Standard for Evaluation of Autonomous Products, and includes information on related safety standards.

Purchase training→

Functional Safety Training for Earth-Moving Machinery in Agricultural Tractor and Construction Control Systems Per ISO 25119, ISO 13849 and ISO 19014

UL Certified Functional Safety Professional Training in Agriculture and Construction Machinery Class with Exam - Virtual (English)

This 2.5-day course will cover functional safety standards and concepts related to agricultural and construction earth-moving machinery. Applicable standards covered in this training include the EU Machinery Directive; ISO 19014:2018, Earth-Moving Machinery — Functional Safety — Part 1: Methodology to Determine Safety-Related Parts of the Control System and Performance Requirements; and ISO 25119:2018, Tractors and Machinery for Agriculture and Forestry — Safety-Related Parts of Control Systems. UL Solutions’ experts will cover topics such as hazard identification and risk assessment per ISO 12100:2010, Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction. Case studies on a range of topics, including motor drives and safety product life cycles, will also help provide examples of how the requirements and concepts of the standards apply.

Purchase training→

ISO 13849, IEC 62061, IEC 61800-5-2, 25119, and the EU Machinery Directive

UL Certified Functional Safety Professional Training in Machinery Class with Exam - Virtual (English)

This 2.5-day course is for engineers working on programmable machinery and control systems. The training course will cover functional safety standards and concepts related to the EU Machinery Directive, including ISO 13849, Safety of Machinery - Safety-Related Parts of Control Systems; IEC 61800-5-2, Adjustable Speed Electrical Power Drive Systems - Part 5-2: Safety Requirements - Functional; and IEC 62061, Safety of Machinery - Functional Safety of Safety-Related Electrical, Electronic and Programmable Electronic Control Systems.

Purchase training→




Despite recent advances in robotic technology, sewer pipe inspection is still limited to conventional approaches that use cable-tethered robots. Such commercially available tethered robots lack autonomy, and their operation must be manually controlled via their tethered cables. Consequently, they can only travel a certain distance into a pipe, cannot access small-diameter pipes, and their deployment incurs high costs for highly skilled operators. In this paper, we introduce a miniaturised mobile robot for pipe inspection. We present an autonomous control strategy for this robot that is effective, stable, and requires only low computational resources. The robots used here can access pipes as small as 75 mm in diameter. Due to their small size, low carrying capacity, and limited battery supply, our robots can only carry simple sensors, a small processor, and miniature wheel-legs for locomotion. Yet our control method is able to compensate for these limitations. We demonstrate fully autonomous robot mobility in a sewer pipe network, without any visual aid or power-hungry image processing. The control algorithm allows the robot to correctly recognise each local network configuration and to make appropriate decisions accordingly. The control strategy was tested using the physical micro robot in a laboratory pipe network. In both simulation and experiment, the robot autonomously and exhaustively explored an unknown pipe network without missing any pipe section while avoiding obstacles. This is a significant advance towards fully autonomous inspection robot systems for sewer pipe networks.
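Exhaustive exploration of an unknown network without missing any pipe section is, at its core, a depth-first traversal: advance into unvisited branches at each junction and backtrack when a dead end is reached. The sketch below shows that skeleton on a graph stand-in for a pipe network; the junction labels and traversal order are illustrative assumptions, since the paper's robot recognises junctions locally from simple sensors rather than from a known map.

```python
# Hedged sketch: exhaustive pipe-network exploration as depth-first
# traversal with backtracking. The network is modelled as an adjacency
# dict; labels are purely illustrative.

def explore(network: dict, start: str) -> list:
    """Visit every pipe section, backtracking through junctions at dead ends."""
    visited, route = set(), []

    def dfs(node: str) -> None:
        visited.add(node)
        route.append(node)
        for neighbour in network[node]:
            if neighbour not in visited:
                dfs(neighbour)
                route.append(node)  # backtrack through this junction

    dfs(start)
    return route

# A small T-junction network: manhole M leads to junction J,
# which branches into dead-end sections A and B.
pipes = {"M": ["J"], "J": ["M", "A", "B"], "A": ["J"], "B": ["J"]}
route = explore(pipes, "M")
```

The robot's achievement is doing this physically, with the "visited" bookkeeping and junction recognition handled onboard from minimal sensing rather than from a prebuilt graph.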

Robotic competitions are an excellent way to promote innovative solutions for current industry challenges and entrepreneurial spirit, to acquire technical and transversal skills through active teaching, and to promote the field to the public. In other words, since robotics is a multidisciplinary field, its competitions address several knowledge topics, especially in the STEM (science, technology, engineering, and mathematics) category, that are shared among students and researchers, driving technology and science further. A new competition encompassed in the Portuguese Robotics Open was created according to the Industry 4.0 concept in the production chain. In this competition, RobotAtFactory 4.0, a shop floor is used to mimic a fully automated industrial logistics warehouse and the challenges it brings. Autonomous mobile robots (AMRs) must operate without supervision and perform the tasks that the warehouse requests. There are different types of boxes, which dictate their partial and definitive destinations. AMRs must therefore identify each box and transport it to its destination. This paper describes an approach to the indoor localization system for the competition based on the Extended Kalman Filter (EKF) and ArUco markers. Different innovation methods for the obtained observations were tested and compared in the EKF. A real robot was designed and assembled to act as a test bed for the localization system’s validation. Thus, the approach was validated in a real scenario using a factory floor with the official specifications provided by the competition organization.
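The EKF-with-markers pipeline the abstract describes boils down to a correction step: predict the measurement of a fixed, known-position ArUco marker from the current pose estimate, then blend in the actual measurement weighted by the Kalman gain. A minimal sketch for a planar robot and a range-only observation follows; the marker position, noise levels, and range-only measurement model are illustrative assumptions, not the competition robot's actual parameters.

```python
import numpy as np

# Hedged sketch: one EKF correction step for a planar robot (state x, y, theta)
# observing the range to a fixed ArUco marker at a known position.

def ekf_range_update(x, P, z, marker, R):
    dx, dy = marker[0] - x[0], marker[1] - x[1]
    r = np.hypot(dx, dy)                      # predicted range from current estimate
    H = np.array([[-dx / r, -dy / r, 0.0]])   # Jacobian of range w.r.t. state
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ np.array([z - r])).ravel()   # corrected state
    P = (np.eye(3) - K @ H) @ P               # corrected covariance
    return x, P

x = np.array([0.0, 0.0, 0.0])   # initial pose estimate at the origin
P = np.eye(3) * 0.5             # initial uncertainty
# Marker at (1, 0); measured range 1.5 m, measurement noise variance 0.01.
x, P = ekf_range_update(x, P, z=1.5, marker=(1.0, 0.0), R=np.array([[0.01]]))
```

After the update the pose estimate shifts away from the marker (the measured range exceeded the predicted one) and the uncertainty along that axis shrinks; the "different innovation methods" the paper compares would change how the `z - r` term is formed from the marker detections.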

One of the major obstacles to the widespread uptake of data-based Structural Health Monitoring so far has been the lack of damage-state data for the (mostly high-value) structures of interest. To address this issue, a methodology for sharing data and models between structures has been developed: Population-Based Structural Health Monitoring (PBSHM). PBSHM works on the principle that if populations of structures are sufficiently similar, or share sections which can be considered similar, then data and models can be shared between them for use in diagnostic inference. The PBSHM methodology therefore relies on two key components: first, identifying whether structures are sufficiently similar for successful transfer of diagnostics, which is achieved by the use of an abstract representation of structures; second, machine learning techniques are exploited to effectively transfer information between the structures in a way that improves damage detection and classification across the whole population. Although PBSHM has been conceived to deal with large and general classes of structures, much of the detailed development presented so far has concerned bridges; the aim of this paper is to provide similarly detailed discussions in the aerospace context. The overview here will examine data transfer between aircraft components, as well as illustrating how one might construct an abstract representation of a full aircraft.

In 2020, cardiovascular diseases resulted in 25% of unnatural deaths in the United States. Treatment with long-term administration of medication can adversely affect other organs, and surgeries such as coronary artery grafts are risky. Meanwhile, sequential compression therapy (SCT) offers a low-risk alternative, but is currently expensive and unwieldy, and often requires the patient to be immobilized during administration. Here, we present a low-cost wearable device to administer SCT, constructed using a stacked lamination fabrication approach. Expanding on concepts from the field of soft robotics, textile sheets are thermally bonded to form pneumatic actuators, which are controlled by an inconspicuous and tetherless electronic onboard supply of pressurized air. Our open-source, low-profile, and lightweight (140 g) device costs $62, less than one-third the cost of the least expensive alternative and one-half the weight of the lightest alternative approved by the U.S. Food and Drug Administration (FDA), presenting the opportunity to more effectively provide SCT to socioeconomically disadvantaged individuals. Furthermore, our textile-stacking method, inspired by conventional fabrication methods from the apparel industry, along with the lightweight fabrics used, allows the device to be worn more comfortably than other SCT devices. By reducing physical and financial encumbrances, the device presented in this work may better enable patients to treat cardiovascular diseases and aid in recovery from cardiac surgeries.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

NO THANK YOU.

It's for inspecting sewers, which means that it has a way into your house.

[ Tmsuk ] via [ Robotstart ]

As impressive as this is (and I'm pretty impressed), this really reinforces how effortless this kind of manipulation is for most humans.

[ Sanctuary ]

Yifeng Zhu from UT Austin writes, "We have a paper accepted at the Conference on Robot Learning 2022, which describes a robot that learns a closed-loop visuomotor policy to make coffee using a K-Cup machine. We envision that our model can be applied to robot manipulators to perform complicated in-home tasks with a handful of demonstrations, and we are very excited to share our work with the community."

[ VIOLA ]

Thanks, Yifeng!

If you want to properly train a table tennis robot in the real world, you need something on the other side of the table to challenge it. This open source ball launcher, called AIMY, is arguably better than a human at this.

[ AIMY ]

Thanks, Alexander!

Come for the inflatable tentacles, stay for the manipulator made out of pool floaties.

[ Suzumori Endo Lab ]

The Petoi Bittle robot dog is dressed up for a Halloween scene. He approaches a gummy bear from behind. But it turns out to be...

[ Bittle ]

Thanks, Rongzhong!

Kodiak is the first self-driving trucking company to demonstrate how our autonomous technology, the kodiakDriver, can maintain complete control of the truck even after suffering a catastrophic tire blowout. The kodiakDriver can actually maintain such precise control that the vehicle doesn’t even leave the lane.

The stopping in the middle of the road there is just to illustrate the control that the vehicle has; in practice, it would pull over to the side.

[ Kodiak ]

Some beautiful flapping-wing motion on this ornithopter.

[ GRVC ]

We propose WaddleWalls, a room-scale interactive partitioning system using a swarm of robotic partitions that allows occupants to interactively reconfigure workspace partitions to satisfy their privacy and interaction needs.

[ Paper ]

Boston Dynamics celebrates 30 years of innovation, exploration, and collaboration! We're excited to see what the next thirty years holds for us and for the robotics industry.

[ Boston Dynamics ]

I think perhaps I am nowhere near wealthy enough to understand the point of this.

Apparently, you wear a sensorized version of the watch for a couple of weeks, which allows this robot arm to recreate your movements, and then the actual watch gets calibrated based on how you like to wave your arms around IRL. Which seems a little nuts to me, but so does spending $93,500 on a watch.

[ De Bethune ] via [ Gizmodo ]

Well, this is probably my least favorite use case for drones. Not sure how much of the video is real, but the project certainly is.

[ Elbit ]

Franka Emika released its next-generation robotic platform—Franka Research 3—the platform of choice for cutting-edge AI and robotics research.

[ Franka ]

What makes the human hand special, and why is it worth replicating in mechanical form?

[ Shadow ]

This week's CMU RI Seminar is from Chelsea Finn at Stanford, entitled "Robots Should Reduce, Reuse, and Recycle."

Despite numerous successes in deep robotic learning over the past decade, the generalization and versatility of robots across environments and tasks has remained a major challenge. In this talk, I will discuss how our embodied learning algorithms need to reduce, reuse, and recycle—reducing the need for special-purpose online data collection, reusing existing data, and recycling pre-trained models with various downstream tasks.

[ CMU RI ]

Watch the entire Bay Area Robotics Symposium live-stream here.

[ BARS 2022 ]


Special thanks today to Dr. Eric Ackerman, an IEEE member with an email address that often gets confused for mine and who very kindly sends stuff along to me.




There is an increasing demand for multi-agent systems in which each mobile agent, such as a robot in a warehouse or a flying drone, moves toward its destination while avoiding other agents. Although several control schemes for collision avoidance have been proposed, they cannot achieve quick and safe movement with minimal acceleration and deceleration. To address this, we developed a decentralized control scheme that involves modifying the social force model, a model of pedestrian dynamics, and successfully realized quick, smooth, and safe movement. However, each agent had to observe many nearby agents and predict their future motion; that is, unnecessary sensing and calculations were required for each agent. In this study, we addressed this issue by introducing active sensing. In this control scheme, an index referred to as the “collision risk level” is defined, and the observation range of each agent is actively controlled on this basis. Through simulations, we demonstrated that the proposed control scheme works reasonably while reducing unnecessary sensing and calculations.
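The control scheme itself isn't shown in the abstract, but the active-sensing idea can be sketched with a toy risk index. The function names and the closest-approach formula below are my own illustration, not the authors' exact definition: each agent scores its neighbors by predicted closest-approach distance and only keeps observing those above a threshold.

```python
import math

def collision_risk(p_self, v_self, p_other, v_other):
    """Toy 'collision risk level': higher when the predicted
    closest-approach distance between two agents is smaller.
    Illustrative only, not the paper's exact definition."""
    # Relative position and velocity in 2D
    rx, ry = p_other[0] - p_self[0], p_other[1] - p_self[1]
    vx, vy = v_other[0] - v_self[0], v_other[1] - v_self[1]
    rel_speed_sq = vx * vx + vy * vy
    if rel_speed_sq < 1e-9:
        return 0.0  # no relative motion, so no approaching collision
    # Time of closest approach, clamped so we only look into the future
    t_star = max(0.0, -(rx * vx + ry * vy) / rel_speed_sq)
    d_min = math.hypot(rx + vx * t_star, ry + vy * t_star)
    return 1.0 / (d_min + 1e-6)

def agents_to_observe(self_state, others, threshold=0.5):
    """Active sensing: return indices of neighbors risky enough to track,
    so low-risk agents are dropped from sensing and motion prediction."""
    p, v = self_state
    return [i for i, (po, vo) in enumerate(others)
            if collision_risk(p, v, po, vo) > threshold]
```

With this sketch, a neighbor heading straight at the agent scores far above the threshold while one moving away scores near zero, so only the former costs sensing and prediction effort.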



Luxonis, a sensor company best known for their OAK line of stereo depth cameras, has decided to take a crack at something a little more complicated and a lot more RGB. They’ve just launched a Kickstarter for Rae, a pint-sized open source mobile robot with an integrated depth camera that combines accessible apps with ROS 2 and is somehow also kind of affordable.

“Rae,” which is technically “rae” but I can’t bring myself to not capitalize it, stands for “robotics access for all.” The video doesn’t really communicate how small it is: it’s only 120 mm on a side and weighs just 400 g (even though it’s made of Actual Metal), designed to be easy to futz with on your desk.

A stereo camera provides depth data, with a 4K camera for streaming video, and there’s another stereo pair on the back. There’s also a small LCD display. Rails across the top make it easy to mount accessories. Connectivity is via USB-C, both for communications and charging, and it’s got WiFi and Bluetooth along with a mic and speaker. And the programmable RGB is unmissable. The wheels aren’t big (the robot itself isn’t big, to be fair), but Luxonis says that it can still handle carpet and even grass and gravel with its 4 mm of ground clearance. An integrated battery is good for one hour of operation, with one hour of charging, but you can add more batteries as a payload (maxing out at about 2 kg). No integrated docking capability, though.

On the software side, Rae comes with a whole bunch of computer vision, AI, and machine learning apps. You get stuff like mapping, facial recognition, and 3D object localization and tracking, and it’ll play hide-and-seek with you right out of the box, among other relatively simple applications. It’s cloud-based, so you can access your robot remotely, and you can build your own applications for Rae and share them with other users. To what extent folks will actually end up doing this is uncertain—these sorts of robot app store-things haven’t been all that successful in the past, so as you’re thinking about what you might do with Rae, I’d focus on what it comes with rather than what others could potentially make for it.

Rae in the wild. Luxonis

Thanks to Luxonis superfans, the Kickstarter campaign was fully funded in about eight minutes, and the super-early bird version of the robot is sold out. This is a shame, because it was a shockingly cheap $299. As of now-ish, Rae has more than 400 backers and is nearing $200,000 on Kickstarter, close to 10x-ing its goal. Rae is currently available for a pledge of $399 (plus shipping), which is still quite good, especially if you look at what else is out there.

The obvious comparison here is to the TurtleBot, which is arguably more capable and more expandable and could be a better learning platform for ROS but is also way more expensive—the most basic version of the new TurtleBot 4 is well over $1000 USD. Even the TurtleBot 3 is $600, and only comes with a planar lidar. The next step down is something like a Create 3, which is $300 but has no depth sensor on it at all, unless you count the one that keeps it from falling down stairs. And there’s something to be said for Rae’s small size, too—it just looks easy and convenient, doesn’t it?

“We want Rae to be really, really easy to use for somebody that is not necessarily an advanced roboticist,” Luxonis’ COO Bradley Dillon told us. “Our thinking was, what if we had a really compact all-in-one platform that people could create with?” Unlike TurtleBot, Rae is tightly integrated, which I think makes it less intimidating, especially for folks without a lot of robotics experience. And while Rae is absolutely ROS 2 compatible, all of the included apps, and the fact that you can interface with the robot from your phone and get it to immediately do stuff, makes it a lot more accessible right out of the box. If you want to get all up in its software guts, you’re welcome to, but unlike with TurtleBot, doing so is not at all required.

“We think that students from elementary to high school could get into robotics with Rae.” —Bradley Dillon

The tricky part about making any robot part of a curriculum is that generally you need a curriculum, which is a really, really big challenge. Tutorials, how-to guides, sample programs and code, ideally a progressive interface like Scratch or Blockly, an active community for answering questions—this is all an enormous investment of time and resources, but without it, asking educators to come up with all of it on their own is generally a non-starter.

Dillon acknowledges this problem. “We can't solve all aspects of accessibility at once, but we do feel like we're lowering the barrier to entry quite a bit,” he says. “And we think that as the community size grows, it will help drive that barrier down even further.” This is really something that Rae does have going for it—the low cost should get it into the hands of many more people with a much wider spectrum of robotics experience, and if that prompts Luxonis (or someone else) to put some serious work into helping educators (and also the rest of us novices) use Rae as an entry point, that would be very cool.

An exploded view of Rae. Luxonis

If all goes well, Rae should start shipping to backers sometime in May, and we’ll see if we can get our hands on one before then for a review. If you’re sold already, the Kickstarter is right here.




This sponsored article is brought to you by Technology Innovation Institute.

Autonomous systems sit at the intersection of AI, IoT, cloud architectures, and agile software development practices. Several classes of these systems are becoming prominent, such as unmanned drones, self-driving cars, automated warehouses, and smart-city management systems. The drone industry alone was estimated at US $100 billion in 2020, and autonomous systems are already driving significantly more value across other domains. 1

But surprisingly little attention has been paid to securing autonomous systems as systems composed of multiple automated components. Various patchwork efforts have focused on individual components. In tandem, cloud services are starting to adopt a Zero Trust approach for securing the chain of trust that might traverse multiple systems.

With that, it has become imperative to extend a Zero Trust architecture to systems of autonomous systems to protect not only drones but also industrial equipment, supply chain automation, and smart cities.

In the near future, autonomous systems will bring a new level of digital transformation to the industry worth trillions of dollars, including automating transportation, traffic management, municipal services, law enforcement, shipping, port management, construction, agriculture, and more.

Autonomous enterprise systems are further enriching these more physical aspects of autonomous systems. Gartner coined the term hyperautomation to describe tools for scaling automation using software robots, a market valued at $534 billion in 2021. 2 Despite the importance of autonomous systems, surprisingly little research has focused on securing autonomous systems as a collection of systems.

This is not to say that researchers are ignoring security — after all, security infrastructure and tools are a multi-billion dollar industry. But when it comes to securing physical components, much of the focus has been on securing individual elements such as data, software, hardware, and communications links rather than the behavior of an ensemble of autonomous systems.

Despite the importance of autonomous systems, surprisingly little research has focused on securing autonomous systems as a collection of systems.

Similarly, researchers are just starting to scratch the surface of protecting against swarms of autonomous things guided with malicious intent. Just last year, a half dozen precisely targeted malicious drones managed to slow oil production in Saudi Arabia for days, and more recently, several low-cost drones caused significant damage to oil tankers in the UAE. This illustrates the importance of detecting hostile drones entering secure spaces.

This kind of security is just the beginning of what will be required to move toward the larger-scale deployment of drones envisioned by the U.S. Federal Aviation Administration’s beyond visual line of sight (BVLOS) regulations. 3 These regulations promise to open immense commercial opportunities to improve industrial inspection, shipping, and remote monitoring.

However, wider scale deployment will require a more systemic approach to protect against the impact of thousands of low-cost autonomous drones working in concert.

A more comprehensive approach is required to protect the security and resilience of autonomous systems and protect against cyber-physical attacks that leverage autonomous systems. TII

Autonomous Security for Smart Cities

Autonomous security is also a pressing issue for industrial systems and smart city use cases. Hackers are becoming better at coordinating millions of IoT devices to launch devastating distributed denial-of-service attacks on computer servers.

Similar tactics that leveraged physical and mobile autonomous things could extend the blast radius beyond IT infrastructure to destroy physical infrastructure like factories, pipelines, electric grids, or worse. A more comprehensive approach is required to protect the security and resilience of autonomous systems and protect against cyber-physical attacks that leverage autonomous systems.

The Technology Innovation Institute (TII)’s Secure Systems Research Centre (SSRC) is leading one promising approach to building an autonomous security testbed that explores the interplay between how hardware, software, and communications systems can be exploited so that they can be hardened.

The early phases of this work are focused on protecting scalable swarms of unmanned aerial vehicles controlled by the cloud. The long-term goal is to create a framework for understanding and defending against autonomous security risks across all types of infrastructure, including fleets of cars, automated warehouses, construction sites, farms, and smart cities.

Over the years, basic autonomous capabilities have grown into almost every aspect of our physical infrastructure, from automated braking in individual cars to orchestrating power flow across nationwide electrical grids with precision. Autonomous systems are already demonstrating tremendous value today, and we are just scratching the surface. For example, Goldman Sachs estimated that unmanned aerial vehicles (UAVs) had grown into a $100 billion industry in 2021. 4

Military applications accounted for about 70 percent of this spending. However, commercial applications were also substantial in construction, agriculture, insurance claims, offshore oil, gas and refining, pipelines, utilities, and mining. For example, the construction industry uses drones to automatically capture footage of construction sites before, during, and after the construction process.

Drones carrying lidar and high-resolution cameras can automatically generate 3D models in minutes that would have previously taken humans days or weeks. This makes it practical to keep tabs on buildings as they are being built, track progress, and identify mistakes when they are cheaper to fix. After construction, drones can also survey physical infrastructure like bridges to identify cracks and other problems before the whole structure suffers a bigger problem.

Drones are also improving the planning and management of large farms. For example, drones with spectral imaging cameras can quickly identify nutrient deficiencies, pest outbreaks, and drought, allowing farmers to address them more precisely and cheaply. In the UAE, drones have also helped map the entire country’s agricultural resources in a matter of days, which would not have been practical using physical surveys alone. 5 In another effort, UAE teams used drones to plant 6.25 million trees in only two days. 6

Scaling Secure Autonomous Systems

It is easy to get caught up in autonomous systems as a single self-driving car or individual drone. However, the real promise of autonomous systems comes when autonomous capabilities are simultaneously scaled to improve the control of individual things, the orchestration of a collection of things, and the understanding of things at scale.

The individual level can be seen in the evolution from cruise control to automated braking to fully self-driving cars. The orchestration level entails the evolution from synchronized traffic lights to dynamically adjusted traffic lights to advanced mapping services that route cars around traffic jams. Autonomous understanding systems range from traffic-monitoring cameras to crowdsourced dashcam video fused into dynamically updated digital twins for improving overall traffic. 7

These same three factors of control, orchestration, and understanding play out across various use cases. A warehouse robot might reduce the need for staff. An autonomous warehouse management system could optimize the scheduling and staging of items in the warehouse. And an autonomous understanding system could help reengineer the warehouse design to further increase performance in the same space.

This combination of autonomous control, autonomous orchestration, and autonomous understanding is already showing some promise in the UAE. For example, one pilot project has created an autonomous port truck system that automates the process of shifting shipping containers from boats to trucks. 8

Gartner refers to the simultaneous evolution of control, orchestration, and understanding in IT systems as hyperautomation. In this case, enterprises use individual robotic process automation (RPA) software robots (called bots) to automate a collection of human tasks. Orchestration engines help organize the flow of work across multiple bots.

Then process and task mining bots analyze enterprise applications or even watch over the shoulders of individuals to find further opportunities for improvement. Researchers are just starting to explore how similar practices may be extended to include autonomous vehicles.

The Next Challenge in Systems Security

That is one of the reasons ATRC’s ASPIRE chose to focus on autonomous swarm coordination as part of its next grand challenge project.9 ASPIRE is tasked with hosting grand challenge competitions loosely organized like the US DARPA’s challenge that spearheaded research on autonomous vehicles.

The upcoming challenge tasks researchers with finding the best way to orchestrate a swarm of drones to search for and retrieve objects hidden on ships that are too heavy for any individual drone.

The Need for End-to-End Security and Resilience

Enterprises and security researchers are just starting to grapple with protecting individual autonomous things, much less swarms. A new security approach is required for these types of swarms to scale to real-world applications.

The early generation of IoT devices was rushed to market with only basic consideration of how they might be protected against hackers or securely updated against new threats. Many of these early devices cannot be updated after the fact. Consequently, they are a popular target for hackers eager to build large-scale botnets, such as Mirai, for launching distributed denial-of-service attacks. 10

This has given rise to a secondary industry of IoT security gateways designed to detect and block malicious activity outside of poorly secured appliances like lighting controllers, crockpots, TV set-top boxes, and cameras. The security posture of the first connected cars is better, but there are still glaring vulnerabilities and gaps that need to be addressed. Some of the vulnerabilities highlighted in Upstream’s 2021 Automotive Cyber Security Report 11 include:

  • Hackers found 19 vulnerabilities in a Mercedes-Benz E-class car that allowed them to remotely control the vehicle, open doors, and start the engine.
  • Hackers took control of a car’s OEM corporate network by reverse engineering a car’s transmission control unit to infiltrate the network.
  • Over 300 vulnerabilities were discovered in 40 popular electronic control units used in cars.
  • Hackers managed to gain control over Tesla’s entire connected car fleet by exploiting a vulnerability in the communications protocol.

Modern cars allow the software to be updated after the fact but generally require consumers to come to a shop for an update. Only a few leaders, like Tesla, have mastered the ability to securely update software at scale.

Building secure systems means addressing hardware, software, and protocols, as well as their interplay. Hardware security must protect against attacks in which a hacker physically modifies a system to compromise security or cause damage.

For example, the Stuxnet 12 attack corrupted hardware in an Iranian uranium enrichment facility to send miscalibrated timing data that confused the control systems. The result was that the controller drove hundreds of expensive centrifuge systems so fast that they exploded.

There are a variety of ways hackers could launch remote hardware-directed attacks on UAVs. For example, focused beams of sound could confuse the inertial guidance unit used to control a drone. Directed EMF beams might cause a short circuit on sensitive electronics, and lasers or bright lights might confuse or destroy camera sensors.

Vulnerabilities in software systems allow hackers to spy on or take remote control of systems to launch further attacks. Early examples in IT systems included malware like the Zeus Trojan that allowed hackers to spy on banking interactions to capture credentials and steal $500 million. 13

In some cases, hackers are finding ways to infiltrate software supply chains to plant targeted malware. This was how hackers managed to burrow into thousands of government, banking, and enterprise systems as part of last year’s SolarWinds breach.

The fundamental concept at the heart of the Zero Trust security model is to never trust and always verify the provenance of each request. TII

The Zero Trust Security Paradigm

The term Zero Trust model was coined by Forrester Research in 2010 to denote a new paradigm for securing distributed systems. 14

Security systems have traditionally relied on hardening a physical perimeter. But in the world of cloud computing, the perimeter is more nebulous. Zero trust security connotes the idea of always authenticating and verifying every access in order to secure around a more flexible perimeter. The Zero Trust paradigm allows security teams to plan for the possibility that vulnerabilities may exist throughout a chain of interactions among multiple systems, such as across several cloud services, data processes, storage services, and networks.

The fundamental concept is to never trust and always verify the provenance of each request. Another basic principle is to assume that a breach has already occurred, making it essential to limit the blast radius of any breach.

Autonomous systems extend automated processes across a wider variety and physical range of hardware, communications protocols, and control and orchestration mechanisms. Each of these brings its own attack surface. Thus, security teams need to minimize the impact that a breach at one level could have on other systems.

Zero trust security connotes the idea of always authenticating and verifying every access in order to secure around a more flexible perimeter.

Examples include attacks on control servers, communication networks, embedded system applications, physical devices, software supply chains, and silicon supply chains.

Autonomous system security needs to be built across multiple independent security walls so that if one key or system is breached, the integrity of the whole is protected. Each system should be designed to fail safely and securely so as to minimize the impact on adjacent components.

This can also make it harder for hackers to escalate an attack on a low-level system to more critical systems, as with the recent Log4j attacks. For example, autonomous systems like drones need attestation schemes to ensure that only authorized software runs on them. An attestation scheme uses cryptographically signed software updates to ensure that only valid code can run on remote systems.

This prevents hackers from reprogramming a drone by simulating a legitimate program update communication or replacing legitimate updates with a bogus software upgrade staged at the command center.
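At its core, an attestation check like this boils down to "verify the tag before running the image." Here is a minimal sketch; the key material and firmware string are invented, and a real scheme would use asymmetric signatures (e.g., Ed25519), so the drone holds only a public verification key and cannot forge updates itself. This standard-library sketch uses an HMAC shared secret instead.

```python
import hashlib
import hmac

# Hypothetical key material, provisioned at manufacture. In a real
# attestation scheme this would be an asymmetric key pair, with only
# the public verification key stored on the drone.
VERIFY_KEY = b"provisioned-at-manufacture"

def sign_update(firmware: bytes, key: bytes = VERIFY_KEY) -> bytes:
    """Producer side: compute an authentication tag over the image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_and_install(firmware: bytes, tag: bytes,
                       key: bytes = VERIFY_KEY) -> bool:
    """Drone side: refuse any image whose tag does not verify."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, tag)
```

Even a one-byte change to the image invalidates the tag, which is what stops a bogus upgrade staged at the command center from ever executing.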

Vulnerabilities in communication protocols could allow hackers to spy on drone activity or simulate control signals to take control of a drone. Such attacks could happen at any level of the communication stack, from hacking into communications within the cloud, the wireless signals between the cloud and a drone, or between multiple drones.

In some cases, hackers may be able to attack systems by mimicking communications within a drone or autonomous car. For example, researchers have found ways to listen to and simulate the unprotected wireless communications involved in tire pressure monitoring. 15 This allowed them to trick the car into indicating that a good tire had a flat, which might cause a vehicle to stop.

Zero Trust Applied to Chip Design

Trust is essential in computing systems, and arguably even more so for the chips at the heart of those systems. Unfortunately, trust is also increasingly rare, because many chip design companies are outsourcing critical steps (fabrication, testing, assembly) to third-party companies.

Such a distributed chip supply chain is financially appealing, but it is also inherently difficult to trust. This distributed paradigm is susceptible to threats such as reverse engineering of the chip design, piracy, overproduction, and tampering.

A rogue element in the fab with full access to the chip design blueprint can reverse engineer the functionality of the chip or its critical components; copy and pirate hardware design intellectual property; run extra fabrication shifts to produce more chips than the design house requested and sell them on the gray market; or insert difficult-to-catch Trojans that serve a malicious purpose (e.g., leaking sensitive information) into the chips during fabrication.

Software supply chain attacks have been making the news lately. It’s also essential to protect against hardware supply chain attacks in which malicious actors insert backdoors or hardware Trojans into chips.

Emerging chip-to-chip authentication techniques could help mitigate such issues. The core idea is to extend zero-trust concepts applied to network security to chip-to-chip communication to mitigate the impact of attacks on the physical supply chain or malicious firmware updates.

This kind of approach could involve combining public-key infrastructure, trusted computing, and secure memory management to strike the right balance between security and performance.

For example, in a drone system, designers might have a flight controller that connects to many peripheral chips for motor control and sensors of various kinds. Today, those chips are not authenticated in real time when the system boots; a chip is assumed to be legitimate because it boots up in a particular manner. We need a Zero Trust approach in which the boot processor cryptographically verifies that these peripheral chips are from legitimate manufacturers and are running legitimate software before allowing them to connect with the main CPU on the flight controller.

This chain is extended all the way from this level to applications running on the flight controller. Researchers have been developing a technique called logic locking 16 that gives control back to the design house in the chip supply chain where they normally have almost no control.

By using a logic locking tool or technique, a chip designer can insert additional logic into the design to introduce a locking mechanism that expects a secret unlock key, which is a binary vector (combination of 0s and 1s). The secret key is known to only the design house and is loaded by a trusted party (e.g., design house themselves) on the chip after fabrication.

This is a one-time load operation in which the key is written into the chip. Only then does a fabricated chip become “unlocked,” and thus functional. Logic locking serves multiple purposes:

• First, the design house can ensure that all of its fabricated chips are deployed in the market under its control; any chip overproduced by the fab will remain locked and nonfunctional, as it will be missing the unlock key.

• Second, the blueprint that is available to the fab fails to reveal all the information about the functionality of the chip and its blocks as the secret key is unknown to the untrusted entities. Any attempt to reverse engineer the chip/block functionality is thus hindered.
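A toy illustration of the locking idea: XOR key gates are inserted on internal wires of a 4-bit adder, so the block computes the intended function only when the correct key is loaded. The circuit, key width, and key value are invented for illustration; production logic locking operates on gate-level netlists with much larger keys.

```python
# Secret 2-bit unlock key, known only to the design house (illustrative).
SECRET_KEY = (1, 0)

def locked_adder(a: int, b: int, key: tuple) -> int:
    """A 4-bit adder 'locked' with XOR key gates on two output wires."""
    k0, k1 = key
    s = (a + b) & 0xF
    # XOR key gates: with the correct key bits the inversions cancel
    # and the chip behaves as designed; any other key corrupts outputs.
    s ^= (k0 ^ 1) << 1   # wire 1 is inverted unless k0 == 1
    s ^= (k1 ^ 0) << 3   # wire 3 is inverted unless k1 == 0
    return s
```

Without the key, the netlist a fab or reverse engineer sees does not reveal the true function, and an overproduced chip computes garbage.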

Without the functionality of the chip fully understood, the insertion of meaningful Trojans in the foundry is also thwarted.

Modern system-on-chip designs, which assemble third-party IP blocks, can accelerate the development of performant, low-cost products. However, they also carry risks from the use of untrusted IP.

Existing testing techniques like fuzzing and penetration tests depend on the judgment of experts. They also tend to be performed late in the design cycle, when significant changes are costly and challenging to make. Approaches like concolic testing (“concrete” plus “symbolic”), primarily used in software security testing today, could be extended to chip circuit design to detect problems much earlier in the design cycle.

It’s also vital to extend confidential computing security 17 to hypervisors running on RISC-V processors that are increasingly being adopted in autonomous systems. The core idea is to isolate virtual machines from the virtual machine manager and other non-trusted software components available on the platform. This will require a combination of VM-to-VM authentication and encrypted communication.

One challenge is that RISC-V processors do not currently provide hardware support for encrypted communication channels between VMs. Implementing this capability in software adds additional overhead and latency.

One strategy is to create Zero Trust hardware building blocks, such as an IOMMU, an IOPMP, and ISA extensions, to alleviate this overhead. Trusted execution environments (TEEs) were developed to provide a higher level of security for applications by placing an encryption perimeter around program execution on the hardware, but they were built primarily for applications confined within a CPU.

Autonomous Systems Infrastructure

Autonomous systems infrastructure needs to combine a variety of embedded computing platforms, such as drone navigation systems, CPU-based architectures, and other types of dedicated hardware. Existing TEE approaches are also fixed at design time, which forces designs to rely on untrusted software to employ peripherals within TEEs.

New approaches for composite enclaves will be required to extend TEEs to more flexible designs. It’s also essential to develop new tools for detecting and responding to unknown and unexpected changes caused by novel attack techniques. A trust verification infrastructure could extend traditional API observability approaches to hardware through a combination of monitoring, logging, and tracing. These kinds of actions allow the construction of continuous verification mechanisms for anomaly and intrusion detection.

Offline profiling techniques could generate trust profiles that describe how the hardware is supposed to operate. During operation, ongoing logging could ensure that the behavior adheres to the trust profile.
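The two phases described above can be sketched as follows. The component names, event vocabulary, and profile format are invented; a real system would profile bus transactions or syscalls and feed violations into an intrusion-detection pipeline.

```python
# Offline phase: build a trust profile recording which events each
# component is allowed to emit, from logs of known-good runs.
def build_profile(training_logs):
    profile = {}
    for component, event in training_logs:
        profile.setdefault(component, set()).add(event)
    return profile

# Online phase: flag any runtime event outside the component's profile.
def audit(profile, runtime_log):
    anomalies = []
    for component, event in runtime_log:
        if event not in profile.get(component, set()):
            anomalies.append((component, event))
    return anomalies
```

Anything a component was never observed doing during profiling, such as a flight controller suddenly opening a network socket, is surfaced as an anomaly for investigation.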

However, this needs to be constructed to minimize the risk of logging tools being leveraged as part of a side-channel attack. We also need to explore new hardware capabilities and advanced software techniques to compartmentalize software stacks across multiple levels.

One of the most promising approaches is CHERI (Capability Hardware Enhanced RISC Instructions), which is being explored by DARPA, Google, SRI International, and the University of Cambridge. 18 Further work is required to extend this approach to improve fine-grained compartmentalization at the operating system level. This could combine new middleware, OS libraries, unikernels, and various mechanisms to grant and revoke authorization in order to enforce compartmental constraints.
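CHERI enforces capabilities in hardware on every load and store; the Python sketch below only illustrates the core ideas of bounds, permissions, and monotonic narrowing, with invented class and method names.

```python
class Capability:
    """A software model of a CHERI-style capability: a pointer that
    carries bounds and permissions, checked on every dereference."""

    def __init__(self, memory, base, length, perms=("r",)):
        self.memory = memory
        self.base = base
        self.length = length
        self.perms = perms

    def load(self, offset):
        # Every access is checked against the capability's bounds/perms.
        if "r" not in self.perms or not (0 <= offset < self.length):
            raise PermissionError("capability violation")
        return self.memory[self.base + offset]

    def restrict(self, base_off, length):
        # Monotonicity: a capability can only be narrowed, never widened.
        if base_off < 0 or base_off + length > self.length:
            raise PermissionError("cannot widen a capability")
        return Capability(self.memory, self.base + base_off, length, self.perms)
```

Compartmentalization then amounts to handing each software component only the narrow capabilities it needs, so an exploited component cannot reach beyond them.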

This will require striking the right balance between different degrees of flexibility in both configuration and determining the appropriate privileges. We are still in the early days of building large-scale autonomous systems, but as we scale them up, new considerations like these will be required to extend zero-trust security to embedded systems, autonomous systems, and systems of autonomous systems.

Enterprises and researchers are exploring ways to scale systems of individual autonomous systems, with the most promising research currently focused on scaling systems of autonomous drones. In the long run, everyone wants to get to autonomous cars and factories, and there is a lot of experimentation going on with unmanned vehicles, though these still tend to require a human driver or, in the case of delivery vehicles, an assistant.

But UAVs are already delivering value today, and regulators are starting to open the skies for more ambitious applications. In early 2021, the FAA granted American Robotics the first license to fly drones beyond the visual line of sight (BVLOS). Around the world, enterprises are working with regulators to develop Unmanned Traffic Management (UTM) systems. Major aerospace companies and innovative start-ups are working with regulators to show how various combinations of AI, advanced mapping, vehicle-to-vehicle communications, and encrypted communication and control could facilitate safe drone management at scale.

In the United States, Boeing has partnered with SparkCognition on SkyGrid. Airbus is leading efforts to promote SESAR for the EU. Guardian Angel recently worked with U.K. regulators on Operation Zenith to demonstrate how a series of on-airfield tasks could be performed without endangering or disrupting airport operations. These are essential efforts and are a necessary first step in safely scaling fleets of trusted drones.

However, UTM systems generally start with the assumption that all drones are trusted. More work needs to be done to understand how these systems can be compromised, and hence how they can be trusted in the first place.

Researchers around the world are exploring how individual components of these systems can be compromised and hardened. For example, researchers in Germany and Switzerland have experimented with implementing quantum-safe cryptographic algorithms to protect drone communications. They argue that long-running drones will also need to support crypto-agility that allows dynamic updating of security algorithms in response to the discovery of new vulnerabilities. 19
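Crypto-agility of the kind these researchers describe can be sketched as an algorithm registry that the fleet consults by identifier, so a broken algorithm can be retired by updating the registry rather than reflashing every radio stack. Hash functions from Python's `hashlib` stand in here for real cipher suites, and the registry format is invented.

```python
import hashlib

# Registry of permitted algorithms, keyed by a negotiated identifier.
ALGORITHMS = {
    "sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3_256": lambda data: hashlib.sha3_256(data).hexdigest(),
}
DEPRECATED = set()

def digest(algorithm: str, data: bytes) -> str:
    """Compute a digest only if the algorithm is known and still trusted."""
    if algorithm in DEPRECATED or algorithm not in ALGORITHMS:
        raise ValueError(f"algorithm {algorithm!r} not permitted")
    return ALGORITHMS[algorithm](data)

def retire(algorithm: str) -> None:
    """Respond to a newly discovered weakness by disabling an algorithm."""
    DEPRECATED.add(algorithm)
```

Long-running drones would receive registry updates over an authenticated channel, letting the fleet drop a vulnerable algorithm without hardware changes.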

These researchers also explored how to implement remote attestation schemes that protect drones from software tampering. Other researchers have explored drone cloud control mechanisms. For example, a team of researchers in Brazil has developed the Cloud-SPHERE platform as one approach for integrating UAVs into IoT and Cloud Computing paradigms.


Bringing Security to the Swarm

The next phase of autonomous drones will require developing architecture to scale drone control and security to support autonomous swarms. For example, a collection of low-cost drones can be orchestrated into drone swarms controlled by the cloud to explore new use cases like search and rescue, disinfecting public spaces, and coordinating tasks such as lifting heavy equipment beyond the capacity of any one drone.

One big shift will be the need for more distributed control mechanisms. Architectures that attempt to control each drone or autonomous system directly will run into scalability challenges as the number of individuals in the swarm grows.

One approach pursued by the TII’s Secure Systems Research Centre (SSRC) is the development of a dynamic hierarchy composed of drones with different capabilities for control and task execution. Similar organizations of drones have been described before; our focus is on security and resilience in such a hierarchy.

In this scheme, a tier of Fog Drones acts as an intermediary between less sophisticated Edge Drones and the cloud. Fog Drones can also take on tasks such as summarizing input from many Edge Drones, reducing the amount of communication required with the cloud and between drones.
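As a minimal sketch of that split, Edge Drones report raw readings to a Fog Drone, which forwards only a compact summary to the cloud. The message fields and thresholds are invented for illustration, not a real SSRC protocol.

```python
def edge_report(drone_id, temperature, battery):
    """Raw reading an Edge Drone sends to its Fog Drone."""
    return {"id": drone_id, "temperature": temperature, "battery": battery}

def fog_summarize(reports):
    """Fog Drone aggregates many Edge reports into one cloud-bound message."""
    n = len(reports)
    return {
        "drones": n,
        "avg_temperature": sum(r["temperature"] for r in reports) / n,
        # Forward only the exceptions that need cloud attention.
        "low_battery": [r["id"] for r in reports if r["battery"] < 0.2],
    }
```

The cloud sees one message per Fog Drone per interval instead of one per Edge Drone, which is what makes the hierarchy scale.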

Cross-disciplinary Research

The Secure Systems Research Centre (SSRC), part of the Technology Innovation Institute (TII), is working with a cross-disciplinary team of researchers at leading research institutions worldwide to develop a comprehensive Zero Trust autonomous security testbed to explore security implications spanning hardware, software, and communications at the systems level.

SSRC partners include Georgia Institute of Technology, Purdue University, University of Applied Sciences and Arts of Southern Switzerland, Tampere University, University of Turku, Khalifa University, Imperial College, University of Manchester, TU Graz, University of New South Wales, University of Modena and Reggio Emilia, University of Bologna, Sapienza University of Rome, University of Milan, University of Minho, University of Waterloo, McMaster University, NYU Abu Dhabi, and UT Dallas.

Offloading such tasks to Fog Drones can also reduce the amount of processing required on each Edge Drone. This work is also exploring how mesh networks can further optimize and secure communications between drones operating in constrained situations such as a cave, a fallen building, or a hostile environment.

Cross-disciplinary research teams at SSRC are exploring ways to synthesize lessons learned from physical testbeds into useful and actionable security models. Ultimately, these security models could help development teams identify and address security issues earlier in the release cycle, spanning drone hardware, software implementations, and communications choices.

Several testbeds have also been developed at Masdar in the UAE and Purdue. One goal is to develop machine learning methods at both the drone and cloud levels to detect security issues and enable resilience.

Another goal is to develop tools for testing these systems in augmented reality environments for urban settings. The teams are also exploring ways to improve the ability to capture security-related data into digital twins that reflect the security implications of drones. This will help automate the ability to reflect new security vulnerabilities discovered in the real world in the models shared with researchers.

These researchers are also finding ways to harden open-source hardware, software, and communication protocols for developing and deploying drone systems. This approach opens the architecture to a wide range of security and drone researchers to find vulnerabilities sooner.

This open-source approach could also benefit from the rapid innovation that the open-source robotics community is already seeing. Some of the underpinnings of the current platform include the PX4 advanced autopilot, the NuttX real-time operating system, and the Robot Operating System 2 (ROS2).

The team has also developed and implemented an open-source RISC-V processor and system on chip with specialized security features baked in.

Exploring Security Scenarios

The various teams at TII’s Secure Systems Research Centre are currently exploring the security implications of different scenarios, and these explorations are informing best practices for hardening against these kinds of issues.

Here are examples of some of these scenarios:

Hijacking a high-value cargo

A drone is attempting to transport an organ between hospitals. The attacker’s objective is to hijack the drone and force it to land in another location to sell the organ in the underground economy. Possible attack strategies include spoofing the sensors, jamming GPS or optical sensors, injecting fake visual location data, or a complete takeover using the control protocol. The data from successful attacks will inform more robust designs or help train machine-learning algorithms.

Network resiliency

A wide-area swarm is deployed for long-term surveillance, such as protecting a nature reserve or border. The swarm uses a mesh network protocol to communicate, and attackers attempt to jam the network to temporarily halt communication between the swarm and the control center. In response, a distributed optimization scheme is designed to allow the swarm to reconfigure itself and re-establish contact. This scenario could also help improve strategies for slowing the propagation of malicious code or data among vehicles in the swarm. For example, regular communication between the control center and drones could help identify individual drones that may have been compromised, and communications could be routed around these.
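The idea of routing communications around suspect drones can be sketched with a small graph search. The mesh topology and drone names are invented, and a real swarm would run a distributed routing protocol rather than this centralized breadth-first search.

```python
from collections import deque

def route(mesh, src, dst, compromised):
    """Find a path from src to dst over the mesh, skipping any drone
    flagged as compromised. Returns the path, or None if the swarm is
    partitioned once the suspect nodes are excluded."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in mesh.get(path[-1], []):
            if nxt not in seen and nxt not in compromised:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

When monitoring flags a drone, re-running the search with it excluded shows whether traffic can still reach the control center, and through whom.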

Perimeter defense against stealthy UAV

A ground-based monitoring system uses radar, lidar, and cameras to protect a building from a hostile vehicle while disregarding other delivery vehicles in the area. The attacker’s objective is to disguise an attack drone as a delivery drone to breach the defended area. One attack strategy would be to use generative adversarial networks to mimic the behavior of legitimate drones. The team will work on secure learning algorithms that robustly identify these fake drones.

Corrupt firmware update

New capabilities are delivered to the swarm via firmware updates conveyed to each drone by radio. The attacker attempts to upload a corrupted firmware update with malicious intent. Mitigation strategies include different encryption and key management schemes, ensuring firmware integrity using cryptographically signed attestation schemes, and hardening the firmware update protocol.

Swarm hijacking

Drone swarms are exposed to additional vulnerabilities beyond those experienced by individual drones. For example, hackers could seize control of the communication link used to manage the swarm. Improvements in distributed monitoring and dynamic rerouting capabilities could improve attack detection, identification, and mitigation.
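The firmware-integrity mitigation can be sketched as follows: every image ships with a tag computed over its bytes, and a drone refuses any image whose tag does not verify. A symmetric HMAC stands in here for the asymmetric signature a production attestation scheme would use, and the key and image bytes are invented.

```python
import hashlib
import hmac

SIGNING_KEY = b"fleet-provisioning-key"  # illustrative placeholder

def sign_firmware(image: bytes) -> bytes:
    """Build server: compute the integrity tag shipped with the update."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def install(image: bytes, tag: bytes) -> bool:
    """Drone side: accept the update only if the tag verifies, so an
    image corrupted or substituted in transit is rejected."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because the tag covers every byte of the image, flipping even one bit of a corrupted update causes verification, and hence installation, to fail.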

Exploiting unused features

Drone control systems like PX4 Autopilot and ArduPilot use QGroundControl, a general-purpose ground control application, to set up and control flights in operation. One concern is that attackers could discover unused, underutilized, or obsolete software components to initiate an exploit. These features may receive less security testing as a result. For example, an attacker may discover a way to abuse a vulnerability in video streaming features that a drone might not even use in everyday operations. Research focuses on how to map the features in these systems, turn off all features not required for a given mission, and disable the underlying code. Another research direction is to develop a lightweight monitoring tool to assure the desired behavior at runtime.

The Future of Autonomous Systems Security

Today, almost all drone applications involve the management of individual drones. The next evolution of drone adoption will require finding ways to scale the command-and-control infrastructure as well as hardening the security and resilience of these systems.

Ultimately, research around securing autonomous systems, and not just individual drones, will help facilitate widespread commercial deployment. It is essential for designers of autonomous systems to adopt components that have been hardened and can be updated regularly as new problems are discovered.

Many enterprises are adopting DevSecOps practices in which security considerations are undertaken as part of the software development and deployment. In these cases, various tools are used to vet code updates against known best practices and reject updates that fail basic security tests.


Afterwards, software scanning tools, such as WhiteHat and Contrast OSS, build an inventory of libraries used by the apps, sending an alert when critical vulnerabilities are detected within active systems. Similar approaches will need to extend to improve the components used in developing and deploying autonomous systems that scan not only the software, but also the hardware and communications protocols used.

The first results of these kinds of collaborative drone security programs are just the beginning. Eventually, improvements in UAV architectures could also be used to improve the resilience of enterprise applications, autonomous warehouses, and smart cities.

In addition, better tools for modeling drone security issues will also inform the development of strategies to protect against large-scale attacks by swarms of compromised drones. The FAA suggests that the evolution of UTM systems, which provide protection for UAVs, other infrastructure, and people, should follow a spiral approach, starting with low-complexity operations and gradually building modules to support higher-complexity operational concepts and requirements.

Similarly, the evolution of tools for improving autonomous systems security will require a spiral approach as autonomous systems evolve.

References

1 Goldman Sachs. “Drones: Reporting for Work.” Accessed March 16, 2022. https://www.goldmansachs.com/insights/technology-driving-innovation/drones/

2 Gartner. “Gartner Forecasts Worldwide Hyperautomation-Enabling Software Market to Reach Nearly $600 Billion by 2022.” Accessed March 16, 2022. https://www.gartner.com/en/newsroom/press-releases/2021-04-28-gartner-forecasts-worldwide-hyperautomation-enabling-software-market-to-reach-nearly-600-billion-by-2022

3 “Advisory and Rulemaking Committees – Unmanned Aircraft Systems (UAS) Beyond Visual Line-of-Sight (BVLOS) Operations Aviation Rulemaking Committee (ARC).” Accessed March 16, 2022. https://www.faa.gov/regulations_policies/rulemaking/committees/documents/index.cfm/committee/browse/committeeID/837

4 Goldman Sachs. “Drones: Reporting for Work.” Accessed March 16, 2022. https://www.goldmansachs.com/insights/technology-driving-innovation/drones/

5 Ford, Georgina. “Counting Camels in The Desert - A Drone-Powered Success Story.” Commercial Drone Professional (blog), September 30, 2021. https://www.commercialdroneprofessional.com/counting-camels-in-the-desert-a-drone-powered-success-story/

6 Douglas, Alex. “UAE to Emerge as World Leader in Using Drones, Predicts Falcon Eye.” Commercial Drone Professional (blog), April 1, 2020. https://www.commercialdroneprofessional.com/uae-to-emerge-as-world-leader-in-using-drones-predicts-falcon-eye/

7 VentureBeat. “Nexar and Las Vegas Tackle Traffic with Digital Twins,” September 27, 2021. https://venturebeat.com/business/nexar-and-las-vegas-tackle-traffic-with-digital-twins/

8 “Region’s First Autonomous Port Truck System to Be Implemented.” Gulf Today. Accessed March 16, 2022. https://www.gulftoday.ae/business/2021/07/06/regions-first-autonomous-port--truck-system-to-be-implemented

9 Al Defaiya. “Abu Dhabi’s ASPIRE Launches Over US$3 Million MBZIRC Maritime Grand Challenge,” October 22, 2021. https://www.defaiya.com/news/Regional%20News/UAE/2021/10/22/abu-dhabi-s-aspire-launches-over-us-3-million-mbzirc-maritime-grand-challenge

10 “The Mirai Botnet Explained: How IoT Devices Almost Brought Down the Internet.” CSO Online. Accessed March 16, 2022. https://www.csoonline.com/article/3258748/the-mirai-botnet-explained-how-teen-scammers-and-cctv-cameras-almost-brought-down-the-internet.html

11 Upstream Security. “2021 Automotive Cybersecurity Report | Press Release | Upstream.” Accessed March 16, 2022. https://upstream.auto/press-releases/2021-report/

12 Kushner, David. “The Real Story of Stuxnet.” IEEE Spectrum 50, no. 3 (2013): 48–53. doi:10.1109/MSPEC.2013.6471059. https://spectrum.ieee.org/the-real-story-of-stuxnet

13 “$500 Million Botnet Citadel Attacked by Microsoft and the FBI.” The Independent. Accessed March 16, 2022. https://www.independent.co.uk/tech/500-million-botnet-citadel-attacked-by-microsoft-and-the-fbi-8647594.html

14 Higgins, Kelly Jackson. “Forrester Pushes ‘Zero Trust’ Model for Security.” Dark Reading, September 17, 2010. https://www.darkreading.com/perimeter/forrester-pushes-zero-trust-model-for-security

15 BAE Systems | Cyber Security & Intelligence. “Security Challenges for Connected and Autonomous Vehicles.” Accessed March 16, 2022. https://www.baesystems.com/en/cybersecurity/feature/security-challenges-for-connected-and-autonomous-vehicles

16 Yasin, Muhammad, and Ozgur Sinanoglu. “Evolution of logic locking.” In 2017 IFIP/IEEE International Conference on Very Large-Scale Integration (VLSI-SoC), pp. 1-6. IEEE, 2017.

17 Rashid, Fahmida Y. “The rise of confidential computing: Big tech companies are adopting a new security model to protect data while it’s in use-[news].” IEEE Spectrum 57, no. 6 (2020): 8-9. https://spectrum.ieee.org/what-is-confidential-computing



This sponsored article is brought to you by Technology Innovation Institute.

Autonomous systems sit at the intersection of AI, IoT, cloud architectures, and agile software development practices. Various streams of these systems are becoming prominent, such as unmanned drones, self-driving cars, automated warehouses, and managing capabilities in smart cities. The drone industry alone was estimated at US $100 billion in 2020, and autonomous systems are already driving significantly more value across other domains. 1

But surprisingly little attention has been paid to securing autonomous systems as systems composed of multiple automated components. Various patchwork efforts have focused on individual components. In tandem, cloud services are starting to adopt a Zero Trust approach for securing the chain of trust that might traverse multiple systems.

With that, it has become imperative to extend a Zero Trust architecture to systems of autonomous systems to protect not only drones but also industrial equipment, supply chain automation, and smart cities.

In the near future, autonomous systems will bring a new level of digital transformation to the industry worth trillions of dollars, including automating transportation, traffic management, municipal services, law enforcement, shipping, port management, construction, agriculture, and more.

Autonomous enterprise systems are further enriching these more physical aspects of autonomous systems. Gartner coined the term hyperautomation to describe tools for scaling automation using software robots, a market valued at $534 billion in 2021. 2 Despite the importance of autonomous systems, surprisingly little research has focused on securing autonomous systems as a collection of systems.

This is not to say that researchers are ignoring security — after all, security infrastructure and tools are a multi-billion dollar industry. But when it comes to securing physical components, much of the focus has been on securing individual elements such as data, software, hardware, and communications links rather than the behavior of an ensemble of autonomous systems.


Similarly, researchers are just starting to scratch the surface of protecting against swarms of autonomous things guided with malicious intent. Just last year, a half dozen precisely targeted malicious drones managed to slow oil production in Saudi Arabia for days, and more recently, several low-cost drones caused significant damage to oil tankers in the UAE. This illustrates the importance of detecting hostile drones entering secure spaces.

This kind of security is just the beginning of what will be required to move towards the larger-scale deployment of drones envisioned by the U.S. Federal Aviation Administration’s beyond visual line of sight (BVLOS) regulations. 3 These regulations promise to open immense commercial opportunities to improve industrial inspection, shipping, and remote monitoring.

However, wider scale deployment will require a more systemic approach to protect against the impact of thousands of low-cost autonomous drones working in concert.


Autonomous Security for Smart Cities

Autonomous security is also a pressing issue for industrial systems and smart city use cases. Hackers are becoming better at coordinating millions of IoT devices to launch devastating distributed denial-of-service attacks on computer servers.

Similar tactics that leveraged physical and mobile autonomous things could extend the blast radius beyond IT infrastructure to destroy physical infrastructure like factories, pipelines, electric grids, or worse. A more comprehensive approach is required to protect the security and resilience of autonomous systems and protect against cyber-physical attacks that leverage autonomous systems.

The Technology Innovation Institute (TII)’s Secure Systems Research Centre (SSRC) is leading one promising approach to building an autonomous security testbed that explores the interplay between how hardware, software, and communications systems can be exploited so that they can be hardened.

The early phases of this work are focused on protecting scalable swarms of unmanned aerial vehicles controlled by the cloud. The long-term goal is to create a framework for understanding and defending against autonomous security risks across all types of infrastructure, including fleets of cars, automated warehouses, construction sites, farms, and smart cities.

Over the years, basic autonomous capabilities have grown into almost every aspect of our physical infrastructure, from automated braking in individual cars to orchestrating power flow across nationwide electrical grids with precision. Autonomous systems are already demonstrating tremendous value today, and we are just scratching the surface. For example, Goldman Sachs estimated that unmanned autonomous vehicles (UAV) had grown into a $100 billion industry in 2021. 4

Military applications accounted for about 70 percent of this spending. However, commercial applications were also substantial in construction, agriculture, insurance claims, offshore oil, gas and refining, pipelines, utilities, and mining. For example, the construction industry uses drones to automatically capture footage of construction sites before, during, and after the construction process.

Drones carrying lidar and high-resolution cameras can automatically generate 3D models in minutes that would have previously taken humans days or weeks. This makes it practical to keep tabs on buildings as they are being built, track progress, and identify mistakes when they are cheaper to fix. After construction, drones can also survey physical infrastructure like bridges to identify cracks and other problems before the whole structure suffers a bigger problem.

Drones are also improving the planning and management of large farms. For example, drones with spectral imaging cameras can quickly identify nutrient deficiencies, pest outbreaks, and drought, allowing farmers to address them more precisely and cheaply. In the UAE, drones have also helped map the entire country’s agricultural resources in a matter of days, which would not have been practical using physical surveys alone. 5 In another effort, UAE teams used drones to plant 6.25 million trees in only two days. 6

Scaling Secure Autonomous Systems

It is easy to get caught up in autonomous systems as a single self-driving car or individual drone. However, the real promise of autonomous systems comes when autonomous capabilities are simultaneously scaled to improve the control of individual things, the orchestration of a collection of things, and the understanding of things at scale.

The individual level can be seen in the evolution from cruise control to automated braking to fully self-driving cars. The orchestration level entails the evolution from synchronized traffic lights to dynamically adjusted traffic lights to advanced mapping services that route cars around traffic jams. Autonomous understanding systems range from traffic-monitoring cameras to crowdsourced dashcam video fed into dynamically updated digital twins for improving overall traffic. 7

These same three factors of control, orchestration, and understanding play out across various use cases. A warehouse robot might reduce the need for staff. An autonomous warehouse management system could optimize the scheduling and staging of items in the warehouse. And an autonomous understanding system could help reengineer the warehouse design to further increase performance in the same space.

This combination of autonomous control, autonomous orchestration, and autonomous understanding is already showing some promise in the UAE. For example, one pilot project has created an autonomous port truck system that automates the process of shifting shipping containers from boats to trucks. 8

Gartner refers to the simultaneous evolution of control, orchestration, and understanding in IT systems as hyperautomation. In this case, enterprises use individual robotic process automation (RPA) software robots (called bots) to automate a collection of human tasks. Orchestration engines help organize the flow of work across multiple bots.

Then process and task mining bots analyze enterprise applications or even watch over the shoulders of individuals to find further opportunities for improvement. Researchers are just starting to explore how similar practices may be extended to include autonomous vehicles.

The Next Challenge in Systems Security

That is one of the reasons ATRC’s ASPIRE chose to focus on autonomous swarm coordination as part of its next grand challenge project. 9 ASPIRE is tasked with hosting grand challenge competitions loosely organized like the U.S. DARPA challenges that spearheaded research on autonomous vehicles.

The upcoming challenge tasks researchers with finding the best way to orchestrate a swarm of drones to search for and retrieve objects, hidden on ships, that are too heavy for any individual drone.

The Need for End-to-End Security and Resilience

Enterprises and security researchers are just starting to struggle with protecting individual autonomous things, much less swarms. A new security approach is required for these types of swarms to scale for real-world applications.

The early generation of IoT devices was rushed to market with only basic consideration of how the devices might be protected against hackers or securely updated against new threats. Many of these early devices cannot be updated after the fact. Consequently, they are a popular target for hackers eager to create large-scale botnets, such as the Mirai botnet, for launching distributed denial-of-service attacks. 10

This has given rise to a secondary industry of IoT security gateways designed to detect and block malicious activity outside of poorly secured appliances like lighting controllers, crockpots, TV set-top boxes, and cameras. The security posture of the first connected cars is better, but there are still glaring vulnerabilities and gaps that need to be addressed. Some of the vulnerabilities highlighted in Upstream’s 2021 Automotive Cybersecurity Report 11 include:

  • Hackers found 19 vulnerabilities in a Mercedes-Benz E-class car that allowed them to remotely control the vehicle, open doors, and start the engine.
  • Hackers took control of a car’s OEM corporate network by reverse engineering a car’s transmission control unit to infiltrate the network.
  • Over 300 vulnerabilities were discovered in 40 popular electronic control units used in cars.
  • Hackers managed to gain control over Tesla’s entire connected car fleet by exploiting a vulnerability in the communications protocol.

Modern cars allow software to be updated after the fact but generally require consumers to bring the car to a shop for an update. Only a few leaders, like Tesla, have mastered the ability to securely update software at scale.

Building secure systems requires addressing hardware, software, and protocols, as well as their interplay. Hardware security measures need to protect against attacks in which a hacker with physical access can modify a system to compromise security or cause damage.

For example, the Stuxnet 12 attack compromised controllers in an Iranian uranium-enrichment facility, feeding them miscalibrated timing data that confused the control systems. The result was that the controllers drove hundreds of expensive centrifuges so fast that they tore themselves apart.

There are a variety of ways hackers could launch remote hardware-directed attacks on UAVs. For example, focused beams of sound could confuse the inertial guidance unit used to control a drone. Directed EMF beams might cause a short circuit on sensitive electronics, and lasers or bright lights might confuse or destroy camera sensors.

Vulnerabilities in software systems allow hackers to spy on or take remote control of systems to launch further attacks. Early examples in IT systems included malware like the Zeus Trojan that allowed hackers to spy on banking interactions to capture credentials and steal $500 million. 13

In some cases, hackers are finding ways to infiltrate software supply chains to plant targeted malware. This was how hackers managed to burrow into thousands of government, banking, and enterprise systems in the 2020 SolarWinds breach.


The Zero Trust Security Paradigm

The term Zero Trust model was coined by Forrester Research in 2010 to denote a new paradigm for securing distributed systems. 14

Systems have traditionally been secured by hardening a physical perimeter. But in the world of cloud computing, the perimeter is more nebulous. Zero Trust security connotes the idea of always authenticating and verifying every access in order to secure a more flexible perimeter. The Zero Trust paradigm allows security teams to plan for the possibility that vulnerabilities may exist throughout a chain of interactions among multiple systems, such as across several cloud services, data processes, storage services, and networks.

The fundamental concept is to never trust and always verify the provenance of each request. Another basic principle is to assume that a breach has already occurred, making it essential to limit the blast radius of any breach.
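A minimal sketch of these two principles follows, with hypothetical service names, and HMAC standing in for the public-key signatures a real deployment would use. Each request is bound to a per-service key, so tampering is rejected, and a leaked key exposes only one service, limiting the blast radius:

```python
import hashlib
import hmac
import json

# Hypothetical per-service keys: compromising one service's key does not
# expose the others, which limits the blast radius of a breach.
SERVICE_KEYS = {"telemetry": b"key-telemetry", "control": b"key-control"}

def sign_request(service: str, payload: dict) -> dict:
    """Attach an HMAC tag binding the payload to one service's key."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SERVICE_KEYS[service], body, hashlib.sha256).hexdigest()
    return {"service": service, "payload": payload, "tag": tag}

def verify_request(request: dict) -> bool:
    """Never trust: re-derive the tag and compare in constant time."""
    key = SERVICE_KEYS.get(request["service"])
    if key is None:
        return False
    body = json.dumps(request["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request["tag"])

req = sign_request("telemetry", {"drone": 7, "battery": 81})
assert verify_request(req)        # a well-formed request verifies
req["payload"]["battery"] = 100   # tampering in transit...
assert not verify_request(req)    # ...is detected and rejected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive string comparison would leak timing information an attacker could exploit.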

Autonomous systems extend automated processes across a wider variety and physical range of hardware, communications protocols, and control and orchestration mechanisms. Each of these brings its own attack surface. Thus, security teams need to minimize the impact that a breach at one level could have on other systems.


Examples include attacks on control servers, communication networks, embedded system applications, physical devices, software supply chains, and silicon supply chains.

Autonomous system security needs to be built across multiple independent security walls so that if one key or system is breached, the integrity of the whole is protected. Each system should be designed to fail safely and securely so as to minimize the impact on adjacent components.

This can also make it harder for hackers to escalate an attack on a low-level system into attacks on more critical systems, as with the recent Log4j attacks. For example, autonomous systems like autonomous drones need attestation schemes to ensure that only authorized software runs on the drones. An attestation scheme uses cryptographically signed software updates to ensure that only valid code can run on remote systems.

This prevents hackers from reprogramming a drone by simulating a legitimate program update communication or replacing legitimate updates with a bogus software upgrade staged at the command center.
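A simplified version of such an update check might look like the following sketch. The vendor key, firmware bytes, and version counter are all hypothetical, and HMAC stands in for the vendor's asymmetric signature; the monotonic version check is an assumed anti-rollback measure:

```python
import hashlib
import hmac

# Hypothetical vendor key; a real scheme would use an asymmetric key pair,
# with only the public half stored on the drone.
VENDOR_KEY = b"hypothetical-vendor-signing-key"

def sign_firmware(image: bytes, version: int) -> bytes:
    """Sign the image digest together with its version number."""
    msg = version.to_bytes(4, "big") + hashlib.sha256(image).digest()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()

def accept_update(image: bytes, version: int, sig: bytes,
                  current_version: int) -> bool:
    """Accept only authentic images that move the version forward."""
    msg = version.to_bytes(4, "big") + hashlib.sha256(image).digest()
    expected = hmac.new(VENDOR_KEY, msg, hashlib.sha256).digest()
    authentic = hmac.compare_digest(expected, sig)
    return authentic and version > current_version  # reject rollbacks

fw = b"flight-controller firmware v2"
sig = sign_firmware(fw, 2)
assert accept_update(fw, 2, sig, current_version=1)       # valid update
assert not accept_update(b"malicious image", 2, sig, 1)   # bogus image
assert not accept_update(fw, 2, sig, current_version=2)   # replayed update
```

Signing the version alongside the image digest is what blocks the replay scenario described above: a captured legitimate update cannot be re-sent once the drone has moved past that version.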

Vulnerabilities in communication protocols could allow hackers to spy on drone activity or simulate control signals to take control of a drone. Such attacks could happen at any level of the communication stack, from hacking into communications within the cloud, the wireless signals between the cloud and a drone, or between multiple drones.

In some cases, hackers may be able to attack systems by mimicking communications within a drone or autonomous car. For example, researchers have found ways to listen to and simulate the unprotected wireless communications involved in tire pressure monitoring. 15 This allowed them to trick the car into indicating that a good tire had a flat, which might cause a vehicle to stop.

Zero Trust Applied to Chip Design

Trust is essential in computing systems, and arguably more so for the chips at the heart of these systems. Unfortunately, trust is also increasingly rare, because many chip-design companies outsource critical steps (fabrication, testing, assembly) to third-party companies.

Such a distributed chip supply chain is financially appealing, but it is also inherently difficult to trust. This distributed paradigm is susceptible to threats such as reverse engineering of chip designs, piracy, overproduction, and tampering.

A rogue element at the fabrication facility with full access to the chip design blueprint can reverse engineer the functionality of the chip or its critical components; copy and pirate hardware design intellectual property; run extra fabrication shifts to produce more chips than the design house requested and sell them on the gray market; or insert difficult-to-catch Trojans that serve a malicious purpose (e.g., leaking sensitive information) into the chips during fabrication.

Software supply chain attacks have been making the news lately. It’s also essential to protect against hardware supply chain attacks in which malicious actors insert backdoors or hardware Trojans into chips.

Emerging chip-to-chip authentication techniques could help mitigate such issues. The core idea is to extend zero-trust concepts applied to network security to chip-to-chip communication to mitigate the impact of attacks on the physical supply chain or malicious firmware updates.

This kind of approach could involve combining public-key infrastructure, trusted computing, and secure memory management to strike the right balance between security and performance.

For example, in a drone system, designers might have a flight controller that connects many peripheral chips for motor control and sensors of various kinds. Today, those chips are not authenticated in real time when the system boots; a chip is assumed to be legitimate simply because it boots up in a particular manner. We need a Zero Trust approach in which the boot processor cryptographically verifies that these peripheral chips come from legitimate manufacturers and are running legitimate software before allowing them to connect to the main CPU on the flight controller.
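One way to picture this boot-time verification is a challenge-response exchange, sketched below with invented chip IDs and pre-shared secrets; a real design would use device certificates and public-key cryptography rather than HMAC:

```python
import hashlib
import hmac
import secrets

# Hypothetical per-peripheral secrets provisioned at manufacture.
PROVISIONED = {"imu-01": b"imu-secret", "esc-04": b"esc-secret"}

class Peripheral:
    """Stand-in for a peripheral chip that can answer challenges."""
    def __init__(self, chip_id: str, key: bytes):
        self.chip_id, self._key = chip_id, key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def boot_verify(chip: Peripheral) -> bool:
    """Boot processor: challenge each chip before enumerating it."""
    key = PROVISIONED.get(chip.chip_id)
    if key is None:
        return False                     # unknown chip: refuse to connect
    challenge = secrets.token_bytes(16)  # fresh nonce defeats replay
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, chip.respond(challenge))

assert boot_verify(Peripheral("imu-01", b"imu-secret"))            # genuine
assert not boot_verify(Peripheral("imu-01", b"cloned-wrong-key"))  # counterfeit
```

Because the challenge is a fresh random nonce each boot, a counterfeit chip cannot simply replay a response it recorded from a genuine part.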

This chain of trust extends all the way from this level up to the applications running on the flight controller. Researchers have been developing a technique called logic locking 16 that gives the design house control at stages of the chip supply chain where it normally has almost none.

By using a logic locking tool or technique, a chip designer can insert additional logic into the design to introduce a locking mechanism that expects a secret unlock key, a binary vector (a combination of 0s and 1s). The secret key is known only to the design house and is loaded onto the chip by a trusted party (e.g., the design house itself) after fabrication.

This is a one-time load operation in which the key is written into the chip. Only then does a fabricated chip become “unlocked,” and thus functional. Logic locking serves multiple purposes:

• First, the design house can ensure that all of its fabricated chips are deployed in the market under its control; any chip the fab overproduces will remain locked and nonfunctional, as it will be missing the unlock key.

• Second, the blueprint available to the fab does not reveal the full functionality of the chip and its blocks, as the secret key is unknown to untrusted entities. Any attempt to reverse engineer the chip or block functionality is thus hindered.
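As a toy model of the locking idea above (not any particular tool's implementation), the function below stands in for a locked netlist: XOR key gates on the output wires mean that only the secret key yields the true function, and any other key silently corrupts the result:

```python
# Secret unlock key, known only to the design house.
SECRET_KEY = (1, 0, 1)

def locked_adder(a: int, b: int, key) -> int:
    """Toy 3-bit adder locked with XOR key gates.

    Each key gate inverts one output wire unless the matching key bit
    cancels the inversion, so only the secret key yields the true sum.
    """
    s = (a + b) & 0b111
    for i, k in enumerate(key):
        s ^= (k ^ SECRET_KEY[i]) << i  # a wrong key bit flips this wire
    return s

assert locked_adder(2, 3, SECRET_KEY) == 5   # unlocked chip computes correctly
assert locked_adder(2, 3, (0, 0, 0)) != 5    # overproduced chip stays useless
```

An overproduced chip without the key, or a reverse engineer reading the blueprint, sees only the composite logic: the gates are visible, but which ones invert depends on key bits that never appear in the netlist.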

Without the functionality of the chip fully understood, the insertion of meaningful Trojans in the foundry is also thwarted.

Modern system-on-chip designs can accelerate product development for performant and low-cost chip functionality. However, they also carry risks from the use of untrusted IP.

Existing testing techniques like fuzzing and penetration testing depend on the judgment of experts. They also tend to be performed late in the design cycle, when it can be costly and challenging to make significant changes once problems are found. Approaches like concolic testing (“concrete” plus “symbolic”), primarily used in software security testing today, could be extended to chip circuit design to detect problems much earlier in the design cycle.

It’s also vital to extend confidential computing security 17 to hypervisors running on RISC-V processors that are increasingly being adopted in autonomous systems. The core idea is to isolate virtual machines from the virtual machine manager and other non-trusted software components available on the platform. This will require a combination of VM-to-VM authentication and encrypted communication.

One challenge is that RISC-V processors do not currently provide hardware support for encrypted communication channels between VMs. Implementing this capability in software adds additional overhead and latency.

One strategy is to create Zero Trust hardware building blocks, such as IOMMUs, IOPMPs, and ISA extensions, to alleviate this overhead. Trusted execution environments (TEEs) were developed to provide a higher level of security for applications by placing an encryption perimeter around program execution on the hardware, but they were built primarily for applications confined within a CPU.

Autonomous Systems Infrastructure

Autonomous systems infrastructure needs to combine a variety of embedded computing platforms such as drone navigation systems, CPU-based architectures, and other types of dedicated hardware. Existing approaches are also fixed at design time, which forces systems to rely on untrusted software to use peripherals from within TEEs.

New approaches for composite enclaves will be required to extend TEEs to more flexible designs. It’s also essential to develop new tools for detecting and responding to unknown and unexpected changes caused by novel attack techniques. A trust verification infrastructure could extend traditional API observability approaches to hardware through a combination of monitoring, logging, and tracing. These kinds of actions allow the construction of continuous verification mechanisms for anomaly and intrusion detection.

Offline profiling techniques could generate trust profiles that describe how the hardware is supposed to operate. During operation, ongoing logging could ensure that the behavior adheres to the trust profile.
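A minimal sketch of this profile-then-monitor loop follows; the channel names, rates, and the four-sigma threshold are invented for illustration, not drawn from any particular platform:

```python
from statistics import mean, pstdev

# Offline profiling: per-channel message rates (Hz) observed during
# known-good runs of a hypothetical drone platform.
baseline = {"gps": [10, 11, 9, 10, 10], "motor_cmd": [50, 52, 49, 51, 50]}

# The trust profile stores the expected mean and spread per channel.
profile = {ch: (mean(v), pstdev(v)) for ch, v in baseline.items()}

def conforms(channel: str, rate: float, n_sigma: float = 4.0) -> bool:
    """Runtime check: does the observed rate adhere to the trust profile?"""
    mu, sigma = profile[channel]
    return abs(rate - mu) <= n_sigma * max(sigma, 1e-9)

assert conforms("gps", 10.5)            # normal traffic adheres to the profile
assert not conforms("motor_cmd", 500)   # a flood of motor commands is flagged
```

Real systems would profile far richer behavior than message rates, but the shape is the same: a model built offline, checked continuously at runtime.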

However, this needs to be constructed to minimize the risk of logging tools being leveraged as part of a side-channel attack. We also need to explore new hardware capabilities and advanced software techniques to compartmentalize software stacks across multiple levels.

One of the most promising approaches is CHERI (Capability Hardware Enhanced RISC Instructions), which is being explored by DARPA, Google, SRI International, and the University of Cambridge. 18 Further work is required to extend this approach to improve fine-grained compartmentalization at the operating system level. This could combine new middleware, OS libraries, unikernels, and various mechanisms to grant and revoke authorization in order to enforce compartmental constraints.

This will require striking the right balance between different degrees of flexibility in configuration and in determining the appropriate privileges. We are still in the early days of building large-scale autonomous systems, but as we scale them up, new considerations like these will be required to extend Zero Trust security to embedded systems, autonomous systems, and systems of autonomous systems.

Enterprises and researchers are exploring ways to scale systems of individual autonomous systems, with the most promising research currently focused on scaling systems of autonomous drones. In the long run, everyone wants to get to autonomous cars and factories, and there is a lot of experimentation with unmanned ground vehicles, although delivery vehicles still tend to require a human driver or assistant.

But UAVs are already delivering value today, and regulators are starting to open the skies for more ambitious applications. In early 2021, the FAA granted American Robotics the first license to fly drones beyond the visual line of sight (BVLOS). Around the world, enterprises are working with regulators to develop Unmanned Traffic Management (UTM) systems. Major aerospace companies and innovative start-ups are working with regulators to show how various combinations of AI, advanced mapping, vehicle-to-vehicle communications, and encrypted communication and control could facilitate safe drone management at scale.

In the United States, Boeing has partnered with SparkCognition on SkyGrid. Airbus is leading efforts to promote SESAR for the EU. Guardian Angel recently worked with U.K. regulators on Operation Zenith to demonstrate how a series of on-airfield tasks could be performed without endangering or disrupting airport operations. These are essential efforts and are a necessary first step in safely scaling fleets of trusted drones.

However, UTM systems generally start with the assumption that all drones are trusted. More work needs to be done to understand and analyze how these systems can be compromised, and hence how they can be trusted in the first place.

Researchers around the world are exploring how individual components of these systems can be compromised and hardened. For example, researchers in Germany and Switzerland have experimented with implementing quantum-safe cryptographic algorithms to protect drone communications. They argue that long-running drones will also need to support crypto-agility that allows dynamic updating of security algorithms in response to the discovery of new vulnerabilities. 19

These researchers also explored how to implement remote attestation schemes that protect drones from software tampering. Other researchers have explored drone cloud control mechanisms. For example, a team of researchers in Brazil has developed the Cloud-SPHERE platform as one approach for integrating UAVs into IoT and Cloud Computing paradigms.


Bringing Security to the Swarm

The next phase of autonomous drones will require developing architecture to scale drone control and security to support autonomous swarms. For example, a collection of low-cost drones can be orchestrated into drone swarms controlled by the cloud to explore new use cases like search and rescue, disinfecting public spaces, and coordinating tasks such as lifting heavy equipment beyond the capacity of any one drone.

One big shift will be the need for more distributed control mechanisms. Architectures that attempt to control each drone or autonomous system directly will run into scalability challenges as the number of individuals in the swarm grows.

One approach pursued by TII’s Secure Systems Research Centre (SSRC) is the development of a dynamic hierarchy composed of drones with different capabilities for control and task execution. Similar organizations of drones have been described before; our focus is on security and resilience within such a hierarchy.

In this scheme, a tier of Fog Drones acts as intermediaries between less sophisticated Edge Drones and the cloud. Fog Drones can also take on many tasks, such as summarizing input from many Edge Drones, to reduce the amount of communication required with the cloud and between drones.

Cross-disciplinary Research

The Secure Systems Research Centre (SSRC), part of the Technology Innovation Institute (TII), is working with a cross-disciplinary team of researchers at leading research institutions worldwide to develop a comprehensive Zero Trust autonomous security testbed to explore security implications spanning hardware, software, and communications at the systems level.

SSRC partners include Georgia Institute of Technology, Purdue University, University of Applied Sciences and Arts of Southern Switzerland, Tampere University, University of Turku, Khalifa University, Imperial College, University of Manchester, TU Graz, University of New South Wales, University of Modena and Reggio Emilia, University of Bologna, Sapienza University of Rome, University of Milan, University of Minho, University of Waterloo, McMaster University, NYU Abu Dhabi, and UT Dallas.

This can also reduce the amount of processing required on each Edge Drone. This work is also exploring how mesh networks can further optimize and secure communications between drones operating in constrained situations such as a cave, fallen building, or hostile environment.

Cross-disciplinary research teams at SSRC are exploring ways to synthesize lessons learned from physical testbeds into useful and actionable security models. Ultimately, these security models could help autonomous teams identify and improve autonomous systems development that spans drone hardware, software implementations, and communications choices earlier in the release cycle.

Several testbeds have also been developed at Masdar in the UAE and Purdue. One goal is to develop machine learning methods at both the drone and cloud levels to detect security issues and enable resilience.

Another goal is to develop tools for testing these systems in augmented reality environments for urban settings. The teams are also exploring ways to improve the ability to capture security-related data into digital twins that reflect the security implications of drones. This will help automate the ability to reflect new security vulnerabilities discovered in the real world in the models shared with researchers.

These researchers are also finding ways to harden open-source hardware, software, and communication protocols for developing and deploying drone systems. This approach opens the architecture to a wide range of security and drone researchers to find vulnerabilities sooner.

This open-source approach could also benefit from the rapid innovation that the open-source robotics community is already seeing. Some of the underpinnings of the current platform include the PX4 advanced autopilot, NuttX real-time operating systems, and the Robot Operating System 2 (ROS2).

The team has also developed and implemented an open-source RISC-V processor and system on chip with specialized security features baked in.

Exploring Security Scenarios

The various teams at TII’s Secure Systems Research Centre are currently exploring the security implications of different scenarios, and these explorations are informing best practices for hardening against these kinds of issues.

Here are examples of some of these scenarios:

Hijacking a high-value cargo

A drone is attempting to transport an organ between hospitals. The attacker’s objective is to hijack the drone and force it to land in another location to sell the organ in the underground economy. Possible attack strategies include spoofing the sensors, jamming GPS or optical sensors, injecting fake visual location data, or a complete takeover using the control protocol. The data from successful attacks will inform future designs or help train machine learning algorithms.

Network resiliency

A wide area swarm is deployed for long-term surveillance, such as protecting a nature reserve or border. The swarm uses a mesh network protocol to communicate, and attackers attempt to jam the network to temporarily halt communication between the swarm and the control center. As a result, a distributed optimization reconfiguration scheme is designed to allow the swarm to reconfigure itself to re-establish contact. This scenario could also help improve strategies for slowing the propagation of malicious code or data among vehicles in the swarm. For example, regular communication between the control center and drones could help identify individual drones that may have been compromised, and communications could be routed around these.
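The rerouting idea in this scenario can be sketched as a path search that simply excludes flagged nodes; the mesh topology and drone names below are invented, and a real swarm would use a distributed protocol rather than this centralized search:

```python
from collections import deque

def route(links, src, dst, compromised=frozenset()):
    """Breadth-first search for a relay path that skips flagged drones."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in compromised:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # swarm partitioned: no clean path remains

# Hypothetical mesh: control center relays through drones d1..d4.
mesh = {"ctrl": ["d1", "d2"], "d1": ["d3"], "d2": ["d3"], "d3": ["d4"]}
assert route(mesh, "ctrl", "d4") == ["ctrl", "d1", "d3", "d4"]
# After d1 is flagged as compromised, traffic is routed around it.
assert route(mesh, "ctrl", "d4", {"d1"}) == ["ctrl", "d2", "d3", "d4"]
```

The `None` case corresponds to the jamming scenario above: when no clean path remains, the swarm must reconfigure its topology to re-establish contact.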

Perimeter defense against stealthy UAV

A ground-based monitoring system uses radar, lidar, and cameras to protect a building from a hostile vehicle while disregarding other delivery vehicles in the area. The attacker’s objective is to disguise an attack drone as a delivery drone to breach the defended area. One attack strategy would be to use generalized adversarial networks to mimic the behavior of legitimate drones. The team will work on secure learning algorithms that robustly identify these fake drones.

Corrupt firmware update

New capabilities are delivered to the swarm via firmware updates conveyed to each drone by radio. The attacker attempts to upload a corrupted firmware update with malicious intent. Mitigation strategies include different encryption and key management schemes, ensuring firmware integrity using cryptographically signed attestation schemes, and hardening the firmware update protocol.

Swarm hijacking

Drone swarms are exposed to additional vulnerabilities beyond those experienced by individual drones. For example, hackers could seize control of the communication link used to manage the swarm. Improvements in distributed monitoring and dynamic rerouting capabilities could improve attack detection, identification, and mitigation.

Exploiting unused features

Drone control systems like PX4 Autopilot and ArduPilot use QGroundControl, a general-purpose ground control application, to set up and control flights. One concern is that attackers could discover unused, underutilized, or obsolete software components to initiate an exploit; such features may receive less security testing. For example, an attacker may discover a way to abuse a vulnerability in video streaming features that a drone might not even use in everyday operations. Research focuses on how to map the features in these systems and effectively turn off all features, and disable the underlying code, that are not required for a given mission. Another research direction is to develop a lightweight monitoring tool to assure the desired behavior at runtime.

The Future of Autonomous Systems Security

Today, almost all drone applications involve the management of individual drones. The next evolution of drone adoption will require finding ways both to scale the command-and-control infrastructure and to harden the security and resilience of these systems.

Ultimately, research around securing autonomous systems, and not just individual drones, will help facilitate widespread commercial deployment. It is essential for designers of autonomous systems to adopt components that have been hardened and can be updated regularly as new problems are discovered.

Many enterprises are adopting DevSecOps practices in which security considerations are undertaken as part of the software development and deployment. In these cases, various tools are used to vet code updates against known best practices and reject updates that fail basic security tests.


Afterward, software scanning tools, such as WhiteHat and Contrast OSS, build an inventory of the libraries used by the apps, sending an alert when critical vulnerabilities are detected within active systems. Similar approaches will need to be extended to the components used in developing and deploying autonomous systems, scanning not only the software but also the hardware and communications protocols used.
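The core of such an inventory scan can be sketched as a cross-reference of dependencies against an advisory feed; the app names, library names, versions, and advisory ID below are all made up, standing in for databases like the NVD or OSV:

```python
# Hypothetical app inventory and advisory feed; real scanners query
# vulnerability databases such as the NVD or OSV.
inventory = {"ground-station": {"libfoo": "1.2.0", "tls-lib": "3.0.1"}}
advisories = [
    {"lib": "tls-lib", "bad_versions": {"3.0.0", "3.0.1"}, "id": "ADV-0001"},
]

def scan(inventory: dict, advisories: list) -> list:
    """Cross-reference each app's dependencies against known advisories."""
    alerts = []
    for app, deps in inventory.items():
        for adv in advisories:
            if deps.get(adv["lib"]) in adv["bad_versions"]:
                alerts.append((app, adv["lib"], adv["id"]))
    return alerts

assert scan(inventory, advisories) == [("ground-station", "tls-lib", "ADV-0001")]
```

Extending this pattern to autonomous systems would mean inventory entries for hardware revisions and protocol versions as well, not just software libraries.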

The first results of these kinds of collaborative drone security programs are just the beginning. Eventually, improvements in UAV architectures could also be used to improve the resilience of enterprise applications, autonomous warehouses, and smart cities.

In addition, better tools for modeling drone security issues will also inform the development of strategies to protect against large-scale attacks by swarms of compromised drones. The FAA suggests that the evolution of UTM systems, which provide protection for UAVs, other infrastructure, and people, should follow a spiral approach, starting with low-complexity operations and gradually building modules to support higher-complexity operational concepts and requirements.

Similarly, the evolution of tools for improving autonomous systems security will require a spiral approach as autonomous systems evolve.

References

1 Goldman Sachs. “Drones: Reporting for Work.” Accessed March 16, 2022. https://www.goldmansachs.com/insights/technology-driving-innovation/drones/

2 Gartner. “Gartner Forecasts Worldwide Hyperautomation-Enabling Software Market to Reach Nearly $600 Billion by 2022.” Accessed March 16, 2022. https://www.gartner.com/en/newsroom/press-releases/2021-04-28-gartner-forecasts-worldwide-hyperautomation-enabling-software-market-to-reach-nearly-600-billion-by-2022

3 “Advisory and Rulemaking Committees – Unmanned Aircraft Systems (UAS) Beyond Visual Line-of- Sight (BVLOS) Operations Aviation Rulemaking Committee (ARC).” Template. Accessed March 16, 2022. https://www.faa.gov/regulations_policies/rulemaking/committees/documents/index.cfm/committee/browse/committeeID/837

4 Goldman Sachs. “Drones: Reporting for Work.” Accessed March 16, 2022. https://www.goldmansachs.com/insights/technology-driving-innovation/drones/

5 Ford, Georgina. “Counting Camels in The Desert - A Drone-Powered Success Story.” Commercial Drone Professional (blog), September 30, 2021. https://www.commercialdroneprofessional.com/counting-camels-in-the-desert-a-drone-powered-success-story/

6 Douglas, Alex. “UAE to Emerge as World Leader in Using Drones, Predicts Falcon Eye.” Commercial Drone Professional (blog), April 1, 2020. https://www.commercialdroneprofessional.com/uae-to-emerge-as-world-leader-in-using-drones-predicts-falcon-eye/

7 VentureBeat. “Nexar and Las Vegas Tackle Traffic with Digital Twins,” September 27, 2021. https://venturebeat.com/business/nexar-and-las-vegas-tackle-traffic-with-digital-twins/

8 “Region’s First Autonomous Port Truck System to Be Implemented GulfToday.” Accessed March 16, 2022. https://www.gulftoday.ae/business/2021/07/06/regions-first-autonomous-port--truck-system-to-be-implemented

9 Defaiya, Al. “Al Defaiya | Abu Dhabi’s ASPIRE Launches Over US$3 Million MBZIRC Maritime Grand Challenge,” October 22, 2021. https://www.defaiya.com/news/Regional%20News/UAE/2021/10/22/abu-dhabi-s-aspire-launches-over-us-3-million-mbzirc-maritime-grand-challenge

10 “The Mirai Botnet Explained: How IoT Devices Almost Brought down the Internet | CSO Online.” Accessed March 16, 2022. https://www.csoonline.com/article/3258748/the-mirai-botnet-explained-how-teen-scammers-and-cctv-cameras-almost-brought-down-the-internet.html

11 Upstream Security. “2021 Automotive Cybersecurity Report | Press Release | Upstream.” Accessed March 16, 2022. https://upstream.auto/press-releases/2021-report/

12 Kushner, David. “The Real Story of Stuxnet.” IEEE Spectrum 50, no. 3 (2013): 48–53. https://spectrum.ieee.org/the-real-story-of-stuxnet

13 “$500 Million Botnet Citadel Attacked by Microsoft and the FBI | The Independent | The Independent.” Accessed March 16, 2022. https://www.independent.co.uk/tech/500-million-botnet-citadel-attacked-by-microsoft-and-the-fbi-8647594.html

14 Higgins, Kelly Jackson. “Forrester Pushes ‘Zero Trust’ Model For Security.” Dark Reading, September 17, 2010. https://www.darkreading.com/perimeter/forrester-pushes-zero-trust-model-for-security

15 BAE Systems | Cyber Security & Intelligence. “Security Challenges for Connected and Autonomous Vehicles.” Accessed March 16, 2022. https://www.baesystems.com/en/cybersecurity/feature/security-challenges-for-connected-and-autonomous-vehicles

16 Yasin, Muhammad, and Ozgur Sinanoglu. “Evolution of logic locking.” In 2017 IFIP/IEEE International Conference on Very Large-Scale Integration (VLSI-SoC), pp. 1-6. IEEE, 2017.

17 Rashid, Fahmida Y. “The rise of confidential computing: Big tech companies are adopting a new security model to protect data while it’s in use-[news].” IEEE Spectrum 57, no. 6 (2020): 8-9. https://spectrum.ieee.org/what-is-confidential-computing

The paradigm change introduced by soft robotics is going to dramatically push forward the abilities of autonomous systems in the near future, enabling their application in extremely challenging scenarios. The ability of soft robots to safely interact with and adapt to their surroundings is key to operating in unstructured environments, where the autonomous agent has little or no knowledge of the world around it. A similar context occurs when critical infrastructures face threats or disruptions, for example due to natural disasters or external attacks (physical or cyber). In this case, autonomous systems may be employed to respond to such emergencies and must be able to deal with unforeseen physical conditions and uncertainties, where mechanical interaction with the environment is not only inevitable but also desirable for successfully performing their tasks. From this perspective, I discuss applications of soft robots for the protection of infrastructure, including recent advances in pipeline inspection, rubble search and rescue, and soft aerial manipulation, as well as promising perspectives on operations in radioactive environments, underwater monitoring, and space exploration.

Collective behavior observed in nature has been actively employed in swarm robotics. In order to better respond to external cues, the agents in such systems organize themselves into an ordered structure based on simple local rules. The central assumption in swarm robotics is that all agents in the system collaborate to fulfill a common goal. In nature, however, many multi-agent systems exhibit a more complex collective behavior involving a certain level of competition. One representative example of complex collective behavior is a multi-ball Bernoulli-ball system. In this paper, by extracting the local forces among the Bernoulli balls, we approximated the state-transfer model mapping interaction forces to observed behaviors. The results show that the collective Bernoulli-ball system spent 41% of its time on competitive behaviors, in which up to 84% of the interaction state is unorganized. The remaining 59% of the time is spent on collaborative behavior. We believe that the proposed model opens new avenues in swarm robotics research.

This paper describes a compensation system for soft aerial vehicle stabilization. Balancing the arms is one of the main challenges of soft UAVs, since each propeller tilts freely together with its flexible arm. In comparison with previous designs, in which the autopilot was adjusted to deal with these imbalances without extra actuation, this work introduces a soft tendon-actuated system to achieve in-flight stabilization in an energy-efficient way. The controller is specifically designed for disturbance rejection of aeroelastic perturbations using the Ziegler-Nichols method, depending on the flight mode and material properties. This aerodynamics-aware compensation system helps further bridge the gap between soft and aerial robotics, increasing the flexibility of the UAV and its ability to deal with changes in material properties, which extends the useful life of the drone. In energetic terms, the novel system is 15–30% more efficient and is the basis for future applications such as object grasping.

Recent advances in deep learning have bolstered our ability to forecast the evolution of dynamical systems, but common neural networks do not adhere to physical laws, critical information that could lead to sounder state predictions. This contribution addresses this concern by proposing a neural network to polynomial (NN-Poly) approximation, a method that furnishes algorithmic guarantees of adhering to physics while retaining state-prediction accuracy. To achieve these goals, this article shows how to represent trained fully connected perceptron, convolutional, and recurrent neural networks with various activation functions as Taylor polynomials of arbitrary order. This solution is not only analytic in nature but also least-squares optimal. The NN-Poly system identification or state prediction method is evaluated against a single-layer neural network and a polynomial trained on data generated by dynamic systems. Across our test cases, the proposed method maintains minimal root-mean-squared state error, requires few parameters, and provides a model structure amenable to verification and safety analysis. Future work will incorporate safety constraints into state predictions using this new model structure and test it on high-dimensional dynamical system data.
