Feed aggregator

Robots designed to simulate interpersonal social interaction are an active area of research with applications in therapy and companionship. Neural responses to eye-to-eye contact in humans have recently been used to identify the neural systems that are active during social interactions. Whether eye contact with a social robot engages the same neural systems remains an open question. Here, we employ a similar approach to compare human-human and human-robot social interactions. We assume that if human-human and human-robot eye contact elicit similar neural activity in the human, then the underlying perceptual and cognitive processing is also similar; that is, the robot is processed much as a human would be. If the neural effects differ, however, the perceptual and cognitive processing is assumed to differ as well. In this study, neural activity was compared for human-to-human and human-to-robot conditions using near-infrared spectroscopy for neural imaging and a robot (Maki) with eyes that blink and move right and left. Eye contact was confirmed by eye tracking in both conditions. Increased neural activity was observed in human social systems, including the right temporoparietal junction and the dorsolateral prefrontal cortex, during human-human eye contact but not human-robot eye contact. This suggests that the type of human-robot eye contact used here is not sufficient to engage the right temporoparietal junction in the human. This study establishes a foundation for future research into human-robot eye contact to determine how elements of robot design and behavior affect human social processing within this type of interaction, and it may offer a method for capturing difficult-to-quantify components of human-robot interaction, such as social engagement.

Soft robotics is widely known for its compliant behavior during contraction or manipulation. This compliance provides safe and adaptive interactions and greatly reduces the complexity of active control policies. However, another promising aspect of soft robotics, extracting useful information from compliant behavior, has not been widely studied. This capability could reduce dependence on sensors, provide better knowledge of the environment, and enrich high-level control strategies. In this paper, we develop a state-change model of a soft robotic arm and demonstrate how compliant behavior can be used to estimate an external load based on this model. Moreover, we propose an improved version of the estimation procedure that further reduces the estimation error by compensating for the influence of the pressure deadzone. Experiments with both methods are compared, demonstrating their potential effectiveness.
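The abstract does not spell out the state-change model, but as a purely illustrative sketch (the linear-stiffness assumption, every function name, and every numeric value below are assumptions, not taken from the paper), estimating an external load from compliant behavior can amount to comparing the measured deflection against a calibrated no-load model, with the pressure deadzone subtracted first:

```python
# Illustrative sketch only: inferring an external load on a soft arm from its
# compliant deflection, with a simple pressure-deadzone correction.
# The linear-stiffness model and all numbers are assumptions for illustration.

DEADZONE_KPA = 5.0          # hypothetical pressure below which the chamber does not move
STIFFNESS_N_PER_M = 120.0   # hypothetical effective tip stiffness of the arm

def effective_pressure(p_kpa: float) -> float:
    """Compensate for the actuation deadzone before applying the model."""
    return max(p_kpa - DEADZONE_KPA, 0.0)

def expected_deflection(p_kpa: float) -> float:
    """Deflection (m) the unloaded arm would show at this pressure,
    from a previously calibrated (here: fictitious linear) model."""
    return 0.002 * effective_pressure(p_kpa)

def estimate_load(p_kpa: float, measured_deflection_m: float) -> float:
    """External load (N) inferred from the gap between expected and measured
    deflection, assuming a spring-like compliant response."""
    return STIFFNESS_N_PER_M * (expected_deflection(p_kpa) - measured_deflection_m)

# Example: at 40 kPa the unloaded model predicts 70 mm of deflection;
# measuring only 55 mm suggests roughly 1.8 N of external load.
print(estimate_load(40.0, 0.055))
```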

Purpose: It is now clear that the virus responsible for COVID-19 can spread via airborne transmission. The objective of this study was to design and fabricate an AMBU ventilator with a negative-pressure headbox linked to a negative-pressure transporting capsule, providing a low-cost, flexible-use unit with airborne-transmission protection that can be manufactured without a high level of technology.

Method: The machine consists of an automated AMBU bag ventilator, a negative pressure headbox, and a transporting capsule. The function and working duration of each component were tested.

Results: The ventilator has two main settings: an active mode that can be set over a time range of 0 s to 9 h 59 min 59 s, and a resting mode that can work continuously for 24 h. The blower motor and battery system, which power the ventilator and create negative air pressure within the headbox and the transporting capsule, could run for at least 2 h without being recharged. The transporting capsule achieved an air change rate of 21.76 ACH at −10 Pa internal pressure.
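For readers unfamiliar with the metric, air changes per hour simply relates the blower's volumetric flow to the enclosure volume. The sketch below uses hypothetical numbers (the capsule volume and flow are not reported above) purely to show the arithmetic:

```python
# Air changes per hour (ACH) = hourly airflow (m^3/h) / enclosure volume (m^3).
# Both numbers below are hypothetical, chosen only to illustrate the arithmetic;
# they are not taken from the study described above.

capsule_volume_m3 = 0.60        # assumed transporting-capsule volume
airflow_m3_per_hour = 13.06     # assumed blower flow through the HEPA filter

ach = airflow_m3_per_hour / capsule_volume_m3
print(f"{ach:.2f} ACH")         # -> 21.77 ACH, of the same order as reported above
```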

Conclusion: This automated AMBU ventilator allowed the flow rate, rhythm, and volume of oxygen to be set. The hazardous expired air was treated by a HEPA filter. The patient transporting capsule is compact and incorporates the air-treatment systems. Further development of this machine should focus on seamless integration with imaging technology, verification against standards, testing with human subjects, and then commercialization.

Research into robotic sensing has, understandably I guess, been very human-centric. Most of us navigate and experience the world visually and in 3D, so robots tend to get covered with things like cameras and lidar. Touch is important to us, as is sound, so robots are getting pretty good with understanding tactile and auditory information, too. Smell, though? In most cases, smell doesn’t convey nearly as much information for us, so while it hasn’t exactly been ignored in robotics, it certainly isn’t the sensing modality of choice in most cases.

Part of the problem with smell sensing is that we just don’t have a good way of doing it, from a technical perspective. This has been a challenge for a long time, and it’s why we either bribe or trick animals like dogs, rats, and vultures into being our sensing systems for airborne chemicals. If only they’d do exactly what we wanted them to do all the time, this would be fine, but they don’t, so it’s not.

Until we get better at making chemical sensors, leveraging biology is the best we can do, and what would be ideal would be some sort of robot-animal hybrid cyborg thing. We’ve seen some attempts at remote controlled insects, but as it turns out, you can simplify things if you don’t use the entire insect, but instead just find a way to use its sensing system. Enter the Smellicopter.

There’s honestly not too much to say about the drone itself. It’s an open-source drone project called Crazyflie 2.0, with some additional off-the-shelf sensors for obstacle avoidance and stabilization. The interesting bits are a couple of passive fins that keep the drone pointed into the wind, and then the sensor, called an electroantennogram.

Image: UW. The drone’s sensor, called an electroantennogram, consists of a "single excised antenna" from a Manduca sexta hawkmoth and a custom signal processing circuit.

To make one of these sensors, you just, uh, “harvest” an antenna from a live hawkmoth. Obligingly, the moth antenna is hollow, meaning that you can stick electrodes up it. Whenever the olfactory neurons in the antenna (which is still technically alive even though it’s not attached to the moth anymore) encounter an odor that they’re looking for, they produce an electrical signal that the electrodes pick up. Plug the other ends of the electrodes into a voltage amplifier and filter, run it through an analog-to-digital converter, and you’ve got a chemical sensor that weighs just 1.5 grams and consumes only 2.7 mW of power. It’s significantly more sensitive than a conventional metal-oxide odor sensor, in a much smaller and more efficient form factor, making it ideal for drones.
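To make that signal chain concrete, here is a minimal sketch of the processing it describes: amplify the antenna's tiny voltage, band-pass filter it, digitize it, and threshold the result. None of the parameter values below come from the paper; they are stand-ins for illustration only.

```python
# Minimal sketch of the electroantennogram signal chain described above:
# amplify, band-pass filter, digitize, threshold. All parameter values are
# assumptions, not taken from the Smellicopter paper.
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000.0                                            # assumed sample rate, Hz
b, a = butter(2, [1.0, 100.0], btype="band", fs=FS)    # assumed pass band

def detect_odor(raw_samples: np.ndarray, gain: float = 1000.0,
                threshold_mv: float = 0.5) -> bool:
    """Return True if the amplified, filtered antenna signal exceeds a
    (hypothetical) detection threshold, in millivolts."""
    amplified = raw_samples * gain          # voltage amplifier
    filtered = lfilter(b, a, amplified)     # band-pass filter
    return bool(np.max(np.abs(filtered)) * 1e3 > threshold_mv)

# Example with synthetic samples standing in for the digitized antenna output:
t = np.arange(0, 1.0, 1.0 / FS)
fake_signal = 2e-6 * np.sin(2 * np.pi * 20 * t)        # ~2 microvolt burst
print(detect_odor(fake_signal))                        # -> True
```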

To localize an odor, the Smellicopter uses a simple bioinspired approach called crosswind casting, which involves moving laterally left and right and then forward when an odor is detected. Here’s how it works:

The vehicle takes off to a height of 40 cm and then hovers for ten seconds to allow it time to orient upwind. The smellicopter starts casting left and right crosswind. When a volatile chemical is detected, the smellicopter will surge 25 cm upwind, and then resume casting. As long as the wind direction is fairly consistent, this strategy will bring the insect or robot increasingly closer to a singular source with each surge.
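That loop is simple enough to sketch in a few lines. This is an illustrative reconstruction of the behavior described above, not the team's actual flight code; odor_detected(), move(), the casting width, and the step sizes are hypothetical stand-ins.

```python
# Illustrative reconstruction of the cast-and-surge search loop described above.
# This is not the researchers' flight code; odor_detected(), move(), and the
# step sizes are hypothetical stand-ins for the drone's real interfaces.
import time

SURGE_M = 0.25       # surge 25 cm upwind when odor is detected (as described above)
CAST_STEP_M = 0.10   # assumed lateral casting step
HOVER_S = 10.0       # hover so the passive fins orient the drone upwind

def cast_and_surge(odor_detected, move, cast_width_m=0.4, max_steps=1000):
    """Alternate crosswind casting with upwind surges whenever odor is sensed."""
    time.sleep(HOVER_S)                        # take off, then wait to face upwind
    direction, travelled = 1, 0.0              # +1 = cast right, -1 = cast left
    for _ in range(max_steps):
        if odor_detected():
            move(forward=SURGE_M, lateral=0.0)             # surge upwind
            travelled = 0.0                                # restart the cast pattern
        else:
            move(forward=0.0, lateral=direction * CAST_STEP_M)
            travelled += CAST_STEP_M
            if travelled >= cast_width_m:                  # reached edge of the cast
                direction, travelled = -direction, 0.0
```

With a reasonably steady wind, each surge moves the platform closer to the source, exactly as the excerpt above describes.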

Since odors are airborne, they need a bit of a breeze to spread very far, and the Smellicopter won’t be able to detect them unless it’s downwind of the source. But that’s just how odors work—even if you’re right next to the source, if the wind is blowing from you toward the source rather than the other way around, you might not catch a whiff of it.


There are a few other constraints to keep in mind with this sensor as well. First, rather than detecting something useful (like explosives), it’s going to detect the smells of pretty flowers, because moths like pretty flowers. Second, the antenna will literally go dead on you within a couple of hours, since it only functions while its tissues are alive and metaphorically kicking. Interestingly, it may be possible to use CRISPR-based genetic modification to breed moths with antennae that do respond to useful smells, which would be a neat trick, and we asked the researchers—Melanie Anderson, a doctoral student in mechanical engineering at the University of Washington, in Seattle; Thomas Daniel, a UW professor of biology; and Sawyer Fuller, a UW assistant professor of mechanical engineering—about this, along with some other burning questions, via email.

IEEE Spectrum, asking the important questions first: So who came up with "Smellicopter"?

Melanie Anderson: Tom Daniel coined the term "Smellicopter". Another runner-up was "OdorRotor"!

In general, how much better are moths at odor localization than robots?  

Melanie Anderson: Moths are excellent at odor detection and odor localization and need to be in order to find mates and food. Their antennae are much more sensitive and specialized than any portable man-made odor sensor. We can't ask the moths how exactly they search for odors so well, but being able to have the odor sensitivity of a moth on a flying platform is a big step in that direction.

Tom Daniel: Our best estimate is that they outperform robotic sensing by at least three orders of magnitude.

How does the localization behavior of the Smellicopter compare to that of a real moth? 

Anderson: The cast-and-surge odor search strategy is a simplified version of what we believe the moth (and many other odor searching animals) are doing. It is a reactive strategy that relies on the knowledge that if you detect odor, you can assume that the source is somewhere up-wind of you. When you detect odor, you simply move upwind, and when you lose the odor signal you cast in a cross-wind direction until you regain the signal. 

Can you elaborate on the potential for CRISPR to be able to engineer moths for the detection of specific chemicals?  

Anderson: CRISPR is already currently being used to modify the odor detection pathways in moth species. It is one of our future efforts to specifically use this to make the antennae sensitive to other chemicals of interest, such as the chemical scent of explosives. 

Sawyer Fuller: We think that one of the strengths of using a moth's antenna, in addition to its speed, is that it may provide a path to both high chemical specificity as well as high sensitivity. By expressing a preponderance of only one or a few chemosensors, we are anticipating that a moth antenna will give a strong response only to that chemical. There are several efforts underway in other research groups to make such specific, sensitive chemical detectors. Chemical sensing is an area where biology exceeds man-made systems in terms of efficiency, small size, and sensitivity. So that's why we think that the approach of trying to leverage biological machinery that already exists has some merit.

You mention that the antennae's lifespan can be extended for a few days with ice; how feasible do you think this technology is outside of a research context?

Anderson: The antennae can be stored in tiny vials in a standard refrigerator or just with an ice pack to extend their life to about a week. Additionally, the process for attaching the antenna to the electrical circuit is a teachable skill. It is definitely feasible outside of a research context.

Considering the trajectory that sensor development is on, how long do you think that this biological sensor system will outperform conventional alternatives?  

Anderson:  It's hard to speak toward what will happen in the future, but currently, the moth antenna still stands out among any commercially-available portable sensors.

There have been some experiments with cybernetic insects; what are the advantages and disadvantages of your approach, as opposed to (say) putting some sort of tracking system on a live moth?

Daniel: I was part of a cyber insect team a number of years ago.  The challenge of such research is that the animal has natural reactions to attempts to steer or control it.  

Anderson: While moths are better at odor tracking than robots currently, the advantage of the drone platform is that we have control over it. We can tell it to constrain the search to a certain area, and return after it finishes searching. 

What can you tell us about the health, happiness, and overall welfare of the moths in your experiments?

Anderson: The moths are cold anesthetized before the antennae are removed. They are then frozen so that they can be used for teaching purposes or in other research efforts. 

What are you working on next?

Daniel: The four big efforts are (1) CRISPR modification, (2) experiments aimed at improving the longevity of the antennal preparation, (3) improved measurements of antennal electrical responses to odors combined with machine learning to see if we can classify different odors, and (4) flight in outdoor environments.

Fuller: The moth's antenna sensor gives us a new ability to sense with a much shorter latency than was previously possible with similarly-sized sensors (e.g. semiconductor sensors). What exactly a robot agent should do to best take advantage of this is an open question. In particular, I think the speed may help it to zero in on plume sources in complex environments much more quickly. Think of places like indoor settings with flow down hallways that splits out at doorways, and in industrial settings festooned with pipes and equipment. We know that it is possible to search out and find odors in such scenarios, as anybody who has had to contend with an outbreak of fruit flies can attest. It is also known that these animals respond very quickly to sudden changes in odor that is present in such turbulent, patchy plumes. Since it is hard to reduce such plumes to a simple model, we think that machine learning may provide insights into how to best take advantage of the improved temporal plume information we now have available.

Tom Daniel also points out that the relative simplicity of this project (now that the UW researchers have it all figured out, that is) means that even high school students could potentially get involved in it, even if it’s on a ground robot rather than a drone. All the details are in the paper that was just published in Bioinspiration & Biomimetics.


Photo: Andrew Caballero-Reynolds/AFP/Getty Images

When Mark Zuckerberg said “Move fast and break things,” this is surely not what he meant.

Nevertheless, at a technological level the 6 January attacks on the U.S. Capitol could be contextualized by a line of patents filed or purchased by Facebook, tracing back 20 years. This portfolio arguably sheds light on how the most powerful country in the world was brought low by a rampaging mob nurtured on lies and demagoguery.

While Facebook’s arsenal of over 9,000 patents spans a bewildering range of topics, at its heart are technologies that allow individuals to see an exclusive feed of content uniquely curated to their interests, their connections, and increasingly, their prejudices.

Algorithms create intensely personal “filter bubbles,” which are powerfully addictive to users, irresistible to advertisers, and a welcoming environment for rampant misinformation and disinformation such as QAnon, antivaxxer propaganda, and election conspiracy theories.

As Facebook turns 17—it was “born” 4 February 2004—a close reading of the company’s patent history shows how the social network has persistently sought to attract, categorize, and retain users by giving them more and more of what keeps them engaged on the site. In other words, hyperpartisan communities on Facebook that grow disconnected from reality are arguably less a bug than a feature.

Anyone who has used social media in recent years will likely have seen both misinformation (innocently shared false information) and disinformation (lies and propaganda). Last March, in a survey of over 300 social media users by the Center for an Informed Public at the University of Washington (UW), published on Medium, almost 80 percent reported seeing COVID-19 misinformation online, with over a third believing something false themselves. A larger survey covering the United States and Brazil in 2019, by the University of Liverpool and others, found that a quarter of Facebook users had accidentally shared misinformation. Nearly one in seven admitted sharing fake news on purpose.

A Tale of 12 Patents

Facebook has more than 9,000 patents. A crucial few scaled its ability to build informational isolation chambers for its users (“filter bubbles”)—ones in which people could arguably evade commonplace facts and embrace entirely alternative realities. Lately, Facebook’s patents have begun to address its echo-chamber-on-overdrive problem.
  1. 2001: Intelligent Information Delivery System (filed by Philips, purchased by Facebook in 2011)

    U.S. Patent No. 6,912,517 B2

  2. 2004: Facebook founded

  3. 2006: Communicating a Newsfeed of Media Content Based on a Member’s Interactions in a Social Network Environment

    U.S. Patent No. 8,171,128 B2

  4. 2006: Providing a Newsfeed Based on User Affinity for Entities and Monitored Actions in a Social Network Environment

    U.S. Patent No. 8,402,094

  5. 2009: Filtering Content in a Social Networking Service

    U.S. Patent No. 9,110,953 B2

  6. 2011: Content Access Management in a Social Networking System for Externally Stored Content

    U.S. Patent No. 9,286,642 B2

  7. 2012: Inferring Target Clusters Based on Social Connections

    U.S. Patent No. 10,489,825 B2

  8. 2012: Facebook IPO

  9. 2013: Categorizing Stories in a Social Networking System News Feed

    U.S. Patent No. 10,356,135 B2

  10. 2015: Systems and Methods for Demotion of Content Items in a Feed

    Publication No. US 2016/0321260 A1

  11. 2016: Quotations-Modules on Online Social Networks

    U.S. Patent No. 10,157,224 B2

  12. 2017: Contextual Information for Determining Credibility of Social-Networking Posts (Abandoned 2020)

    Publication No. US 2019/0163794 A1

  13. 2017: Filtering Out Communications Related to Unwanted Emotions on Online Social Networks (Abandoned 2019)

    Publication No. US 2019/0124023 A1

  14. 2017: Systems and Methods for Providing Diverse Content

    U.S. Patent No. 10,783,197 B2

    [All patents are listed by filing years. Ten of them were granted two to seven years later. Two were ultimately abandoned.]

“Misinformation tends to be more compelling than journalistic content, as it’s easy to make something interesting and fun if you have no commitment to the truth,” says Patricia Rossini, the social-media researcher who conducted the Liverpool study.

In December, a complaint filed by dozens of U.S. states asserted, “Due to Facebook’s unlawful conduct and the lack of competitive constraints...there has been a proliferation of misinformation and violent or otherwise objectionable content on Facebook’s properties.”

When a platform is open, like Twitter, most users can see almost everyone’s tweets. Therefore, tracking the source and spread of misinformation is comparatively straightforward. Facebook, on the other hand, has spent a decade and a half building a mostly closed information ecosystem.

Last year, Forbes estimated that the company’s 15,000 content moderators make some 300,000 bad calls every day. Precious little of that process is ever open to public scrutiny, although Facebook recently referred its decision to suspend Donald Trump’s Facebook and Instagram accounts to its Oversight Board. This independent 11-member “Supreme Court” is designed to review thorny content moderation decisions.

Meanwhile, even some glimpses of sunlight prove fleeting: After the 2020 U.S. presidential election, Facebook temporarily tweaked its algorithms to promote authoritative, fact-based news sources like NPR, a U.S. public-radio network. According to The New York Times, it soon reversed that decision, though, effectively cutting short its ability to curtail what a spokesperson called “inaccurate claims about the election.”

The company began filing patents soon after it was founded in 2004. A 2006 patent described how to automatically track your activity to detect relationships with other users, while another the same year laid out how those relationships could determine which media content and news might appear in your feed.

In 2006, Facebook patented a way to “characterize major differences between two sets of users.” In 2009, Mark ­Zuckerberg himself filed a patent that showed how ­Facebook “and/or external parties” could “target information delivery,” including political news, that might be of particular interest to a group.

This automated curation can drive people down partisan rabbit holes, fears Jennifer Stromer-Galley, a professor in the School of Information Studies at Syracuse University. “When you see perspectives that are different from yours, it requires thinking and creates aggravation,” she says. “As a for-profit company that’s selling attention to advertisers, Facebook doesn’t want that, so there’s a risk of algorithmic reinforcement of homogeneity, and filter bubbles.”

In the run-up to Facebook’s IPO in 2012, the company moved to protect its rapidly growing business from intellectual property lawsuits. A 2011 Facebook patent describes how to filter content according to biographic, geographic, and other information shared by a user. Another patent, bought by Facebook that year from the consumer electronics company Philips, concerns “an intelligent information delivery system” that, based on someone’s personal preferences, collects, prioritizes, and “selectively delivers relevant and timely” information.

In recent years, as the negative consequences of Facebook’s drive to serve users ever more attention-grabbing content emerged, the company’s patent strategy seems to have shifted. Newer patents appear to be trying to rein in the worst excesses of the filter bubbles Facebook pioneered.

The word “misinformation” appeared in a Facebook patent for the first time in 2015, for technology designed to demote “objectionable material that degrades user experience with the news feed and otherwise compromises the integrity of the social network.” A pair of patents in 2017 described providing users with more diverse content from both sides of the political aisle and adding contextual tags to help rein in misleading “false news.”

Such tags using information from independent fact-checking organizations could help, according to a study by Ethan Porter, coauthor of False Alarm: The Truth About Political Mistruths in the Trump Era (Cambridge University Press, 2019). “It’s no longer a controversy that fact-checks reliably improve factual accuracy,” he says. “And contrary to popular misconception, there is no evident exception for controversial or highly politicized topics.”

Franziska Roesner, a computer scientist and part of the UW team, was involved in a similar, qualitative study last year that also gave a glimmer of hope. “People are now much more aware of the spread and impact of misinformation than they were in 2016 and can articulate robust strategies for vetting content,” she says. “The problem is that they don’t always follow them.”

Rossini’s Liverpool study also found that behaviors usually associated with democratic gains, such as discussing politics and being exposed to differing opinions, were associated with dysfunctional information sharing. Put simply, the worst offenders for sharing fake news were also the best at building online communities; they shared a lot of information, both good and bad.

Moreover, Rossini doubts the very existence of filter bubbles. Because many Facebook users have more, and more varied, digital friends than they do in-person connections, she says, “most social media users are systematically exposed to more diversity than they would be in their offline life.”

The problem is that some of that diversity includes hate speech, lies, and propaganda that very few of us would ever seek out voluntarily—but that goes on to radicalize some.

“I personally quit Facebook two and a half years ago when the Cambridge Analytica scandal happened,” says Lalitha Agnihotri, formerly a data scientist for the Dutch company Philips, who in 2001 was part of a team that filed a related patent. In 2011, Facebook then acquired that Philips patent. “I don’t think Facebook treats my data right. Now that I realize that IP generated by me enabled Facebook to do things wrong, I feel terrible about it.”

Agnihotri says that she has been contacted by Facebook recruiters several times over the years but has always turned them down. “My 12-year-old suggested that maybe I need to join them, to make sure they do things right,” she says. “But it will be hard, if not impossible, to change a culture that comes from their founder.”

This article appears in the February 2021 print issue as “The Careful Engineering of Facebook’s Filter Bubble.”

Wearable robots assist individuals with sensorimotor impairment in daily life, or support industrial workers in physically demanding tasks. In such scenarios, low mass and compact design are crucial factors for device acceptance. Remote actuation systems (RAS) have emerged as a popular approach in wearable robots to reduce perceived weight and increase usability. Different RAS have been presented in the literature to accommodate for a wide range of applications and related design requirements. The push toward use of wearable robotics in out-of-the-lab applications in clinics, home environments, or industry created a shift in requirements for RAS. In this context, high durability, ergonomics, and simple maintenance gain in importance. However, these are only rarely considered and evaluated in research publications, despite being drivers for device abandonment by end-users. In this paper, we summarize existing approaches of RAS for wearable assistive technology in a literature review and compare advantages and disadvantages, focusing on specific evaluation criteria for out-of-the-lab applications to provide guidelines for the selection of RAS. Based on the gained insights, we present the development, optimization, and evaluation of a cable-based RAS for out-of-the-lab applications in a wearable assistive soft hand exoskeleton. The presented RAS features full wearability, high durability, high efficiency, and appealing design while fulfilling ergonomic criteria such as low mass and high wearing comfort. This work aims to support the transfer of RAS for wearable robotics from controlled lab environments to out-of-the-lab applications.

A low-power and non-volatile technology called the memristor shows initial promise as a basis for machine learning. According to new research, memristors efficiently tackle AI medical diagnosis problems, an encouraging development that suggests additional applications in other fields, especially low-power or network “edge” applications. This may be, the researchers say, because memristors artificially mimic some of the neuron’s essential properties. 

Memristors, or memory resistors, are a kind of building block for electronic circuits that scientists predicted roughly 50 years ago but only created for the first time a little more than a decade ago. These components, also known as resistive random access memory (RRAM) devices, are essentially electric switches that can remember whether they were toggled on or off after their power is turned off. As such, they resemble synapses—the links between neurons in the human brain—whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.

In theory, memristors can act like artificial neurons capable of both computing and storing data. As such, researchers have suggested memristors could potentially greatly reduce the energy and time lost in conventional computers shuttling data back and forth between processors and memory. The devices could also work well within neural networks, which are machine learning systems that use synthetic versions of synapses and neurons to mimic the process of learning in the human brain.

One challenge with developing applications for memristors is the randomness found in these devices. The level of electrical resistance or conductivity seen in memristors depends on a handful of atoms linking up two electrodes, making it difficult to control their electrical properties from the outset, says study lead author Thomas Dalgaty, an electrical engineer at Grenoble Alpes University in France.

Now Dalgaty and his colleagues have developed a way to harness this randomness for machine learning applications. They detailed their findings this month in the journal Nature Electronics.

Memristors are programmed by cycling through high-conductance on states and low-conductance off states. Usually the level of electrical conductivity seen in memristors can vary between one on state and the next due to intrinsic random processes within the devices.

However, if memristors are cycled on and off enough, the electrical conductivity of each memristor follows a pattern—“a bell curve,” Dalgaty says. The scientists showed they could implement an algorithm known as Markov chain Monte Carlo sampling that actively exploits this predictable behavior to solve a number of machine-learning tasks.
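To make the idea concrete, here is a generic Metropolis-Hastings sketch in which the device-to-device spread in programmed conductance plays the role of the random proposal, so the array itself supplies the randomness a sampler normally draws from a pseudorandom generator. This is an illustration of MCMC sampling under that analogy, not the circuit-level algorithm from the Nature Electronics paper.

```python
# Generic Metropolis-Hastings sketch of the idea described above: the spread in
# memristor conductances after programming acts like a Gaussian proposal
# distribution, so reprogramming the devices "draws" candidate model parameters.
# This illustrates MCMC sampling in general, not the paper's circuit algorithm.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(w, X, y):
    """Log-posterior of logistic-regression weights with a Gaussian prior."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * np.sum(w ** 2)
    return log_lik + log_prior

def program_device(w, sigma=0.1):
    """Stand-in for reprogramming the array: the target weights come back
    perturbed by device-to-device variability (the 'bell curve' above)."""
    return w + rng.normal(0.0, sigma, size=w.shape)

def metropolis(X, y, n_samples=2000):
    w = np.zeros(X.shape[1])
    samples = []
    for _ in range(n_samples):
        w_new = program_device(w)                      # proposal from device noise
        log_alpha = log_posterior(w_new, X, y) - log_posterior(w, X, y)
        if np.log(rng.uniform()) < log_alpha:          # accept/reject step
            w = w_new
        samples.append(w)
    return np.array(samples)

# Toy data: the accepted samples approximate the posterior over the weights.
X = rng.normal(size=(100, 2)); y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
posterior = metropolis(X, y)
print(posterior[-500:].mean(axis=0))
```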

When compared with the performance of conventional digital CMOS electronics, the researchers’ memristor arrays achieved a stunning five-order-of-magnitude reduction in energy consumption. This, Dalgaty says, is because the memristors did not need to shuffle data back and forth between processors and memory. For context, that 100,000-fold discrepancy is equivalent to “the difference in height between the Burj Khalifa, the tallest building in the world, and a coin,” he explains.

One potentially exciting application for memristors would be devices capable of learning, adapting and operating at the far ends of a network (a.k.a. its “edge”), where low-power devices like embedded systems, smart home gear and IoT nodes sometimes reside. Indeed, Dalgaty says, memristors could help make edge learning devices a reality.

“Currently edge learning is not possible because the energy required to perform machine learning with existing hardware is far greater than the energy that is available at the edge,” he explains. “Edge learning [using memristors] ... can potentially open up completely new application domains that were not possible before.”

For example, the researchers used an array made of 16,384 memristors to detect heart rhythm anomalies from electrocardiogram recordings, reporting a better detection rate than a standard neural network based on conventional, non-memristor electronics. The team also used their array to solve image recognition tasks such as diagnosing malignant breast-tissue samples.

Potential future edge learning memristor applications might include implanted medical early-warning systems that can adapt to a patient’s state as it changes over time. “We are looking towards these really energy-constrained edge applications that maybe don’t or can’t exist yet because of energy [restrictions],” Dalgaty says.

The next big challenge, Dalgaty says, “will be putting all of this functionality together onto a single integrated chip that can be applied outside of the laboratory.” It may take a few years before such a chip exists, he says.

As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. For this, the robot might be equipped with conflict resolution strategies that are assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (a public cleaning robot and a home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanical robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, and valence, as well as the perceived politeness of the agent, were assessed. The positive and neutral strategies were found to be more acceptable and effective than negative strategies. Some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were only positively evaluated and effective for certain agents (human or robot), or only acceptable in one of the two application contexts (i.e., approach, empathy). Influences on strategy acceptance and compliance were found in the public context: acceptance was predicted by politeness and trust, and compliance was predicted by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness, and if applied robot-specifically and context-sensitively, they are accepted by the user. The contribution of this paper is twofold: conflict resolution strategies based on human factors and social psychology are introduced and empirically evaluated in two online studies for two application contexts, and influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.

Control theory provides engineers with a multitude of tools to design controllers that manipulate the closed-loop behavior and stability of dynamical systems. These methods rely heavily on insights into the mathematical model governing the physical system. However, in complex systems, such as autonomous underwater vehicles performing the dual objective of path following and collision avoidance, decision making becomes nontrivial. We propose a solution using state-of-the-art Deep Reinforcement Learning (DRL) techniques to develop autonomous agents capable of achieving this hybrid objective without having a priori knowledge about the goal or the environment. Our results demonstrate the viability of DRL in path following and avoiding collisions towards achieving human-level decision making in autonomous vehicle systems within extreme obstacle configurations.
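The abstract does not specify how the hybrid objective is encoded, but as a purely illustrative sketch, such path-following/collision-avoidance trade-offs are often expressed as a weighted reward over cross-track error, progress, and obstacle proximity. The weights and functional form below are assumptions, not the paper's formulation:

```python
# Illustrative sketch of a hybrid DRL reward balancing path following and
# collision avoidance. The weights and functional form are assumptions for
# illustration; they are not taken from the paper described above.
import math

def reward(cross_track_error_m: float, speed_along_path: float,
           dist_to_nearest_obstacle_m: float,
           w_path: float = 1.0, w_speed: float = 0.5, w_obs: float = 2.0) -> float:
    path_term = math.exp(-abs(cross_track_error_m))        # 1 when exactly on the path
    speed_term = max(speed_along_path, 0.0)                # progress along the path
    obstacle_term = math.exp(-dist_to_nearest_obstacle_m)  # grows as obstacles get close
    return w_path * path_term + w_speed * speed_term - w_obs * obstacle_term

# Staying near the path and far from obstacles scores well; grazing an obstacle does not.
print(reward(0.2, 1.0, 10.0), reward(0.2, 1.0, 0.3))
```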

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

A new parent STAR robot is presented. The parent robot has a tail on which the child robot can climb. By collaborating, the two robots can reach locations that neither can reach on its own.

The parent robot can also supply the child robot with energy by recharging its batteries. The parent STAR can dispatch and recuperate the child STAR automatically (when aligned). The robots are fitted with sensors and controllers and have automatic capabilities but make no decisions on their own.

[ Bio-Inspired and Medical Robotics Lab ]

How TRI trains its robots.

[ TRI ]

The only thing more satisfying than one SCARA robot is two SCARA robots working together.

[ Fanuc ]

I'm not sure that this is strictly robotics, but it's so cool that it's worth a watch anyway.

[ Shinoda & Makino Lab ]

Flying insects heavily rely on optical flow for visual navigation and flight control. Roboticists have endowed small flying robots with optical flow control as well, since it requires just a tiny vision sensor. However, when using optical flow, the robots run into two problems that insects appear to have overcome. Firstly, since optical flow only provides mixed information on distances and velocities, using it for control leads to oscillations when getting closer to obstacles. Secondly, since optical flow provides very little information on obstacles in the direction of motion, it is hardest to detect obstacles that the robot is actually going to collide with! We propose a solution to these problems by means of a learning process.

[ Nature ]

A new Guinness World Record was set on Friday in north China for the longest animation performed by 600 unmanned aerial vehicles (UAVs).

[ Xinhua ]

Translucency is prevalent in everyday scenes. As such, perception of transparent objects is essential for robots to perform manipulation. In this work, we propose LIT, a two-stage method for transparent object pose estimation using light-field sensing and photorealistic rendering.

[ University of Michigan ] via [ Fetch Robotics ]

This paper reports the technological progress and performance of team “CERBERUS” after participating in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge.

And here's a video report on the SubT Urban Beta Course performance:

[ CERBERUS ]

Congrats to Energy Robotics on 2 million euros in seed funding!

[ Energy Robotics ]

Thanks Stefan!

In just 2 minutes, watch HEBI Robotics spend 23 minutes assembling a robot arm.

HEBI Robotics is hosting a webinar called 'Redefining the Robotic Arm' next week, which you can check out at the link below.

[ HEBI Robotics ]

Thanks Hardik!

Achieving versatile robot locomotion requires motor skills which can adapt to previously unseen situations. We propose a Multi-Expert Learning Architecture (MELA) that learns to generate adaptive skills from a group of representative expert skills. During training, MELA is first initialised by a distinct set of pre-trained experts, each in a separate deep neural network (DNN). Then by learning the combination of these DNNs using a Gating Neural Network (GNN), MELA can acquire more specialised experts and transitional skills across various locomotion modes.

[ Paper ]
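As a rough illustration of the gating idea in that description (this is not the MELA implementation; the network sizes, random weights, and softmax blending below are assumptions), a gating network can output mixing weights that blend the actions proposed by several expert networks:

```python
# Rough illustration of gating over expert policies, in the spirit of the MELA
# description above; not the paper's implementation. Network sizes, random
# weights, and the softmax blending are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def expert(obs, W):                    # one pre-trained expert = one small network
    return np.tanh(obs @ W)

def gate(obs, Wg):                     # gating network outputs mixing weights
    logits = obs @ Wg
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax over the experts

obs_dim, act_dim, n_experts = 8, 4, 3
experts = [rng.normal(size=(obs_dim, act_dim)) for _ in range(n_experts)]
Wg = rng.normal(size=(obs_dim, n_experts))

obs = rng.normal(size=obs_dim)
weights = gate(obs, Wg)                                   # e.g. [0.2, 0.5, 0.3]
action = sum(w * expert(obs, We) for w, We in zip(weights, experts))
print(weights, action)
```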

Since the dawn of history, advances in science and technology have pursued “power” and “accuracy.” Initially, “hardness” in machines and materials was sought for reliable operations. In our area of Science of Soft Robots, we have combined emerging academic fields aimed at “softness” to increase the exposure and collaboration of researchers in different fields.

[ Science of Soft Robots ]

A team from the Laboratory of Robotics and IoT for Smart Precision Agriculture and Forestry at INESC TEC - Technology and Science is creating a ROS stack solution using Husky UGV for precision field crop agriculture.

[ Clearpath Robotics ]

Associate Professor Christopher J. Hasson in the Department of Physical Therapy is the director of the Neuromotor Systems Laboratory at Northeastern University. There he is working with a robotic arm to provide enhanced assistance to physical therapy patients while maintaining the intimate therapist-patient relationship.

[ Northeastern ]

Mobile Robotic telePresence (MRP) systems aim to support enhanced collaboration between remote and local members of a given setting. But MRP systems also put the remote user in positions where they frequently rely on the help of local partners. Getting or ‘recruiting’ such help can be done with various verbal and embodied actions ranging in explicitness. In this paper, we look at how such recruitment occurs in video data drawn from an experiment where pairs of participants (one local, one remote) performed a timed searching task.

[ Microsoft Research ]

A presentation [from Team COSTAR] for the American Geophysical Union annual fall meeting on the application of robotic multi-sensor 3D mapping for scientific exploration of caves. Lidar-based 3D maps are combined with visual/thermal/spectral/gas sensors to provide rich 3D context for scientific measurements.

[ COSTAR ]


Invasive aquatic plant species, and in particular Eurasian Water-Milfoil (EWM), pose a major threat to native flora and fauna and can in turn negatively impact local economies. Numerous strategies have been developed to harvest and remove these plant species from the environment. However, it remains an open question which method is best suited to removing a particular invasive species, and how different lake conditions affect that choice. One problem common to all harvesting methods is the need to assess the location and degree of infestation on an ongoing basis. This is a difficult and error-prone problem, given that the plants grow underwater and significant infestation at depth may not be visible at the surface. Here we detail efforts to monitor EWM infestation and evaluate harvesting methods using an autonomous surface vessel (ASV). This novel ASV is based around a mono-hull design with two outriggers. Powered by a differential pair of underwater thrusters, the ASV is outfitted with RTK GPS for position estimation and a set of submerged environmental sensors that capture imagery and depth information, including the presence of material suspended in the water column. The ASV is capable of both autonomous operation and teleoperation.

In December 2019, an outbreak of novel coronavirus pneumonia occurred, and it subsequently attracted worldwide attention as it grew into the COVID-19 pandemic. To limit the spread and transmission of the novel coronavirus, governments, regulatory bodies, and health authorities across the globe enforced the shutdown of educational institutions, including medical and dental schools. The adverse effects of COVID-19 on dental education have been tremendous, including difficulties in the delivery of practical courses such as restorative dentistry. As a solution to help dental schools adapt to the pandemic, we have developed a compact and portable teaching-learning platform called DenTeach. This platform is intended for remote teaching and learning in dental schools during these unprecedented times, and it can facilitate fully remote, physical-distancing-aware teaching and learning in dentistry. The DenTeach platform consists of an instructor workstation (DT-Performer), a student workstation (DT-Student), advanced wireless networking technology, and cloud-based data storage and retrieval. The platform procedurally synchronizes the instructor and the student with real-time video, audio, feel, and posture (VAFP). To provide quantitative feedback to instructors and students, the DT-Student workstation quantifies key performance indices (KPIs) related to a given task to assess and improve various aspects of the students' dental skills. DenTeach has been developed for use in teaching, shadowing, and practice modes. In the teaching mode, the device provides each student with tactile feedback by processing the data measured and/or obtained from the instructor's workstation, which helps the student enhance their dental skills while inherently learning from the instructor. In the shadowing mode, the student can download the augmented videos and start watching, feeling, and repeating the tasks before entering the practice mode. In the practice mode, students use the system to perform dental tasks and have their performance automatically evaluated in terms of KPIs, such that both the student and the instructor are able to monitor the student's work. Most importantly, because DenTeach is packaged in a small portable suitcase, it can be used anywhere by connecting to the cloud-based data storage network to retrieve procedures and performance metrics. This paper also discusses the feasibility of the DenTeach device in the form of a case study. It is demonstrated that a combination of the KPIs, video views, and graphical reports in both teaching and shadowing modes effectively helps the student understand which aspects of their work need further improvement. Moreover, the results of the practice mode over 10 trials show significant improvement in terms of tool handling, smoothness of motion, and steadiness of operation.
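
The abstract does not specify how the KPIs are computed. Purely as a hedged illustration of one common way to quantify motion smoothness from a sampled tool trajectory (a generic motion-analysis measure, not DenTeach's documented KPI definition), a jerk-based index can be computed as follows:

```python
# Illustrative jerk-based smoothness metric for a sampled tool trajectory.
# This is a generic motion-analysis measure, not DenTeach's documented KPI.
import numpy as np

def smoothness_kpi(positions, dt):
    """positions: (N, 3) array of tool-tip positions sampled every dt seconds.
    Returns the negative log dimensionless jerk; values closer to zero mean smoother motion."""
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(positions) - 1)
    peak_speed = np.max(np.linalg.norm(vel, axis=1)) + 1e-9
    # Integrate squared jerk, then normalize by duration and peak speed (dimensionless jerk).
    dj = np.trapz(np.sum(jerk**2, axis=1), dx=dt) * duration**3 / peak_speed**2
    return -np.log(dj + 1e-9)

# Example: a short, smooth synthetic trajectory sampled at 100 Hz.
t = np.linspace(0, 1, 100)
traj = np.stack([np.sin(t), np.zeros_like(t), t], axis=1)
print(smoothness_kpi(traj, dt=0.01))
```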

Having a trusted and useful system that helps diminish the risk of medical errors and improve the quality of medical education is indispensable. Thousands of surgical errors occur annually, with a high adverse event rate, despite the inordinate number of patient safety initiatives that have been devised. Inadvertently or otherwise, surgeons play a critical role in these errors. Training surgeons is one of the most crucial and delicate parts of medical education and needs more attention because of its practical nature. In contrast to engineering, working with living patients leaves trainees a minuscule margin for trial and error. Training in operating rooms, on the other hand, is extremely expensive, not only in terms of equipment but also in hiring professional trainers. In addition, the COVID-19 pandemic has prompted measures such as social distancing in order to mitigate the rate of spread. This has led surgeons to postpone some non-urgent surgeries or to operate under safety restrictions, and educational programs have been affected by the same limitations. Skill transfer systems, in cooperation with a virtual training environment, are seen as a solution to these issues: they enable novice surgeons to build their proficiency and also allow expert surgeons to be supervised during an operation. This paper focuses on devising a solution based on deep learning algorithms to model the behavior of experts during an operation. In other words, the proposed solution is a skill transfer method that learns from professional demonstrations, using different effective factors drawn from a body of experts. The trained model then provides a real-time haptic guidance signal for either instructing trainees or supervising expert surgeons. A simulation is used to emulate an operating room for femur drilling surgery, a common invasive treatment for osteoporosis; this helps us both collect the essential data and assess the obtained models. Experimental results show that the proposed method is capable of producing a haptic guidance force signal with an acceptable error rate.
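
As a minimal sketch of the general idea (our own assumptions; the paper's actual network architecture, state definition, and training details are not given here), a learned model can map the current tool state to a guidance force that is then rendered on a haptic device, trained in a behavior-cloning fashion against forces recorded from experts:

```python
# Illustrative behavior-cloning-style model mapping tool state to a guidance force.
# Layer sizes and the state/force definitions are assumptions, not the paper's design.
import torch
import torch.nn as nn

class GuidanceModel(nn.Module):
    def __init__(self, state_dim=9, force_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, force_dim),   # guidance force components (x, y, z)
        )

    def forward(self, state):
        return self.net(state)

model = GuidanceModel()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised step on (synthetic) expert demonstration data: minimize the gap
# between the predicted guidance force and the force recorded from the expert.
state_batch, expert_force_batch = torch.randn(32, 9), torch.randn(32, 3)
loss = loss_fn(model(state_batch), expert_force_batch)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```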

As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas where they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., framing decisions as life and death and neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others’ behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice, the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.

The United States Federal Aviation Administration has been desperately trying to keep up with the proliferation of recreational and commercial drones. They haven’t been as successful as all of us might have wanted, but some progress is certainly being made, most recently with some new rules about flying drones at night and over people and vehicles, as well as the requirement for a remote-identification system for all drones.

Over the next few years, FAA’s drone rules are going to affect you even if you just fly a drone for fun in your backyard, so we’ll take a detailed look at what changes are coming and how you can prepare.

The first thing to acknowledge is that the FAA, as an agency, is turning out to be a very poor communicator where drones are concerned. I’ve written about this before, but understanding exactly what you can and cannot do with a drone, and where you’re allowed to do it, is super frustrating and way more complicated than it needs to be. So if some of this seems confusing, it’s not you.

What kind of drone pilot am I?

Part of the problem is that the FAA has separated drone pilots into two categories that have rules that are sometimes different in ways that don’t always make sense. There are recreational pilots, who fly drones “strictly for recreational purposes,” and then there are commercial pilots, who fly drones to make money, for non-profit work, for journalism, for education, or really for anything that has a goal besides fun.

Recreational pilots are allowed to fly under safety guidelines from a “community-based organization” like the Academy of Model Aeronautics (AMA), while commercial pilots have to fly under the rules found in Part 107 of the Federal Aviation Regulations. So, while the Part 107 rules have, for example, prohibited flying at night without a waiver from the FAA, the FAA also says that recreational flyers can fly at night as long as the drone “has lighting that allows you to know its location and orientation at all times.” Go figure.

What are the current rules for recreational and commercial pilots?

You can find these on FAA’s website:

What are the new drone rules that the FAA announced?

Late last year, the FAA released what it called in a press release “Two Much-Anticipated Drone Rules to Advance Safety and Innovation in the United States.”

The first update is for Part 107 pilots, and covers operations over people, over vehicles, and at night. Until now, Part 107 pilots have needed to apply to the FAA for waivers to do any of these things; under the new rules, you no longer need a waiver, as long as you follow the new requirements.

The second new rule is about how drones identify themselves in flight, called Remote ID, and applies to everybody flying a drone, even if it’s just for fun. If you’re a recreational pilot, you can skip down to the part about Remote ID, which will affect you.

Can I fly at night?

Yup. The new rule allows for night flying with a properly lit up drone (“anti-collision lights that can be seen for 3 statute miles and have a flash rate sufficient to avoid a collision”). The rule also helpfully notes that these lights must be turned on.

This applies to Part 107 pilots only, and as we noted above, whether recreational fliers can fly at night isn’t as clear as it should be. And Part 107 pilots who want to take advantage of this new rule will need to take an updated knowledge test, which the FAA will provide more information on within the next few months.

Can I fly over moving vehicles?

Generally, yes, if you’re a Part 107 pilot. You can fly over moving vehicles as long as you’re just transiting over them, rather than maintaining sustained flight over them. If you want to maintain sustained flight, you can do that too, although in that case everyone in the vehicle needs to know that there’s a drone around and it has to be in an access controlled area.

Vehicles, as far as the FAA is concerned, include anything where a person is moving more quickly than they’d be able to on foot, because this rule exists to try to mitigate the likelihood of a wayward drone hitting someone at a higher speed. Vehicles therefore include skateboards, rollerblades, bicycles, roller coasters, boats, and so on.

Is my drone allowed to fly over people?

Part 107 pilots are now allowed to fly over people in some circumstances, under restrictions that change depending on how big and scary your drone is. The FAA has separated drones into four risk categories, based on how much damage they could do to a human that they come into contact with.

  • Category 1: A Category 1 drone represents “a low risk of injury” to humans and therefore weighs 0.55 pounds (0.25 kg) or less including everything attached to the drone from takeoff to landing. Furthermore, a Category 1 drone cannot have “any exposed rotating parts that would lacerate human skin,” and whatever kind of protection that implies must not fall outside the weight limit. If your drone meets both of these criteria, there’s no need to do anything else about it.
  • Category 2: A Category 2 drone is the next step up, and since we’re now out of the “low risk of injury” category, the FAA will require a declaration of compliance from “anyone who designs, produces, or modifies a small unmanned aircraft” in this category. For Category 2, this declaration has to show that the drone “must not be capable of causing an injury to a human being that is more severe than an injury caused by a transfer of 11 ft-lbs of kinetic energy from a rigid object,” and the declaration must be approved by the FAA. Category 2 drones must also incorporate the same kind of laceration protection as Category 1, although one of the more interesting comments on the ruling came from Skydio, which asked whether a software-based safety system that could protect against skin laceration would be acceptable. The FAA said that’s fine, as long as it can be demonstrated to be effective through some as-yet unspecified process. 
  • Category 3: A Category 3 drone is just the same as Category 2, except bigger and/or faster, and it “must not be capable of causing an injury to a human being that is more severe than an injury caused by a transfer of 25 ft-lbs of kinetic energy from a rigid object” (see the rough impact-speed calculation after this list). Laceration protection is also required.
  • Category 4: If you think your drone is safe to operate over people but it doesn’t fit into one of the categories above, you can apply to the FAA for an airworthiness certificate, which (if approved) will let you fly over people with your drone (sometimes) without applying for a waiver.
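
To give a rough sense of what those kinetic-energy thresholds mean in practice (this is our own back-of-the-envelope illustration, not an FAA calculation, and the example masses are arbitrary), you can convert each limit into the impact speed a drone of a given mass would need to reach:

```python
# Rough impact-speed equivalents for the Category 2/3 kinetic-energy limits.
# Illustrative only; real impacts are messier than KE = 0.5 * m * v^2.
FT_LB_TO_JOULE = 1.3558

def impact_speed_mps(ke_ft_lb, mass_kg):
    ke_joules = ke_ft_lb * FT_LB_TO_JOULE
    return (2 * ke_joules / mass_kg) ** 0.5   # solve KE = 0.5 * m * v^2 for v

for ke, label in [(11, "Category 2"), (25, "Category 3")]:
    for mass in (0.5, 1.0, 2.0):              # example drone masses in kilograms
        v = impact_speed_mps(ke, mass)
        print(f"{label}: {ke} ft-lbs ~= {v:.1f} m/s impact for a {mass} kg drone")
```
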
Great, so can I fly over people whenever I want?

To fly over people, you must be flying under Part 107, your drone must be in one of the four categories above, and you’ll need to follow these specific rules on outdoor flight over people. Note that the FAA defines “sustained flight over an open-air assembly” as “hovering above the heads of persons gathered in an open-air assembly, flying back and forth over an open-air assembly, or circling above the assembly in such a way that the small unmanned aircraft remains above some part of the assembly.”

  • Category 1: Sustained flight over groups of people outdoors is allowed as long as your drone is Remote ID compliant. We’ll get to the Remote ID stuff in a bit.
  • Category 2: Sustained flight over groups of people outdoors is allowed as long as your drone is Remote ID compliant. The big difference between Category 1 and Category 2 is that with a Category 1 drone, you can make your own prop guards or whatever and weigh it, and as long as it’s under 0.55 pound, you’re good to go. Category 2 drones have to go through a certification process with the FAA. If you buy a drone, the manufacturer will likely have done this already. If you build a drone, you’ll have to do it yourself.
  • Category 3: No sustained flight over groups of people. You also can’t fly a Category 3 drone over even a single person, unless it’s either a restricted area where anyone inside has been notified that a drone may be flying over them, or the people the drone is flying over are somehow protected (like under a shelter of some kind). Remote ID is also required.
  • Category 4: There’s a process, but you’ll need to talk with the FAA.
What if I want to do stuff that isn’t covered under these new rules?

Part 107 pilots can still apply to the FAA for waivers, just like before.

I fly recreationally and don’t have my Part 107. Can I fly at night, over moving vehicles, or over people?

Definitely not over people or vehicles. Maybe at night, but honestly, best not to do that either?

What’s Remote ID?

The FAA describes Remote ID as being like a digital license plate for your drone. If you’re following the rules, you’re currently required to register your drone (unless it’s very small) and then make that registration number visible on the drone somewhere.

This isn’t particularly useful if you’re someone on the ground trying to identify a drone flying overhead, so the FAA is instead requiring that all drones broadcast a unique identifying number whenever they’re airborne.

Does my drone have Remote ID?

Most likely not. This is a brand new requirement.

What drones will be required to broadcast Remote ID?

Every drone that weighs more than 0.55 pounds (0.25 kg). Drones weighing less than that may be required to have Remote ID if they’re being flown under Part 107.

If you have a drone that weighs under 0.55 pounds and fly recreationally, then lucky you, you don’t have to worry about Remote ID.

What kind of broadcast signal is Remote ID?

The FAA only says that drones “must be designed to maximize the range at which the broadcast can be received,” but it’ll be different for each drone. The target seems to be 400 feet, which is what the FAA figures maximum line of sight distance to be. There was some discussion about making network identification an option (like, if your drone can talk to the Internet somehow, it doesn’t have to broadcast directly), but the FAA thought that would be too complicated. 

What information will Remote ID be sending out?
  • An identifying number for your drone
  • The location of your drone (latitude, longitude, and altitude)
  • How fast your drone is moving
  • Your location (the location of the drone’s controller)
  • A status identifier that says whether your drone is experiencing an emergency (a rough sketch of such a message follows this list)
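
As a loose illustration of what such a broadcast message might carry (the field names and types here are our own assumptions; the actual Remote ID message format is defined by the rule and its means-of-compliance standards, not by this sketch), you could model the listed fields like this:

```python
# Hypothetical representation of the broadcast fields listed above (illustrative only).
from dataclasses import dataclass

@dataclass
class RemoteIdMessage:
    drone_id: str              # unique identifying number for the drone
    latitude: float            # drone position
    longitude: float
    altitude_m: float
    ground_speed_mps: float    # how fast the drone is moving
    operator_latitude: float   # location of the drone's controller
    operator_longitude: float
    emergency: bool            # whether the drone reports an emergency status

msg = RemoteIdMessage("FA-1234567", 41.70, -73.93, 120.0, 8.5, 41.70, -73.94, False)
```
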
Who can access the Remote ID broadcast?

According to the FAA: “Most personal wireless devices within range of the broadcast.” In other words, anyone with interest and a mobile phone will be able to locate both nearby drones and the GPS coordinates of whoever is piloting them.

Only the FAA will be able to correlate the drone’s ID number with your personal information, although they’ll share with law enforcement if requested.

Can I turn Remote ID off?

Part of the Remote ID specification is that the user should not have the ability to disable it, and if you somehow manage to anyway, the drone should then refuse to take off.

When do I actually have to start worrying about Remote ID?

September 2023. You’ve got some time!

What are drone manufacturers going to do?

Manufacturers have 18 months to start integrating Remote ID into their products.

What happens to my old drone when the Remote ID requirement kicks in?

The good news is that at least in some cases, it sounds like even the current generation of drones will be able to meet Remote ID requirements. As one example, we spoke with Brendan Groves, head of policy and regulatory affairs at Skydio, about what Skydio’s plans are for Remote ID going forward, and he made us feel a little better, saying they are tracking this issue closely and that they are “committed to making Skydio 2s in use now compliant with the new rule before the deadline.”

Of course, different drone makers will have different answers, so if you own a drone, you should ask the manufacturer for more information.

What if my drone isn’t going to get updated for Remote ID?

Remote ID doesn't have to be directly integrated into your drone, and the FAA expects that add-on Remote ID broadcast modules will be available.

Can I make my own module?

Sure, but the FAA has to approve it.

Remote ID sucks and I won’t do it! What are my options?

The FAA will partner with educational and research institutions and community-based organizations to establish defined areas in which drones can fly in line of sight only without Remote ID enabled. 

Is there an upside to any of this?

Besides the obvious impact on safety and security, Remote ID will be particularly important for drones that have a significant amount of autonomy. According to the FAA, Remote ID is critical to enabling advanced autonomous operations—like routine flights beyond visual-line-of-sight—by providing airspace awareness.

Where can I find more details?

Executive summaries are here and here, and the full rules are available through the FAA’s website here.
