Drawing lessons from cybersecurity to combat disinformation

Mary Ellen Zurko remembers feeling disappointed. Shortly after receiving her bachelor’s degree from the Massachusetts Institute of Technology, she was working at her first job, evaluating secure computer systems for the U.S. government. Her goal was to determine whether the systems complied with the “Orange Book,” the government’s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. Were they secure in practice? Not really.

“We never had any concerns about whether our end-user security demands were realistic,” says Zurko. “The concept of a secure system was about technology and assumed perfect, obedient humans.”

That discomfort set Zurko on her career-defining track. In 1996, after returning to MIT for her master’s degree in computer science, she published a seminal paper introducing the term “user-centered security.” The concept has since grown into its own field, concerned with keeping cybersecurity and usability in balance. Lessons from usable security have informed the design of phishing warnings shown when visiting unsafe sites and of the “strength” bars that appear when creating a password.

Now a cybersecurity researcher at MIT Lincoln Laboratory, Zurko is still wrestling with the relationship between humans and computers. Her focus has shifted to technology for countering influence operations, that is, foreign adversaries’ attempts to deliberately spread false information (disinformation) on social media with the aim of undermining U.S. ideals.

In a recent article published in IEEE Security & Privacy, Zurko argues that many of the “human problems” within the usable security space have analogues in the fight against disinformation. To some extent, she is facing a challenge similar to the one from her early career: convincing her peers that such human problems are also cybersecurity problems.

“In cybersecurity, attackers use humans as one of the means to subvert technical systems. Disinformation campaigns aim to influence human decision-making. It’s kind of the ultimate use of cyber technology to make things happen,” she says. “Both use computer technology and humans to achieve their goals. Only the goals differ.”

Stay ahead of influence operations

Research into countering online influence operations is still in its infancy. Three years ago, Lincoln Laboratory began studying the topic to understand its implications for national security. As one RAND study notes, the field has since exploded, especially after dangerous and misleading Covid-19 claims, perpetuated in some cases by China and Russia, went viral online. Dedicated funding is now being provided through the laboratory’s Technology Office to develop countermeasures against influence operations.

“For us, it’s important to strengthen democracy by making all of our citizens resilient to the kinds of disinformation campaigns targeted at them by international adversaries who seek to disrupt our internal processes,” says Zurko.

Like a cyberattack, an influence operation often follows a multi-step path, called a kill chain, that exploits predictable weaknesses. Studying and shoring up those weaknesses can work in combating influence operations just as it does in cyber defense. Lincoln Laboratory’s efforts focus on strengthening the earliest stage of that chain by developing technology to support “source tending”: the point at which adversaries begin to find divisive or misleading narrative opportunities and start building accounts to amplify them. Source tending can help cue U.S. intelligence personnel that a disinformation campaign is brewing.

Several research efforts at the laboratory are aimed at source tending. One approach leverages machine learning to study digital personas, with the goal of identifying when the same person is behind multiple malicious accounts. Another area focuses on building computational models that can identify deepfakes, or AI-generated videos and photos made to mislead viewers. Researchers are also developing tools that automatically identify which accounts are most influential over a narrative. First, the tools identify a narrative (in one paper, the researchers studied a disinformation campaign targeting French presidential candidate Emmanuel Macron) and collect data related to it, including keywords, retweets, and likes. Then they use an analytical technique called causal network analysis to define and rank the influence of specific accounts.
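As a rough illustration of that last step, the sketch below builds an amplification graph from hypothetical retweet records and ranks accounts with PageRank. This is only a simplified stand-in for the causal network analysis the article mentions, whose details are not given here; the account names, data format, and keyword filter are all invented for the example.

```python
# Minimal sketch, not the laboratory's actual pipeline: rank accounts by how
# much amplification they receive within a single narrative's retweet network.
# PageRank is used here as a simple stand-in for causal network analysis.
import networkx as nx

# Hypothetical records: (retweeting_account, original_account, narrative_keyword)
retweets = [
    ("acct_b", "acct_a", "macronleaks"),
    ("acct_c", "acct_a", "macronleaks"),
    ("acct_d", "acct_b", "macronleaks"),
    ("acct_d", "acct_a", "macronleaks"),
]

# Directed graph: an edge points from the retweeter to the account being
# amplified, so influence accumulates at the accounts driving the narrative.
graph = nx.DiGraph()
for retweeter, original, keyword in retweets:
    if keyword == "macronleaks":  # keep only the narrative under study
        graph.add_edge(retweeter, original)

# Higher score = more amplification received for this narrative.
scores = nx.pagerank(graph, alpha=0.85)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```

On this toy input the most-amplified account (acct_a) ranks first; a real analysis would also weigh likes, timing, and other signals, as the article describes.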

These technologies feed into the testbed for counter-influence operations that Zurko leads. The goal is to simulate a social media environment and create a safe space for testing counter-technologies. Most importantly, the testbed will allow human operators to stay in the loop and see how new technology can help them do their jobs.

“Our military’s information-operations personnel have no way of measuring impact. With a testbed, we can develop metrics to see whether we can actually identify disinformation campaigns and the actors behind them.”

This vision is still ambitious as the team builds out the testbed environment. Simulating social media users, including what Zurko calls the “gray cell,” those who unwittingly participate in online influence operations, is one of the greatest challenges in emulating real-world conditions. Reconstructing the social media platforms themselves is another: each platform has its own policies for handling disinformation and its own algorithms that affect how far disinformation spreads. For example, The Washington Post reported that Facebook’s algorithm gave “extra value” to posts that drew outrage reactions, making such posts five times more likely to appear in a user’s feed. These often hidden dynamics are important to replicate in a testbed in order to study the spread of fake news and understand the impact of interventions.
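To make that concrete, here is a toy, purely illustrative feed-ranking function of the kind a testbed might need to reproduce; the five-times weight mirrors the figure reported above, but the field names and numbers are invented for this example.

```python
# Toy feed ranking, illustrative only: shows how weighting outrage reactions
# more heavily than likes pushes provocative posts up a simulated feed.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    angry_reactions: int

def engagement_score(post: Post, angry_weight: float = 5.0) -> float:
    # Anger reactions count several times more than likes (weight assumed
    # here for illustration, echoing the reporting cited above).
    return post.likes + angry_weight * post.angry_reactions

posts = [
    Post("calm_news", likes=120, angry_reactions=2),
    Post("outrage_bait", likes=30, angry_reactions=40),
]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p.post_id for p in feed])  # ['outrage_bait', 'calm_news']
```

A simulated platform that ignores this kind of ranking dynamic would likely understate how quickly outrage-driven content surfaces, which is part of what the testbed aims to capture.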

Adopting a full-system approach

In addition to building a testbed for combining new ideas, Zurko advocates for an integrated space that disinformation researchers can call their own, where researchers from psychology, policy, law, and other fields can come together with cybersecurity experts to share the cross-cutting aspects of their work. According to Zurko, the best defense against disinformation requires this kind of diverse expertise and a “full systems approach of both human-centric and technical defenses.”

This space doesn’t exist yet, but it could be on the horizon as the field continues to grow. “Just recently, major conferences started incorporating disinformation studies into their call for papers, which is a true indicator of where things are going,” says Zurko. “But some still cling to the old-fashioned notion that nasty humans have nothing to do with cybersecurity.”

Despite these attitudes, Zurko still trusts the observations she made early in her career: she wants to keep designing technology and approaching problem-solving in a human-centered way. “From the beginning, what I loved about cybersecurity was that it was part mathematical rigor and part sitting around the ‘campfire’ telling stories and learning from each other,” says Zurko. Disinformation derives its power from humans’ ability to influence one another. That ability may also be the most powerful defense we have.
