Disinformation for Infosec Wonks
(How to think about fake news if you mainly think about malware)
As an information security professional, I've spent years safeguarding digital assets, mitigating cyber threats, and ensuring the integrity of data systems. However, a growing threat has emerged that challenges the very foundations of what I understood that work to entail: disinformation.
While it might seem outside our traditional purview, I argue that disinformation is very much an infosec problem; one our industry will soon need to address and consider its own.
The Problem Set
Disinformation campaigns represent a primary concern in the realm of information warfare due to their intentional and sophisticated nature.
Unlike misinformation, which is spread unknowingly, disinformation is deliberately created and disseminated with the explicit intent to deceive, utilizing advanced techniques to maximize reach and impact.
Misinformation: False information shared unintentionally without the aim to deceive (e.g., spreading an unverified rumor).
Malinformation: True information shared with harmful intent to cause damage or distress (e.g., leaking private information to discredit someone).
Disinformation: False information shared deliberately to deceive or manipulate (e.g., creating a fake news story for political gain).
The main difference is the intent behind sharing the information. I will use “disinformation” throughout this paper to cover all types of toxic data without assigning intent.
It is also important to distinguish between disinformation campaigns and the wider phenomenon of influence campaigns. A sophisticated influence campaign might incorporate elements of disinformation to achieve its goals, while a disinformation campaign is ultimately a form of influence operation. The key distinction lies in the primary reliance on false information in disinformation campaigns versus the broader, potentially more nuanced approach of influence campaigns.
The scalability afforded by modern technology and social media platforms allows disinformation to spread rapidly and reach vast audiences, while its persistence makes it extremely difficult to correct once it takes hold.
This deliberate manipulation of information has far-reaching consequences, including undermining trust in legitimate institutions, influencing geopolitical landscapes, manipulating markets, and posing significant threats to democratic processes.
The deliberate nature of disinformation campaigns makes them particularly challenging to combat, as those behind them continually adapt and evolve their tactics in response to counter-measures. Furthermore, these campaigns often exploit psychological vulnerabilities and cognitive biases, making them more effective than accidental misinformation.
The cumulative effect of disinformation can erode trust not only in specific institutions but in the very concept of objective truth, potentially destabilizing societies and undermining the foundations of informed public discourse.
This combination of intentionality, sophistication, scalability, and psychological exploitation makes disinformation campaigns a uniquely potent threat in the modern information landscape.
Drawing From the CIA Triad
In the infosec world, we commonly refer to the “CIA Triad”, a foundational model in which C, I, and A stand for Confidentiality, Integrity, and Availability.
Not all aspects of the Triad apply equally to the challenge of disinformation. Of the three components, Integrity is the most relevant to today's disinformation problem.
Let's quickly examine each to understand why:
Confidentiality: This aspect is less relevant in the context of disinformation, which primarily deals with public information.
Availability: While important for information systems, availability (commonly referred to as ‘uptime’) is not a central concern in addressing disinformation, which often thrives on over-availability of false information.
Integrity: This is the primary concern in combating disinformation. Integrity involves ensuring data is trustworthy and free from tampering–that the data under our purview remains authentic, accurate, and reliable as it spreads through networks.
Consider a typical malware attack which compromises system integrity, potentially altering or corrupting data. Disinformation operates similarly, but on a societal scale. It corrupts the 'data' (information) that people rely on to make decisions, form opinions, and understand the world around them.
Therefore, my focus in this position paper is primarily on preserving information integrity in the public sphere by adapting infosec principles and methodologies to this unique emergent challenge. Just as we work to prevent unauthorized alterations to data in our systems, we now face the challenge of maintaining the integrity of the broader information ecosystem.
I argue this has always been our industry’s responsibility, despite the lack of concern from the people who pay our salaries.
Infosec Threats vs. Disinfo Campaigns: Four Parallels
To illustrate why disinformation falls neatly within the purview of infosec, we will examine four key parallels between traditional infosec threats and disinformation campaigns:
1. Threat Actors: APT Groups vs. Disinformation Campaigns
In traditional infosec:
Advanced Persistent Threat (APT) groups are sophisticated, often state-sponsored entities that conduct prolonged, targeted cyber attacks.
They have clear objectives, substantial resources, and employ advanced techniques to evade detection.
In the disinformation landscape:
Disinformation campaigns are often orchestrated by equally sophisticated entities, including state actors, political groups, or other organizations with specific agendas.
These campaigns are persistent, well-funded, and employ advanced psychological and technological tactics to spread false narratives.
Parallel: Both APT groups and disinformation campaigns represent organized, persistent threats that require ongoing vigilance and sophisticated countermeasures.
2. Attack Vectors: Network Vulnerabilities vs. Cognitive Biases
In infosec:
Attackers exploit vulnerabilities in software, hardware, or network configurations to gain unauthorized access or control.
These vulnerabilities might include unpatched systems, misconfigured firewalls, or weak authentication mechanisms.
In disinformation:
Campaigns exploit cognitive biases and psychological vulnerabilities to manipulate beliefs and behaviors.
Common ‘vulnerabilities’ include confirmation bias, emotional triggers, in-group favoritism, and the tendency to accept information from perceived authorities.
Parallel: Just as we work to patch software vulnerabilities, we must also work to ‘patch’ cognitive vulnerabilities through education and critical thinking skills.
3. Payload: Malicious Code vs. False Narratives
In infosec:
The payload is often malicious code designed to perform unauthorized actions, such as data exfiltration, system control, or destruction.
This code is typically hidden or disguised to avoid detection.
In disinformation:
The payload consists of false or misleading narratives designed to influence opinions, decisions, or actions.
These narratives are often crafted to seem credible and may be mixed with truthful information to avoid detection.
Parallel: In both cases, the payload is the core of the threat, designed to cause specific effects once it reaches its target. Detection and neutralization of these payloads are critical in both domains.
4. Impact: System Compromise vs. Belief Manipulation
In infosec:
The impact of a successful attack might include data breaches, financial losses, operational disruptions, or damage to reputation.
The compromise of critical systems can have far-reaching consequences for organizations or even national security.
Attacks can lead to loss of customer trust, regulatory fines, and long-term brand damage.
In severe cases, cyber attacks can disrupt critical infrastructure, affecting essential services and public safety.
In disinformation:
The impact involves the manipulation of individual and collective beliefs, potentially leading to societal divisions, election interference, or undermining of democratic institutions.
On a larger scale, widespread belief in false narratives can influence policy decisions, public health outcomes, or even geopolitical stability.
Disinformation can erode trust in legitimate institutions, experts, and media outlets, creating a “post-truth” environment.
It can exacerbate existing social tensions, leading to increased polarization and potential civil unrest.
In the context of elections, disinformation can undermine the democratic process by influencing voter behavior or decreasing participation.
During public health crises, health-related disinformation can lead to non-compliance with safety measures, affecting community health outcomes.
Economic impacts can occur when disinformation campaigns target specific industries or financial markets.
Parallel: Both types of attacks can have severe, long-lasting impacts that extend far beyond the immediate targets, affecting organizations, societies, and even global dynamics. The ripple effects of both cyber attacks and disinformation campaigns can be felt across multiple domains–social, political, economic, and in some cases, physical security.
A few key similarities in impact between cyber attacks and disinfo attacks:
Trust Erosion: Both can significantly damage trust in systems, institutions, or information sources.
Economic Consequences: Financial losses can result from both types of attacks, though through different mechanisms.
Operational Disruption: While cyber attacks directly disrupt operations, disinformation can indirectly cause disruptions through altered behaviors or decisions.
Harm to Individuals and Communities: Both have the potential to cause lasting physical and psychological harm, and can negatively impact social bonds and networks.
National Security Implications: Both can pose serious threats to national security and stability.
Cascading Effects: The impacts of both can cascade through interconnected systems or social networks, amplifying the initial damage.
Long-term Consequences: The effects of both can persist long after the initial attack or campaign, requiring ongoing efforts to mitigate and recover.
Understanding Disinformation in Infosec Terms
To better grasp how disinformation intersects with our field, I propose the following five-pillar framework:
Information Integrity: This is the cornerstone of our approach. Just as we ensure data integrity in our systems, we need to consider the integrity of information in the public sphere. Disinformation directly compromises this integrity.
Source Credibility: In infosec, we use concepts like certificate authorities and digital signatures to verify the authenticity of sources. Similarly, combating disinformation requires robust methods for verifying and maintaining the credibility of information sources.
Propagation Control: We often deal with containing the spread of malware or the impact of a breach. With disinformation, we face the challenge of controlling the propagation of false or misleading information across networks and platforms.
Cognitive Resilience: While we focus on system resilience in traditional infosec, disinformation introduces the need for 'cognitive resilience'–equipping people with the tools to critically evaluate information and resist manipulation.
Ecosystem Monitoring: Just as we continuously monitor our systems and networks for threats, addressing disinformation requires ongoing surveillance of the information ecosystem to detect and respond to disinformation campaigns.
This framework provides a structured approach for us security pros to contribute our expertise to the vital task of combating disinformation.
By applying these concepts to the challenge of disinformation, we can develop more effective strategies for maintaining the integrity of our information ecosystem.
Technical Challenges and Solutions
The intersection of disinformation with infosec presents a unique set of challenges that require innovative solutions.
This section explores five key areas where current infosec principles and methodologies can plausibly be adapted to combat the adjacent problem of disinformation.
1. Scale and Speed
Challenge: Disinformation can spread rapidly across vast, low- or no-boundary networks, much like fast-propagating malware.
Solutions:
a) Real-time Detection
Develop AI-powered systems to identify potential disinformation as it emerges
Parallels next-generation firewalls detecting zero-day threats
b) Rapid Response Mechanisms
Create automated systems to flag, quarantine, or contextualize suspected disinformation
Implement efficient retraction and correction propagation systems
Develop protocols for quick revocation or flagging of repeat offenders
c) Cross-platform Coordination
Establish information sharing protocols between platforms and organizations
Monitor known disinformation sources and track emerging narratives
Share threat intelligence across the information ecosystem
d) Scalable Architecture
Design systems capable of handling massive data volumes from social media
Draw inspiration from cloud security solutions for large-scale data processing
e) Rate Limiting
Implement systems to slow viral spread until veracity can be established
Apply principles similar to DDoS prevention in traditional cybersecurity (a minimal sketch follows this list)
f) Early Warning Systems
Develop mechanisms for early detection of emerging disinformation threats
Parallel intrusion detection systems in cybersecurity
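To make the rate-limiting idea in item (e) concrete, here is a minimal Python sketch of a sliding-window limiter that slows resharing of a single content item until it has been reviewed. The class name, thresholds, and the notion of a platform calling `allow_share` on every reshare are assumptions for illustration, not a description of any real platform's API.

```python
import time
from collections import defaultdict, deque

class ShareRateLimiter:
    """Sliding-window limiter that throttles resharing of a single
    content item until its veracity has been reviewed.

    Analogous to DDoS rate limiting: bursts above a threshold are
    held back for review rather than dropped outright.
    """

    def __init__(self, max_shares_per_window=500, window_seconds=600):
        self.max_shares = max_shares_per_window
        self.window = window_seconds
        self.share_times = defaultdict(deque)   # content_id -> recent share timestamps
        self.verified = set()                   # content_ids cleared by review

    def mark_verified(self, content_id):
        """A human or automated fact-check cleared this item for normal spread."""
        self.verified.add(content_id)

    def allow_share(self, content_id, now=None):
        """Return True if the share proceeds normally, False if it should be
        queued for review before further amplification."""
        if content_id in self.verified:
            return True
        now = time.time() if now is None else now
        window = self.share_times[content_id]
        # Drop timestamps that have fallen outside the sliding window.
        while window and now - window[0] > self.window:
            window.popleft()
        if len(window) >= self.max_shares:
            return False        # viral burst: hold until verified
        window.append(now)
        return True

# Example: a burst of shares on an unverified item eventually gets held.
limiter = ShareRateLimiter(max_shares_per_window=3, window_seconds=60)
for i in range(5):
    print(i, limiter.allow_share("post-123", now=1000 + i))
```

As with DDoS mitigation, the specific limits matter far less than the principle: slowing amplification buys the verification pipeline time.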
2. Attribution
Challenge: Identifying the source of disinformation can be as challenging as attributing a sophisticated cyber attack.
Solutions:
a) Digital Forensics
Apply forensic techniques to trace disinformation origins
Analyze metadata, stylometry, and social network patterns
b) Behavioral Analysis
Study patterns in content creation, posting times, and amplification strategies
Identify coordinated disinformation campaigns (see the sketch after this list)
c) OSINT Techniques
Leverage Open Source Intelligence to correlate online activities with real-world events and actors
d) Deception Detection
Develop AI models to identify linguistic patterns and inconsistencies indicative of deceptive content
e) Identity Verification
Implement multi-faceted approaches to verify information source identities
Mirror multi-factor authentication principles in cybersecurity
f) Chain of Trust
Develop systems for tracing information provenance through intermediaries
Parallel to PKI chain of trust for digital certificates
g) Honeypots
Create monitored channels to attract and study disinformation campaigns
Develop an understanding of current tactics without enabling broader propagation
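As one illustrative sketch of the behavioral-analysis idea in item (b), the code below flags groups of accounts posting near-identical text within a short time window, a common (though not conclusive) signature of coordinated amplification. The input format, thresholds, and normalization are assumptions chosen for brevity.

```python
from collections import defaultdict

def normalize(text):
    """Crude normalization so trivially edited copies still match."""
    return " ".join(text.lower().split())

def find_coordinated_clusters(posts, min_accounts=3, max_span_seconds=300):
    """posts: iterable of (account_id, timestamp, text) tuples.

    Returns clusters of near-identical posts made by several distinct
    accounts within a narrow time window. This is a weak signal: it will
    also catch benign copy-paste memes, so results need human review.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[normalize(text)].append((account, ts))

    clusters = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        accounts = {a for a, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= min_accounts and span <= max_span_seconds:
            clusters.append({"text": text, "accounts": sorted(accounts), "span_s": span})
    return clusters

# Toy example: three accounts push the same claim within two minutes.
posts = [
    ("acct_a", 100, "BREAKING: the dam has failed, evacuate now"),
    ("acct_b", 130, "breaking: the dam has failed, evacuate now"),
    ("acct_c", 190, "BREAKING: the dam has failed,  evacuate now"),
    ("acct_d", 9000, "lovely weather today"),
]
print(find_coordinated_clusters(posts))
```

A real pipeline would combine this with account-age, posting-time, and network signals before treating anything as a campaign.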
3. Adversarial Machine Learning
Challenge: Disinformation tactics will adapt to circumvent fact-checking algorithms, similar to malware evolution.
Solutions:
a) Robust ML Models
Develop resilient machine learning models using adversarial training and ensemble methods
b) Continuous Learning
Implement systems that quickly adapt to new disinformation tactics
Mirror EDR systems' continuous threat detection updates
c) Multi-modal Analysis
Combine text, image, video, and network analysis for comprehensive detection
d) Explainable AI
Develop models providing clear reasoning for decisions
Enable human analysts to verify and refine system judgments
e) Data Validation
Implement robust fact-checking and consensus-based truth verification systems
Analogous to checksums and hash functions in data integrity verification
f) Reputation Systems
Develop dynamic reputation scoring for information sources
Similar to reputation-based antivirus systems (a scoring sketch follows this list)
g) Continuous Assessment
Continuously evaluate community and platform resilience to disinformation threats
Parallel ongoing vulnerability assessments in infosec
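To illustrate the reputation-system idea in item (f), a dynamic source score might look something like the sketch below, loosely analogous to reputation-based antivirus scoring. The update rule, thresholds, and tier names are invented for the example and would need calibration against labeled data.

```python
class SourceReputation:
    """Exponentially weighted reputation score for an information source.

    Each fact-check outcome nudges the score toward 1.0 (reliable) or
    0.0 (unreliable); recent behavior counts more than old behavior,
    much like reputation-based scoring of software publishers.
    """

    def __init__(self, initial=0.5, learning_rate=0.2):
        self.score = initial
        self.lr = learning_rate

    def record_check(self, claim_was_accurate):
        """Move the score a step toward the latest fact-check outcome."""
        target = 1.0 if claim_was_accurate else 0.0
        self.score += self.lr * (target - self.score)

    def tier(self):
        if self.score >= 0.8:
            return "trusted"
        if self.score <= 0.3:
            return "flagged"
        return "unverified"

# A source that repeatedly fails fact-checks drifts into the flagged tier.
rep = SourceReputation()
for accurate in [False, False, True, False, False, False]:
    rep.record_check(accurate)
print(round(rep.score, 3), rep.tier())
```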
4. Network Analysis
Challenge: Understanding disinformation propagation through social networks requires advanced techniques.
Solutions:
a) Graph Analysis
Apply graph theory to map information spread across social networks
Identify key nodes and dissemination/amplification patterns (see the graph sketch after this list)
b) Influence Mapping
Develop tools to identify and track influential authors and social accounts
Analyze their role in spreading information or disinformation
c) Temporal Analysis
Study time-based patterns of information spread
Distinguish organic viral content from coordinated campaigns
d) Cross-platform Tracking
Create methods to follow narrative spread across different platforms and communities
e) Version Control
Implement systems to track narrative and claim evolution over time
Identify when and how information gets distorted
f) Network Segmentation
Develop ways to contain disinformation spread within specific network segments
Mirror network segmentation principles in cybersecurity
g) Anomaly Detection
Develop systems to identify abnormal information spread patterns
Similar to behavioral analytics in cybersecurity
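As a minimal sketch of the graph-analysis idea in item (a), the snippet below uses the third-party networkx library (one reasonable choice among many) to build a reshare graph and rank accounts that sit on many spread paths. The edge data is a toy example; real reshare graphs would be orders of magnitude larger.

```python
import networkx as nx   # third-party graph library (pip install networkx)

# Each edge means "account u's post was reshared by account v".
reshares = [
    ("origin", "amp1"), ("origin", "amp2"),
    ("amp1", "user1"), ("amp1", "user2"), ("amp1", "user3"),
    ("amp2", "user4"),
    ("user3", "user5"), ("user3", "user6"),
]

G = nx.DiGraph()
G.add_edges_from(reshares)

# Betweenness centrality highlights accounts that sit on many spread
# paths, i.e. likely amplifiers rather than passive consumers.
centrality = nx.betweenness_centrality(G)
amplifiers = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]
print("Top amplifier candidates:", amplifiers)

# Out-degree is a cruder signal: who directly seeds the most reshares.
print("Direct reach:", sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True)[:3])
```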
5. Inoculation Platforms
Challenge: Building cognitive resilience against disinformation at scale.
Solutions:
a) Specific Disinformation Training
Use AI to generate ‘fake fake news’ for training purposes
Allow practice in identifying and refuting specific disinformation instances
b) General Tactical Education
Provide comprehensive education on disinformation TTPs
Parallel to signature-based and heuristic analysis in antivirus systems
Suggested components of a ‘tactical education’:
Deceptive Narrative Structures: Teaching readers to recognize common story structures used in disinformation, such as false dichotomies, straw man arguments, or slippery slope fallacies.
Emotional Manipulation Techniques: Explaining how disinformation often exploits emotional triggers like fear, anger, or tribal loyalty to bypass critical thinking.
Source Manipulation: Educating people on tactics like creating fake experts, misattributing quotes, or using deceptive website designs to appear credible.
Amplification Strategies: Demonstrating how disinformation campaigns use bots, coordinated sharing, or hijacking trending topics to artificially boost their message's reach.
Cognitive Bias Exploitation: Illustrating how disinformation leverages cognitive biases like confirmation bias, anchoring effect, or the illusion of truth effect.
Media Manipulation Techniques: Teaching about tactics like selective editing, decontextualization, or deep fakes used to create misleading content.
c) Critical Thinking Skills
Develop abilities to analyze sources, question claims, and seek verification
d) Cognitive Resilience
Equip people with tools to evaluate information and resist manipulation
Could include forewarning, exposure to common manipulation techniques, providing factual counterarguments, and encouraging active refutation of false claims (a minimal exercise structure is sketched below)
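A prebunking exercise combining items (a) and (d) could be represented as simply as the sketch below: each training item pairs a synthetic (‘fake fake’) claim with the manipulation technique it demonstrates and a refutation shown after the learner answers. The data shape and scoring are illustrative only; generating the synthetic claims and delivering the training are separate problems.

```python
from dataclasses import dataclass

@dataclass
class InoculationItem:
    """One prebunking exercise: a synthetic false claim, the manipulation
    technique it illustrates, and a short refutation shown afterward."""
    claim: str                # AI-generated "fake fake news" for practice
    technique: str            # e.g. "fake expert", "false dichotomy"
    refutation: str           # the corrective explanation

def run_session(items, answers):
    """Score a learner's attempt to name the technique behind each claim."""
    correct = 0
    for item, guess in zip(items, answers):
        if guess.lower() == item.technique.lower():
            correct += 1
        else:
            print(f"Missed: '{item.claim}' used {item.technique}. {item.refutation}")
    return correct / len(items)

items = [
    InoculationItem(
        claim="A retired 'chief virologist' says the outbreak is staged.",
        technique="fake expert",
        refutation="No such role exists; the credentials could not be verified.",
    ),
    InoculationItem(
        claim="Either we ban all imports now or the economy collapses.",
        technique="false dichotomy",
        refutation="Multiple intermediate policy options exist.",
    ),
]
print("Score:", run_session(items, ["fake expert", "slippery slope"]))
```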
The Role of Infosec Professionals
As security pros, we possess a unique set of skills and perspectives which can strongly contribute to addressing the disinformation challenge:
1. Threat Modeling: Our experience in identifying potential vulnerabilities and attack vectors can be applied to modeling how disinformation might spread and exploit weaknesses in information ecosystems.
Attack Tree Analysis: Just as we map out potential paths an attacker might take to compromise a system, we can create ‘disinformation attack trees’ to visualize how false narratives might propagate through various channels, as sketched after this list.
STRIDE Framework Adaptation: We can adapt the STRIDE threat model (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) to categorize different types of disinformation threats.
Red Team Exercises: Similar to how we conduct penetration testing, we can assemble teams to create and disseminate (harmless) false information, testing the resilience of our defensive measures.
Threat Intelligence Application: Our skills in gathering and analyzing threat intelligence can be applied to monitoring known disinformation sources and predicting emerging narratives.
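To make the ‘disinformation attack tree’ idea concrete, here is a minimal Python tree structure with AND/OR gates and rough feasibility scores. The narrative steps and numbers are invented for illustration; the point is that the same recursive decomposition we use for intrusion paths applies to propagation paths for a false narrative.

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """A node in a disinformation attack tree.

    gate="OR": any child path achieves the goal.
    gate="AND": all child steps are required.
    Leaves carry a rough feasibility estimate (0..1) for prioritization.
    """
    goal: str
    gate: str = "OR"
    feasibility: float = 0.0
    children: list = field(default_factory=list)

def score(node):
    """Propagate leaf feasibility up the tree (max for OR, min for AND)."""
    if not node.children:
        return node.feasibility
    child_scores = [score(c) for c in node.children]
    return max(child_scores) if node.gate == "OR" else min(child_scores)

tree = AttackNode("Voters believe election results are fraudulent", "OR", children=[
    AttackNode("Seed a fabricated 'leaked memo'", "AND", children=[
        AttackNode("Forge a plausible document", feasibility=0.6),
        AttackNode("Launder it through a fringe outlet", feasibility=0.8),
    ]),
    AttackNode("Misrepresent real footage out of context", feasibility=0.9),
])
print("Most feasible path score:", score(tree))   # shows where to defend first
```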
2. Incident Response: The principles we use in responding to security breaches can inform strategies for rapidly identifying and countering disinformation campaigns.
Triage and Classification: Just as we categorize security incidents, we can develop systems to quickly assess and prioritize responses to emerging disinformation threats (a triage sketch follows this list).
Containment Strategies: Our experience in containing security breaches can be applied to limiting the spread of disinformation, possibly through temporary content restrictions or targeted fact-checking interventions.
Forensic Analysis: The tools and techniques we use for digital forensics can be adapted to trace the origins and spread patterns of disinformation campaigns.
Post-Incident Review: Our practice of conducting post-mortem analyses after security incidents (‘After Action Reporting’) can be applied to evaluating the effectiveness of responses to disinformation campaigns and refining future strategies.
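A hedged sketch of the triage idea above: the function below maps a disinformation incident onto familiar IR-style priority levels. The fields, harm categories, and thresholds are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class DisinfoIncident:
    narrative: str
    reach: int               # estimated accounts exposed so far
    growth_per_hour: float   # multiplicative spread rate over the last hour
    harm_category: str       # e.g. "public_health", "election", "financial", "other"

HIGH_HARM = {"public_health", "election", "critical_infrastructure"}

def triage(incident):
    """Map an incident to a response priority, mirroring severity levels in IR."""
    if incident.harm_category in HIGH_HARM and incident.growth_per_hour > 1.5:
        return "P1: immediate cross-team response"
    if incident.reach > 100_000 or incident.growth_per_hour > 1.5:
        return "P2: same-day fact-check and platform notification"
    if incident.reach > 1_000:
        return "P3: monitor and document"
    return "P4: log only"

print(triage(DisinfoIncident("fake evacuation order", 40_000, 2.1, "public_health")))
```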
3. Risk Assessment: Our frameworks for assessing and prioritizing security risks can be adapted to evaluate and prioritize responses to different types of disinformation threats.
Impact Analysis: Just as we assess the potential impact of security breaches, we can develop models to quantify the potential harm of different types of disinformation.
Likelihood Estimation: Our methods for estimating the likelihood of various security threats can be applied to predicting the probability of different disinformation narratives gaining traction.
Risk Matrices: We can adapt cybersecurity risk matrices to create visual representations of disinformation risks, helping decision-makers prioritize responses (a simple scoring sketch follows this list).
Continuous Risk Monitoring: Our practice of ongoing risk assessment can be applied to continuously evaluating the changing landscape of disinformation threats.
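Impact analysis, likelihood estimation, and a risk matrix compose naturally; the sketch below scores hypothetical narratives on a standard 5x5 grid. The band boundaries and example scores are illustrative, not calibrated.

```python
def risk_level(likelihood, impact):
    """Classic 5x5 risk matrix: both inputs range from 1 (lowest) to 5 (highest)."""
    score = likelihood * impact
    if score >= 20:
        return "critical"
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical narratives scored by an analyst or model.
narratives = [
    ("vaccine microchip claim resurfacing", 4, 4),
    ("fabricated CEO resignation letter", 2, 5),
    ("doctored local crime statistics", 3, 2),
]
for name, likelihood, impact in narratives:
    print(f"{name}: {risk_level(likelihood, impact)}")
```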
4. Secure Design: The concept of “security by design” can be extended to “integrity by design” in information systems, building in safeguards against the spread of disinformation:
Input Validation: Just as we validate inputs in software to prevent injection attacks, we can develop systems to validate the credibility of information sources before allowing widespread dissemination.
‘Least Privilege’ Principle: We can apply this principle to information systems, limiting the ability to rapidly spread unverified information to accounts with established credibility.
Defense in Depth: We can implement multiple layers of safeguards against disinformation, including source verification, fact-checking, and user warnings, similar to how we use multiple security controls in cybersecurity.
Fail-Safe Defaults: We can design systems that default to treating unverified information cautiously, requiring additional verification before allowing broad distribution (a gating sketch appears after this list).
‘Shift Left’ Approach: In software development and security, ‘shifting left’ means addressing issues earlier in the development process. By ‘shifting left’ in our approach to disinformation, we can address potential issues earlier in the information lifecycle, making it easier to maintain the integrity of the overall information ecosystem:
Early Detection: Implement AI-driven tools to flag potential disinformation during the content creation or submission process, before it enters the main information flow.
Proactive Fact-Checking: Integrate automated fact-checking systems that can verify claims in real-time as content is being created or shared.
Author Accountability: Develop systems that encourage content creators to provide sources or evidence for claims during the creation process, rather than after publication.
Educational Interventions: Implement just-in-time learning modules that educate users about disinformation tactics as they're creating or sharing content, helping to prevent inadvertent spread of false information.
Collaborative Filtering: Design systems that allow for community-driven identification of potential disinformation at the earliest stages of content virality.
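As a sketch of the fail-safe-defaults and ‘least privilege’ ideas above, the function below decides how widely a newly created post may initially be distributed, defaulting to limited reach when neither the source nor the content has been verified. The inputs and tier names are hypothetical.

```python
def distribution_tier(source_reputation, claims_verified, has_citations):
    """Decide initial amplification for a post, failing safe by default.

    source_reputation: 0.0-1.0 score (e.g. from a reputation system)
    claims_verified:   True once automated or human fact-checking passes
    has_citations:     author supplied sources at creation time ('shift left')
    """
    if claims_verified and source_reputation >= 0.8:
        return "full"            # eligible for recommendation and amplification
    if has_citations or source_reputation >= 0.5:
        return "followers_only"  # visible, but not algorithmically boosted
    return "limited"             # reachable by direct link only, pending review

# An unverified post from a low-reputation source stays in the limited tier.
print(distribution_tier(source_reputation=0.4, claims_verified=False, has_citations=False))
```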
5. Training and Awareness: Our experience in security awareness training can be adapted to develop effective inoculation programs. We can design scenarios and simulations that expose people to disinformation tactics in a controlled manner, helping to build cognitive resilience on a large scale.
Phishing Simulation Adaptation: Just as we use phishing simulations to train employees, we can create disinformation simulations to help people recognize manipulation tactics.
Gamification: We can apply gamification techniques used in cybersecurity training to create engaging, interactive experiences that teach critical thinking and fact-checking skills.
Micro-Learning: Our experience with bite-sized security training can be applied to developing short, frequent lessons on identifying and resisting disinformation.
Metrics and Evaluation: We can adapt the methods we use to measure the effectiveness of security awareness programs to assess and improve disinformation resilience training.
By leveraging these skills and approaches, infosec professionals can play a crucial role in developing comprehensive strategies to combat disinformation.
Our unique perspective on protecting information integrity, combined with our technical expertise and strategic thinking, positions us to make significant contributions to this critical challenge facing our digital societies.
Ethical Considerations
As we extend our focus to include disinformation, we must be mindful of the ethical implications. Addressing disinformation intersects with thorny issues of civil rights and democratic values in ways more complex than most traditional infosec challenges, which raises important ethical questions:
Free Speech: How to balance information integrity with freedom of expression?
Privacy: What are the implications of increased monitoring and analysis of online communication?
Power Dynamics: Who decides what constitutes disinformation? How to prevent misuse of these tools for censorship?
Transparency: How to ensure accountability in automated disinformation detection systems?
Unintended Consequences: Could our efforts inadvertently suppress legitimate discourse or minority viewpoints?
We must strive to maintain information integrity without enabling censorship or infringing on individual rights. These considerations therefore necessitate ongoing, open dialogue between technologists, ethicists, policymakers, and the public.
Conclusion
Disinformation represents a significant threat to the integrity of our information ecosystem, one which has direct parallels to the challenges we face in traditional infosec. By applying our expertise in new ways and expanding our conceptual frameworks, we as infosec professionals can begin to play our crucial role in addressing this emerging threat.
The battle against disinformation is not separate from our mission to protect information integrity—it is an essential extension of it. As guardians of data in the digital age, it's incumbent upon us to rise to this new challenge by adapting our skills and perspectives to protect not just the integrity of computerized data, but the integrity of information itself.