Disinformation and influence operations – why should we care?
Disinformation is deliberately false or misleading information intended to deceive. When used as part of a broader campaign to shape public perception or influence the decision-making of governments or businesses, it becomes part of what are known as influence operations.
These operations are typically coordinated by state or proxy actors and are designed to shape the information environment in ways that benefit the perpetrator’s strategic interests.
Digital platforms and artificial intelligence act as force multipliers for threat actors by increasing both the reach and speed of communication. AI models and deepfake technology make it easier to produce high-quality, tailored content at scale. In addition to being powerful tools for generating and disseminating disinformation, AI models can also repeat disinformation present in their training data. Individually and combined, these challenges reinforce the need for critical thinking and source vetting.
Why This Matters for Businesses
Disinformation and influence operations also affect businesses and industries in multiple ways:
- False narratives or manipulated reporting can damage corporate and brand reputation and erode investor confidence and customer loyalty.
- Public trust in sectors like energy, health, or technology can be undermined through distorted messaging.
- Coordinated information campaigns can distort public or political debate around regulatory issues or commercial decisions.
For example, companies may find themselves part of a broader narrative (e.g., being accused of war profiteering, environmental harm, or foreign influence) that has been seeded and amplified by foreign actors for strategic purposes.
Threat Actors
Several governments use disinformation and influence operations systematically, employing tools such as fake social media profiles, state-aligned news outlets, anonymous websites, and coordinated amplification networks.
Russia remains a leading actor in this domain and has created an ecosystem of official channels, state-aligned media, proxy websites, and manipulated social media content. This includes mimicking the websites of established media outlets to publish fabricated stories. A known Russian tactic in influence operations is to produce high-volume, repetitive messaging with little concern for consistency, intended to confuse and overwhelm audiences.
China’s operations often take a longer-term, narrative-shaping approach, designed to frame China as a responsible, cooperative global power. This includes using networks of fake accounts to promote pro-China messaging and counter criticism, especially during elections or geopolitical crises, as well as state media, embassies, and think tanks to amplify strategic narratives about trade, diplomacy, and development.
Iran has also conducted targeted operations. For example, Iran has used fake personas spreading content on both sides of divisive political issues in the U.S. with the goal of creating polarization and distrust.
Targeting Norway
Norway is regularly targeted by foreign influence operations, particularly from Russia and China. Russian networks have published thousands of social media posts, including in Norwegian, promoting narratives that Norway is escalating the war in Ukraine, is hostile to Russian culture, and blindly follows U.S. and NATO policy. These campaigns often exploit local tensions, especially in Northern Norway, where Russian actors have tried to amplify regional discontent with central authorities in Oslo to create societal division. Several anonymous Norwegian-language websites linked to Russian domains further complicate detection: they mimic legitimate news outlets and use AI-generated content and fake identities to make disinformation appear locally credible.
China’s approach is more subtle. While Chinese actors mentioned Norway in fewer posts, the messaging was mostly positive, emphasizing climate cooperation and trade. However, the underlying strategic goal is to promote China as a constructive global partner while reinforcing narratives that weaken transatlantic unity.
Norwegian businesses have also been targeted directly. Companies in the energy, finance, and health sectors face operations aimed at influencing decision-makers or damaging reputations. These risks are part of a broader threat landscape where disinformation and digital influence increasingly intersect with national and corporate security.
Protecting your business against disinformation
Disinformation and influence operations can harm a company’s reputation, distort public debate around its operations, or be used to pressure regulators, investors, or partners.
To counter this, businesses should actively monitor their information environment, validate sources, and verify the information used in their decision-making. This also applies when using AI: source verification and critical thinking remain essential.
In an increasingly contested information space, awareness, credibility, and rapid response are key. Companies should integrate disinformation risk into their operations and crisis planning.
Please stay tuned for our upcoming newsletter on how we at OpenHorizon are working on source validation and information verification to provide trusted intelligence to our clients.