Introduction
An information revolution has been underway for roughly the last 20 years. As technology has developed, its power has grown dramatically, to the point where most people in Europe now get some or all of their information from online digital media.
This has radically altered both the volume and quality of the information we can access and how we consume it. The migration of information users away from traditional media to online sources has given rise to a phenomenon loosely defined as disinformation, which is of increasing concern to the European Union. It has been recognised as a threat to “fundamental values like freedom of expression and right to information”.
Defining the problem
Disinformation can be defined as misleading or false information that has been created and spread intentionally, for reasons that could include political or economic gain or the aim of causing public harm.
The deliberate element of disinformation differentiates it from misinformation, which refers to the sharing of false or inaccurate information without the intent to deceive.
Defining these phenomena is an important aspect of mitigation efforts. The European Commission highlights the strategic intent behind disinformation and its false nature. However, keen to protect freedom of expression, it balances this by also recognising that “satire and parody, or clearly identified partisan news and commentary” do not constitute disinformation.
The difference between misinformation and disinformation largely centres on the intent to deceive.
However, in practice, clear definitions are difficult to achieve. For example, if content starts out as disinformation, that is, false information created with strategic intent, but is then shared by people who believe it and do not set out to deceive, does it become misinformation?
There is also the problem of deciding what is true and what is false. Who decides, and based on what information? Disinformation content is often not illegal, so legislative solutions are difficult to formulate.
These definitional and legal uncertainties add to the challenge of acting proactively to mitigate misinformation and disinformation.
Disinformation is also a two-way process and its success relies not just on a malicious sender but also on a receptive audience. It is “a cause of political polarisation, populist support, and anti-establishment feeling; it is also a symptom of a political system that already finds itself on shaky ground.”
Therefore, while online digital media plays a key role in facilitating the spread of disinformation, failures by public institutions also shape public discourse. For example, health scandals in France, such as the contaminated blood scandal in the 1990s or the rollout of the H1N1 vaccine in 2009, can be linked to vaccine scepticism there.
What is the EU doing?
The first comprehensive response to online disinformation was a Communication published by the European Commission in April 2018. This set out a policy approach based on increasing transparency around the production and distribution of information, supporting journalism and enhancing the credibility of information.
A major initiative arising from this was the Code of Practice on Disinformation, a voluntary, self-regulatory framework signed by major online platforms, social networks and advertisers. The Code is limited by its voluntary nature and has attracted criticism for concentrating regulatory power in the hands of online platforms. However, the counterpoint is that heavy public regulation can be characterised as censorship, undermining trust in the overall information ecosystem and increasing the “anti-establishment feeling that drives disinformation in the first place.”
That said, making the Code mandatory remains an option. A recent paper from the College of Europe suggests: “If, after a thorough review of digital platforms’ performance during the pandemic, the EU finds insufficient response by a majority of them, it should drop the voluntary character of the ‘Code of Practice’ and move to a binding regulation.”
In December 2018, the Action Plan against Disinformation was adopted by the Commission and the High Representative for Foreign Affairs and Security Policy. As well as focusing on disinformation within the Union, it looks beyond the EU’s borders to disinformation originating from the EU’s strategic rivals (particularly Russia) and other malicious actors.
The Action Plan provides for the EU to improve its capabilities to detect, address and expose disinformation; strengthen joint responses; mobilise the private sector; and raise public awareness and improve societal resilience. Measures include:
- The establishment of a Rapid Alert System on disinformation to facilitate information sharing between platforms, authorities and international partners such as the G7 and NATO;
- Strengthening the European External Action Service (EEAS) strategic communication task forces in the neighbourhood countries with additional expertise, staff and budgetary resources, with a view to improving the detection of and response to disinformation. Russian disinformation in the Eastern Neighbourhood is a major focus of this work;
- An assessment of the Code of Practice on Disinformation.
The Covid-19 pandemic resulted in further waves of disinformation, particularly from states like Russia and China. New coordination efforts were launched to build on the 2018 Action Plan, such as strengthening the Rapid Alert System. This has proven to be effective in dispelling myths about various remedies and hoaxes about the origins of the virus, according to the European Commission.
In December 2020, the Commission published the Digital Services Act (DSA) package. It includes proposals for new rules for online platforms on content moderation, advertising and algorithms, based on increased accountability and transparency.
Also launched in December 2020, the Commission’s European Democracy Action Plan places countering disinformation at the centre of future efforts to promote free and fair elections and to strengthen media freedom and pluralism.
Towards a broader response
Trying to combat disinformation through policy and legislation is only part of the solution. It is no coincidence that the meteoric rise of online disinformation has accompanied the decimation of traditional media business models, leaving less quality information available to counter both misinformation and disinformation.
Good journalism and a healthy media industry are expensive and the advertisers who once paid the bill have largely migrated to the online platforms. Some media outlets have developed sustainable, alternative models through subscriptions. However, for most of the news industry the problem of how to fund journalism remains.
Therefore, as well as policy and legislative responses, supporting a healthy media system based on quality journalism must also be a priority. Direct public funding for journalism is a fraught subject as it can lead to questions of state or public control. However, mechanisms of indirect support have been identified as a key part of the response to disinformation. This challenge is made more difficult by the fact that media freedom and plurality are under threat in some Member States.
In Ireland, a Future of Media Commission is examining issues related to sustainable funding sources, changes in audience behaviour and technology and is due to report in summer 2021.
Promoting and supporting media literacy relevant to the online world has also been identified as a key action to counter misinformation and disinformation. Two thirds of EU Member States “have underdeveloped media literacy policies, or no media literacy policy at all.” Supporting citizens with the knowledge and skills to critically assess the information they encounter is important, particularly for older age groups. Research has suggested that older age groups are more likely to share misinformation and disinformation than younger users.
NGOs and other civil society groups often have remits to counter disinformation and many have a strong interest in carrying out this work. For example, an important part of European Movement Ireland’s remit is to be a trusted and factual source of information across the EU to facilitate public debate and inform public policy. Directly supporting this sector is less contentious than direct media support and is another important pillar in the response.
Finally, the role of the private sector should not be overlooked. Disinformation also harms private companies, which can themselves be targets. A recent development in the United States has been the willingness of companies to use litigation to counter claims made about them both online and on broadcast media. For example, Dominion Voting Systems and Smartmatic USA have separately launched legal proceedings against former President Donald Trump’s personal attorneys and Fox News for spreading false information about their products.
Conclusion
The challenge posed by misinformation and disinformation is fundamental, sophisticated and well resourced. Therefore, there may not be simple solutions.
The EU has made progress on these issues over the last number of years. However, as highlighted above, the difficulty of defining the scope of the problem points to the difficulty of arriving at solutions.
The response to disinformation must be broad and take place on many levels: “It shouldn’t be taken for granted either that protection of pluralism will necessarily follow from the adoption of related digital policies…”
Combatting disinformation requires EU leadership but also collective action. That involves Member States, online platforms, media organisations, NGOs, civil society and the private sector.
It also involves effective public institutions that engender trust, public discourse that promotes decency and tolerance, and transparent political, media and civil society ecosystems that support democracy, in all parts of the European Union.