Election interference in the digital age – building resilience to cyber-enabled threats in the EU


By Sir Julian King & Ann Mettler | Brussels

At the onset of the digital revolution, there was significant hope – and indeed an expectation – that digital technologies would be a boon to democracy, freedom and societal engagement.
Yet today – although this cannot necessarily be attributed to digital technologies – we note with concern and disquiet that the world has experienced twelve consecutive years of decline in democracy and freedom. At the same time, we are witnessing the rise of what might be dubbed ‘digital authoritarianism’.

At stake is nothing less than people’s trust in our institutions – without which our democracies cannot function.

Against this backdrop, it is time to stress-test our assumptions, as well as the emerging technologies that might be misused to undermine elections and democracies – whether Artificial Intelligence, deepfakes or cyber mercenaries. Given the confluence of potential challenges, we must find the courage to take an honest and unsentimental look at the state of play of election interference driven by cyber-enabled threats.
In May 2019, more than 300 million voters will be invited to the ballot boxes across 27 nations, and, in doing so, participate in one of the world’s largest democratic exercises.
Free and open elections are the foundation of our democratic societies.
They make Europe what it is – a place where you can speak your mind without fear of being arrested or prosecuted. A place where voters trust that election results reflect open and transparent public debate.
Protecting the integrity of our elections is therefore an absolute priority – for the European Union, for the Member States, and for all European citizens. But the threat to them has grown in recent years, which have been marked by a series of attempts to manipulate electoral processes in at least 18 countries, including in the EU.
The threat can be split into two vectors: attacks that target systems and data to interfere with the electoral process or voting technology, and threats that manipulate voting behaviour.
The first approach is relatively crude, yet even the suggestion that it has happened – or could happen – is corrosive to public trust and confidence. The second can be broken down into three categories: targeted hacks and leaks to change public opinion; fake news to influence the results; and psychometrically targeted messaging based on mined user data – as in the Cambridge Analytica case.
Our work through the Security Union is designed to tackle both of these threats. The European Commission has been working for some time now to create tangible solutions for tackling disinformation, political campaigning, and election integrity in the digital age.
Together with Member States and other stakeholders, we have delivered:
– The Communication on Tackling Online Disinformation (April 2018);
– A Compendium on Cybersecurity of Election Technology, with Member States in the lead (July 2018);
– The Communication on Securing Free and Fair European Elections (September 2018);
– A High-Level Conference and Member State Workshop on Election Interference, as well as a new Code of Practice for industry and civil society (October 2018).

The most recent step was the Action Plan against Disinformation, which was published on 5 December. It responds to the calls of the European Council in June and October 2018 to develop a coordinated response to the challenges in this field, especially in view of the forthcoming European elections, and focuses on how to deal with disinformation both within the EU and in its neighbourhood.
The Action Plan is accompanied by a progress report on the April Communication. This report sets out the progress achieved, notably regarding the Code of Practice, in fostering a secure, trustworthy and accountable online ecosystem, with appropriate awareness and media literacy as well as support for independent media and quality journalism.
Our adversaries certainly understand what is at stake, which is precisely why they are using digital tools to disrupt and sow doubt. This is proving not only much more potent than many traditional forms of attack, but also significantly cheaper and more difficult to prove – and ultimately prosecute.

What more can be done to strengthen our resilience?

Together, the Action Plan and the progress report are critical further steps towards robust and vibrant democracies for the future. But more steps are needed, and the measures we need to take can be roughly categorised into the following: people, protection, pockets, and platforms.

First, we must ensure that people – European citizens – understand the kind of threat we are facing, in order to bolster the resilience of our societies against interference, and we must support innovative approaches by start-ups, NGOs and volunteers wanting to help protect democracy at this critical moment in time.

In May last year, two Dutch fourteen-year-olds spread a fake story about an upcoming heat wave that attracted 800,000 unique visitors in just one week. It was a school project, carried out in class with their teacher’s encouragement and with the help of a civil society organisation trying to raise awareness of the impact of disinformation. It shows that we need to accept that disinformation is an easy and powerful instrument. It exploits our human weaknesses and is successful at dividing societies. It is the first choice of weapon for demagogues and authoritarian regimes and can wield great power in mobilising the public, sometimes more so than journalism or politics.

The lesson to be drawn here is that civil societies and governments need to step up their engagement with the public ahead of the elections, to ensure we have the appropriate level of media literacy, digital skills and culture to cope with these issues. But they cannot do it alone.

Second, we need protection of the critical institutions and processes that underpin our electoral systems – and which deserve to be classified as critical infrastructure. As tools and mechanisms underpinning Europe’s democratic systems and everyday life, political parties, election systems and infrastructure providers – and potentially also media groups and services – should be included by Member States within the concept of ‘essential services’ covered by the Directive on Security of Network and Information Systems (‘NIS’ Directive). One example is the German decision to classify all election-related infrastructure as critical, with adequate response protocols, for two weeks before and two weeks after an election. Furthermore, electoral processes and components relevant to elections of the European Parliament should be qualified as European critical infrastructure and as such covered by any existing or future EU legislation.


Furthermore, relevant national authorities should implement, as a matter of urgency, a risk-assessment-based approach to identify vulnerabilities to cyber threats in their electoral processes and components, with a view to mitigating identified gaps and allocating appropriate resources.

Third, we need to delve into our pockets to invest in the communities and means we need, while also minding how funds are spent on online campaigns and making that spending more transparent.

Fact-checkers and journalism need money to thrive. We have to be honest about this. And so do initiatives which seek to harness technology for our common good. Democracies can only function if their citizens have the information they need to participate in civic affairs. Purveyors of false information know this, which is why they target the citizens of the world’s democracies. False information can spread quickly, crowding out reliable information, if citizens have no help in determining which is which.

Over the past two decades, global spending on newspaper print ads shrank to less than 10 percent of market share, while spending on digital ads rose to 33 percent – forcing many publishers to go digital, seek alternative sources of funding and, in some cases, rethink the types of content they publish. The EU is already supporting a wide range of journalistic efforts, but more needs to be done to renew a business model that used to represent a fundamental pillar of democracy – and still does, albeit under increasing pressure. This will be a long-term effort requiring close attention in the coming years, in which governments and civil society can do their part to ensure a healthy public debate and support journalism.

At the same time, we need to ensure that the money going into political campaigns is sufficiently transparent. During campaigns, regulators and election bodies are now struggling to apply the existing tests to social media content or foreign material. This is a huge challenge, but the principles do not change – indeed, they are well expressed in the Venice Commission’s Code of Good Practice in Electoral Matters from 2002.

They include equality of opportunity for parties and candidates, including a principle of proportionality, which applies in particular to ‘radio and television air-time’ and stipulates that ‘[i]n conformity with freedom of expression, legal provision should be made to ensure that there is a minimum access to privately owned audio-visual media, with regard to the election campaign and to advertising, for all participants in elections’. They also state that campaign funding must be transparent, and that equality of opportunity can in some cases lead to a limitation on political party spending, especially on advertising.

In the new digital world, manipulation of social media during an election campaign can undermine that equality of opportunity, and so these principles must be taken to heart, and properly embedded into our growing digital society.

Fourth, and finally, we need to keep platforms clearly involved and hold them accountable.

We have the first iteration of the Code of Practice agreed by platforms – it is a good start. But to be effective, it needs to go much further, much faster. We need to make it easier for users to see the provenance of content, allowing them to assess its trustworthiness, while also reducing the visibility of disinformation.

Nor should we be afraid to consider requesting that platforms better know their customers at a time when foreign or domestic actors so actively polarise our societies under the shelter of anonymity or fake accounts. Would we still see similar levels of hatred, bullying, disinformation and insults if it were otherwise? Is it not time to have an earnest debate about how to restore civility to our public discourse?

But let’s be crystal clear. We are not proposing that the platforms – or anyone else – be the judge of what is true or what is false. The issue at stake here is different. We are asking for increased transparency about the sources and provenance of information. What we ask of social media is to make political advertising traceable, transparent and accountable.

In parallel, platforms should step up their efforts against the use of bots. We are for free speech, not artificial speech.

Pre-empting future evolutions

As we look ahead to future elections, a far more dangerous tool will enter the election interference toolkit: deepfakes. These are Artificial Intelligence-based human-image synthesis techniques that combine and superimpose existing images and video onto source footage with a view to creating an alternate reality.

Deepfake technology will enable malign actors anywhere to create a video of virtually anyone, doing and saying whatever they want them to. Deepfakes are becoming cheaper to produce even as they become more convincing. This technology will soon be available not only to malign states, but also to malign individual actors.

Imagine what could happen to public trust and civic discourse around elections as this technology spreads. Put bluntly, deepfakes could transform not just election interference, but politics and geopolitics as we know it.

So what can be done to prepare ourselves for the next wave of election interference via deepfakes?

First off, we need to step up our game. We need Artificial Intelligence specialists if we are to beat other specialists with malign intentions. Artificial Intelligence can also be used to sniff out imperfections in manipulated video that are invisible to the human eye, while watermarking algorithms and metadata built into authentic video can help verify provenance. Deepfakes can thus be identified and stopped before they spread. The development of this detection technology must therefore be our top priority.
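The watermarking-and-metadata idea can be illustrated with a minimal sketch. Assuming a publisher attaches a keyed authenticity tag to the original frame data at publication time (the key, function names and signing scheme here are hypothetical illustrations, not an existing standard), any later manipulation of the frames would invalidate the tag:

```python
import hmac
import hashlib

# Hypothetical key held by the original publisher of the video.
SECRET_KEY = b"publisher-signing-key"

def sign_frames(frames):
    """Compute an authenticity tag over raw frame data at publish time."""
    digest = hmac.new(SECRET_KEY, digestmod=hashlib.sha256)
    for frame in frames:
        digest.update(frame)
    return digest.hexdigest()

def verify_frames(frames, tag):
    """Re-check the tag; any altered frame invalidates it."""
    return hmac.compare_digest(sign_frames(frames), tag)

original = [b"frame-1", b"frame-2", b"frame-3"]
tag = sign_frames(original)

# Replacing any frame breaks verification against the published tag.
tampered = [b"frame-1", b"deepfaked-frame", b"frame-3"]
print(verify_frames(original, tag))   # True
print(verify_frames(tampered, tag))   # False
```

This only establishes provenance for video signed at the source; detecting manipulation in unsigned footage is the harder machine-learning problem the text refers to.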

Second, private sector platforms need to embrace this detection challenge as a priority of shared public interest. They should turn their research-and-development firepower towards this urgent threat before it spirals out of control on their own platforms. The key will be to focus on detecting manipulation of source video, not evaluating political content.

But perhaps most importantly – and thirdly – civic education about the threat of deepfakes must be incorporated as an essential element of democratic defence against this next generation of disinformation. Governments, civil society and private industry must come together to facilitate comprehensive public education campaigns that inoculate the public – before deepfakes spread virally, dramatically impact public opinion, or change the outcome of an election.

The bottom line is that without greater public awareness of this danger, deepfake technology has the potential to cause electoral chaos and, eventually, geopolitical instability. Democratic governments need to get ahead of the threat by engaging the public to safeguard our democracies – and building citizen resilience to deepfake disinformation must become a shared public interest priority.

Looking towards the European Parliament elections in 2019, the need for action is urgent – doing nothing risks our democracies and democratic processes being undermined, both at the imminent elections and beyond.

The European Union itself rose from the decline of autocratic regimes, forging a unique shared destiny with the new liberal-democratic world order, and thus has an existential stake in preserving it. Its continued strength and vitality rely in part on a wider network of institutions and norms committed to the same fundamental values of democracy, human rights and the rule of law. The bedrock of these values is our democratic elections, and we have a vital interest in defending them. We have a lot to do if we want to save this project from falling overboard – for ourselves, and for our children.

That is why it is so important to bring together all the relevant players – from the EU, Member States, and the private sector – to ensure that we form a united front in the battle against those who wish us, and our way of life, harm.

__

SIR JULIAN KING is Commissioner for the Security Union, European Commission.

__

ANN METTLER is Head of the European Political Strategy Centre, European Commission.