Fighting disinformation: Taming the trolls
Posted on Aug 10, 2020 by Neal Romanek
From coronavirus conspiracies to fascistic rhetoric, disinformation is going global with the help of social platforms. How much of it can democracy withstand?
For years now, the media industry has looked with envy at the social media giants, but what once seemed a licence to print money has become a toxic battleground, exploited for disinformation and bullying on an industrial scale.
Investigations into the Leave.EU campaign and its partnership with Cambridge Analytica have shown that Facebook was a critical tool in swaying voters in the run-up to the UK's referendum on leaving the European Union. And evidence from the Oxford Internet Institute suggests that one third of all Twitter traffic just prior to the referendum was actually produced by bots, not humans.
Disinformation disseminated on Facebook-owned WhatsApp was implicated in disproportionately boosting Jair Bolsonaro prior to his election in Brazil. Messages that included doctored photos and fake ‘fact checks’ were widely spread in a country where mobile devices are the main portal for information. Brazil’s highest court has since established an advisory board on internet and elections to investigate disinformation.
The coronavirus pandemic has proved fertile ground for the planting of rumours and the spread of disinformation. The European Commission has devoted a page on its website to addressing widely circulating disinformation about the virus. The page alerts citizens to common coronavirus falsehoods, including that ingesting disinfectants can treat the virus, that the pandemic is a deliberate act of biological warfare, and that Covid-19 has been caused by 5G technologies.
On the face of it, a society interconnected by instant communications technology should be in a stronger position to share information to solve common problems, but – at least in their present configuration – the opposite seems to be true. Our ‘social’ media seems to excel at undermining society.
We were very, very disappointed by the unwillingness of the two main political parties to engage with this
Resurrecting trust
Last summer, the UK’s House of Lords formed a Democracy and Digital Technologies select committee, chaired by Lord David Puttnam, legendary British film producer and a dogged campaigner for a media space that benefits citizens. The committee has just published its report, Digital Technology and the Resurrection of Trust, which makes no bones about framing online disinformation as an emergency. The report advises the UK government to take action “without delay” to ensure tech giants are held responsible for harm done by falsehoods spread on their platforms.
“This is a virus that affects all of us in the UK – a pandemic of ‘misinformation’ and ‘disinformation’,” says Puttnam in the UK report’s foreword. “If allowed to flourish, these counterfeit truths will result in the collapse of public trust, and without trust, democracy as we know it will simply decline into irrelevance. In the digital world, our belief in what we see, hear and read is being distorted to the point at which we no longer know who or what to trust. The prospects for building a harmonious and sustainable society on that basis are, to all intents and purposes, non-existent.”
This problem is not new. In 2016, the UK referendum and US presidential election highlighted how vulnerable the online information space is to manipulation, but Lord Puttnam hasn’t seen action commensurate with the gravity of the problem.
“We’ve been remarkably toothless,” Puttnam, accompanied by committee member and Paralympic gold medallist Lord Chris Holmes, tells FEED and other publications in a briefing. “I’d even go so far as to say that governments have been nervous about tackling this issue for 20 years. When I first talked to Jeremy Wright [former UK secretary of state for digital, culture, media and sport] almost three years ago, he was passionate about it, but somehow that passion has evaporated.”
Both the UK Conservative and Labour parties declined to give in-person evidence to the select committee. The Liberal Democrats did send a representative. Given that the same tools used for disinformation are also used by political campaigns, often in a race to the bottom, this shouldn’t be surprising.
“We were very, very disappointed by the unwillingness of the two main political parties to engage with this. They didn’t turn up for their oral evidence, and some of the evidence they gave us in writing proved to be questionable. And this points to parties wanting an edge – i.e. they have to skate along what’s legal, and sometimes cross it, in order to be effective,” says Puttnam.
The committee’s report makes 45 recommendations, chief among them that the UK government should introduce online harms legislation within a year of the report’s publication.
“Unfortunately,” says Puttnam, “the evidence we took indicated that it may not be until 2022 or 2023. That is crazy, given the pace with which this industry moves.”
It’s high time that we put our foot down on sock puppets
Move fast and fix things
The report also recommends that UK communications regulator Ofcom be given greater powers to police and penalise digital companies that aren’t acting in the best interests of citizens.
“It is not just up to advertisers to ensure the technology giants deal with the pandemic of misinformation on their platforms,” says Puttnam. “The government also has an important role to play and should not duck its responsibility.”
Lord Holmes, who specialises in new technologies, as well as diversity and inclusion, called for greater transparency from the companies that have developed such an intimate relationship with modern society. He and Puttnam also urged the government to hold platforms accountable for the amplification of messaging through fake accounts, bots and AI.
He explains: “The reality is that the algorithms have to be auditable. But even before we talk about the auditing of algorithms, it’s high time that we put our foot down on sock puppets.”
“One of the most interesting things in our report,” says Puttnam, “is how information gets amplified. So it’s not just a question of two or three people with nutty views. When that news gets recommended, and the whole thing takes off in the search, it can create damage. We’re not coming down on the ability of an individual to have free speech and make their views known. But that free speech gets amplified in a very distorted and unregulated way. That’s when we all run into terrible trouble.”
“The great claim of the companies is, ‘We don’t even really know what the algorithm’s up to,’” adds Holmes. “Well, you absolutely do, to the extent of how it’s constructed and what its mission is. And its mission has been constructed in a way to drive extreme content because that content drives dwell time, and that dwell time drives monetisable views.”
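The incentive Holmes describes can be made concrete with a toy model. The sketch below is illustrative only – all the names in it (Post, expected_revenue, rank_feed) are hypothetical, and it quotes no real platform's code or API. It simply shows how a ranker that optimises for predicted dwell time will, as a side effect, surface whatever content holds attention longest:

```python
# Illustrative only: a toy feed ranker showing the incentive Holmes describes.
# All names here (Post, expected_revenue, rank_feed) are hypothetical --
# no real platform's code or API is being quoted.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_dwell_seconds: float      # model's estimate of how long users linger
    ad_impressions_per_minute: float    # monetisable views served while they do

def expected_revenue(post: Post) -> float:
    # Dwell time converts directly into ad inventory: the longer a post
    # holds attention, the more monetisable views it generates.
    return (post.predicted_dwell_seconds / 60) * post.ad_impressions_per_minute

def rank_feed(posts: list[Post]) -> list[Post]:
    # Optimising purely for expected revenue means the content that
    # maximises dwell time floats to the top of every feed -- regardless
    # of whether it is true, civil or extreme.
    return sorted(posts, key=expected_revenue, reverse=True)
```

Nothing in such an objective distinguishes accurate content from inflammatory content; if the extreme post holds attention longer, it ranks higher.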
The report also makes recommendations on electoral reform, including clearly marking online political ads and requiring greater transparency about who is bankrolling them. Media companies should also provide easily accessible online databases of political advertisers. Mozilla provided the committee with guidelines and a suggested API for such an open advertising archive.
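To make the transparency recommendation tangible, the sketch below shows one shape such an archive record might take. This is not Mozilla's actual API specification, which the article does not detail – just a hypothetical illustration of the fields the report's recommendations imply (who placed the ad, who bankrolled it, and how it was targeted):

```python
# Illustrative only: a hypothetical record for an open political-ad archive.
# This is NOT Mozilla's suggested API, merely a sketch of the transparency
# fields implied by the report's recommendations.
from dataclasses import dataclass, field

@dataclass
class PoliticalAdRecord:
    ad_id: str
    advertiser_name: str                 # who placed the ad
    funding_entity: str                  # who bankrolled it
    first_shown: str                     # ISO 8601 date, e.g. "2020-06-01"
    last_shown: str
    spend_range_gbp: tuple[int, int]     # platforms often report spend in bands
    impressions_range: tuple[int, int]
    targeting_criteria: dict = field(default_factory=dict)  # e.g. age, region
```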
Public interest
Implementing better public education was another important recommendation. The report cited the Open Society Institute Media Literacy Index, which ranked the UK 12th out of 35 countries across wider Europe at promoting societal resilience to disinformation.
The committee’s vision for digital literacy goes beyond mere technological skills: it includes teaching people to distinguish fact from fiction (including misinformation), to understand how digital platforms work, and to influence decision makers in a digital context. Estonia and Finland were cited as particular successes in providing citizens with digital literacy skills. It was noted that their proximity to Russia – a known wellspring of disinformation in Europe – was one incentive for keeping their societies well informed.
The report included research from Doteveryone showing that 50% of people surveyed accepted that being online meant someone would try to cheat or harm them in some way: “They described a sense of powerlessness and resignation in relation to services online, with significant minorities saying that it doesn’t matter whether they trust organisations with their data because they have to use them.”
Holmes notes that the very business model of the platforms might not be congruent with healthy democratic discourse. “One of the core problems is that this isn’t a question of freedom of speech. It’s a question of freedom of reach. The difficulty is that if more radical content has a greater dwell time and can drive more revenues off it, then the algorithm gets trained to hook on to that and proliferate that content. That’s what makes this different to fake news and extreme views of the past. There’s nothing new in that. The difference is the pace and proliferation of those views.”
Puttnam also points out the need for digital platforms to support the journalism that fuels them. “There is no question that the platforms feed off of the traditional news organisations. There’s a real need for some form of reciprocal relationship, where journalism gets underpinned and supported by the digital platform. That seems to me axiomatic, and I think those conversations are taking place.
“It’s unlikely that society itself can ever build ‘herd immunity’ against lies and manipulation,” he concludes.
Read the full UK House of Lords report, Digital Technology and the Resurrection of Trust, on the UK Parliament website.
This article first featured in the August 2020 issue of FEED magazine.