Social Media Survival Kit

Posted on May 30, 2023 by FEED Staff

In the always-online era, it’s important to understand digital risks – and how to protect ourselves against them

Words by Katie Kasperson

Social media dominates our attention, with many people spending hours online each day. It’s been compared to the Wild West – a largely unregulated place where (almost) anything goes. And the market is constantly evolving, with new platforms – like TikTok – spawning and old ones – like Twitter – changing administrative hands. In such a dynamic, untamed space, how can companies and policymakers protect users? And should those bodies fail, how can users protect themselves?

Social media is a means of connecting with like-minded people without necessarily knowing their names – liberating for some, dangerous for others. “A double-edged sword”, says Mie Oehlenschläger, co-founder of Tech & Childhood and a media safety expert for Denmark’s National Council on Ethics. “We still have this belief from the mid-nineties that if no one knows who you are, you can be anyone. And that’s in a way a very tempting thought.”

Catfishing, or assuming a fake identity, is one way to ‘be anyone’. It’s a tactic often used to target unsuspecting individuals, pressuring them to disclose personal information or drawing them into a fictitious relationship. Luckily, users can take steps to avoid being catfished, like disabling geolocation and avoiding check-ins, thinking twice about what they share (and with whom) and paying extra attention to data privacy – the latter being one of today’s biggest issues, according to Mie.

“Data collection issues are underpinning the whole ecosystem,” she says. “Every time we’re online today, our data is being collected.”

A government weapon

Take, for instance, the Black Lives Matter protests of summer 2020. Thousands took to the streets – and their feeds – to fight racial injustice and police corruption. In cities like Portland, Oregon, a pattern emerged: people were being arrested on arrival, on baseless charges. It was later revealed that the US government had spied on ‘people who posed no threat to homeland security’. Information was drawn from ‘publicly available social media’ and names of ‘social media associates’.

The key phrase to note is ‘publicly available’. Users have the option to go ‘private’ on platforms like Twitter and Instagram. Meta (owner of Facebook, Instagram and WhatsApp) now offers a ‘privacy checkup’, educating individuals on how to manage their data. Besides platforms’ built-in settings, VPNs, browser extensions and two-factor authentication are useful for protecting digital privacy.
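To illustrate how one of those measures – two-factor authentication – actually works behind the scenes (a detail the article doesn’t go into), here is a minimal sketch of a time-based one-time password (TOTP) generator of the kind used by authenticator apps, following RFC 6238. The secret value shown is a common test placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate the current time-based one-time password for a shared secret."""
    # Decode the base32-encoded shared secret, as displayed by most authenticator apps
    key = base64.b32decode(secret_b32.upper())
    # Number of completed time steps since the Unix epoch
    counter = int(time.time()) // period
    # HMAC-SHA1 over the big-endian counter, per RFC 6238 / RFC 4226
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick four bytes at an offset taken from the last nibble
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# Placeholder secret for demonstration only
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the user’s device, a stolen password alone is not enough to take over the account.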

While privacy is good, Mie says, “the encrypted digital environment as a radical answer to surveillance capitalism is not always good.” There’s generally a trade-off between privacy and anonymity; offenders can hide ‘behind closed doors’. In Denmark, certain individuals are turning to private groups on the social media platform Telegram, where they’ve shared hundreds of images and videos of Danish young men and women in sexual, violent and degrading situations. They’ve also exposed personally identifiable information about their victims – otherwise known as doxxing.

Some privacy issues have been around for over a decade, impacting users on an individual level. Others are less obvious, more calculated and maybe more damaging. Facebook famously laid its users’ data bare, allowing Cambridge Analytica to push political propaganda – and Mark Zuckerberg stayed silent for five days after headlines broke. Facebook then promised privacy in the form of ‘clear history’, encryption and audits of third-party apps. But ‘the overwhelming consensus from privacy experts’ was that this plan had ‘little to do with protecting privacy and everything to do with protecting market share’, according to Julia Carrie Wong in The Guardian.

Regulations can’t keep up

Technology and its uses are rapidly evolving under the demands of a capitalist market. These advances, combined with an ever-changing geopolitical climate, mean that regulations like the GDPR and CCPA – which ‘haven’t really been implemented yet in a satisfying way’, says Mie – are often reactive rather than proactive. “Legislation will always be behind tech developments,” Mie says. “But we should be better at asking the devs and the industry for more analysis and risk management before putting things on the market.”

Take ChatGPT, a text-based AI chatbot launched in November 2022, which does its best to respond to any question asked of it. “It was just released – in schools, to everyone. I mean, is that a good idea?” Mie wonders. It’s already being used to write academic essays and job applications, acting as an essentially undetectable aid to plagiarism. Its only giveaway: “The answers are confident and fluently written, if sometimes spectacularly wrong,” writes Ian Sample for The Guardian.

Right and wrong often get conflated on social media. To lessen the impact of misinformation, platforms can implement content filtering and flagging, as well as account removal. Companies have now begun flagging content related to elections, Covid-19 and other genuinely influential topics as potentially inaccurate – but often only after much of the damage has been done. Donald Trump, who used Twitter to falsely claim that he’d won the 2020 presidential election, was banned from the platform only after inciting an insurrection at the United States Capitol. Gone are the days of pretending that digital actions don’t have tangible consequences.

Sinister algorithms 

Thanks to intelligent algorithms, users can get sucked into metaphorical rabbit holes of misleading or even extremist content. This, in turn, can spark harmful ideologies which lead to real-world violence. Several perpetrators of deadly shootings have been linked to incel (involuntary celibate) culture, which finds its voice in the web’s darkest corners.

“Sometimes I download TikTok and Snapchat just to check the algorithm, and I say I’m a 13-year-old boy,” Mie says. “It feeds me a lot of highly sexualised, even perverse content.” According to Mie, pornography is another major issue pertaining to online safety, especially when it comes to minors. Revenge porn and deepfake scandals abound, raising concerns about legality and consent.

Moira Donegan in The Guardian calls deepfake pornography an ‘assault on consent’. “The non-consent is the point; the humiliation is the point”, she writes. And she raises important questions: “How will viewers know the difference between fact and AI-generated fiction? How can non-consensual material be removed when the internet moves so much faster than regulation?” Targets of revenge porn have almost no means of protection, but this is changing.

Take It Down, operated by the US National Center for Missing & Exploited Children and funded by Meta, is a tool that helps minors get explicit images of themselves removed from participating platforms. While it’s far from perfect, the tool protects users’ privacy by requesting a digital fingerprint (‘hash’) rather than the image or video in question. As of February, Facebook and Instagram – along with Yubo, OnlyFans and Pornhub – have agreed to participate. Take It Down claims to work with deepfakes and other AI-generated content, as well as original images.
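To make the fingerprinting idea concrete, here is a minimal sketch of hashing a file locally, using a plain SHA-256 digest rather than whatever matching technology Take It Down actually uses; the filename is a placeholder. The point is that only the short fingerprint would leave the device, never the image itself.

```python
import hashlib


def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a local image or video file.

    Only this hexadecimal fingerprint is shared; the file stays on the device.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large videos don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder path for demonstration only
print(fingerprint("example.jpg"))
```

A platform that holds the same fingerprint can then recognise and block re-uploads of the matching file without ever receiving the original content.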

When does the accountability begin?

Take It Down specifically seeks to help minors – a group that’s more susceptible to unsafe environments. “The digital world was not designed with child safety in mind,” explains Mie. But as she points out, “how can you protect a child if you don’t know it’s a child?” Age assurance is one way, but this is another double-edged sword; some forms of assurance require legal documents or biometrics. While academics and policymakers strive for solutions, child protection is undoubtedly a work in progress.

No matter the niche, these issues raise the question: should platforms be held responsible for users’ actions?

In an ideal world, the answer is a resounding yes. But in reality, the blame often shifts to the victims of online safety issues rather than the enablers. “Tech companies can do more, but it requires the governments to be brave,” says Mie. While nations have started to push back against big tech, efforts are slow. Until public and private sectors show eagerness to protect people, it’s down to users to do it themselves. 

Originally published in the Summer 2023 issue of FEED.
