Intelligent AI
We stand on the cusp of an AI revolution. But whose side will the winners be on?
Digital technologies are giving us the ability to realise what was once considered science fiction. From virtual assistants on our smartphones to automated transactions in financial services, Artificial Intelligence is creating ever-more efficient, effective and powerful solutions – sometimes to problems we didn’t know we had.
Last year, a piece by the World Economic Forum on the challenges AI creates for journalism predicted that the AI industry would grow by 50% each year until 2025. This growth will drive seismic shifts throughout the media and entertainment industry. PwC reports that AI is set to contribute around $150 billion a year to this industry, with a potential total contribution to the global economy by 2030 of a staggering $15.7 trillion.
The potential of AI to transform the way news organisations source, edit and distribute content is huge. AI can create new efficiencies, reduce inconsistencies and add value in as yet unforeseen ways, combining content and personalisation to build a more meaningful and useful news experience for audiences worldwide. But while AI is likely to empower some news organisations to do more with less, it is also likely to challenge the very concept of news itself.
What’s certain is that a future without AI is highly improbable, so it is essential that we develop a smart approach to the technology from the very beginning – starting with ethics. AI tools can be powerful amplifiers of business strategies and can put a lot of influence in the hands of a very few people. The rush to develop or adopt the latest AI technologies is accelerating across every touchpoint, from content creation to content consumption, and brings a growing responsibility for both users and vendors to consider their ethical impact.
How do we ensure – particularly in a sector like news, whose principal worth is as a public service – that the use of AI will be to the benefit of all and will stay true to core journalistic principles and obligations for the public good?
Fighting fake news
Recent research by Reuters suggests that almost three quarters of media organisations are now looking into how AI can help them create and distribute content more efficiently, through applications like speech-to-text, facial recognition and automated edit corrections. News organisations such as Bloomberg are already committing strongly to automation to cover so-called ‘commodity news’ – reports on financial markets, for example, which are essentially reports on statistics – in the hope that it will free up journalists’ time for more involved feature stories. By some accounts, 90% of all articles in the world will be written by AI before the end of the next decade.
The potential for AI to transform news and content production is clear and, over the next few years, it will take on a prominent role in deciding what content we see and read in our daily lives. But what level of power and control should be given to AI? Whilst technology that ‘thinks’ is rapidly becoming more useful, it needs to adhere to a clear set of ethical principles. This is particularly important in the fight against fake news.
Right now, AI is being actively used to operate ever-more clever bots and fake social media accounts, some of which are hard to distinguish from real people or real information outlets. Machine learning – the science of computers learning from data, identifying patterns and making decisions with minimal human intervention – has become a tool for obfuscating the truth, but also for promoting it. The hope is that machines will eventually improve their performance – and their output – over time, and become progressively more autonomous.
Before we get to that point, machine-learning algorithms must be trained and programmed by humans to improve their accuracy. This is vital: without high-quality human input, machines lack the ability to put things into context, which makes it very difficult for them to accurately identify an element in a piece of content. If news organisations left AI to run its own course without human input – that is, without context – it would be unlikely to produce much of long-term use.
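To make that dependency concrete, here is a minimal sketch – a hypothetical illustration in Python using the scikit-learn library, with entirely invented headlines and labels – of how such a classifier is trained. The model can only learn the distinctions that human labellers have already encoded in the training data:

```python
# A minimal, hypothetical sketch: training a text classifier on
# human-labelled headlines. The data below is invented; the point is
# that the notion of 'reliable' comes entirely from the human labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Central bank raises interest rates by 0.25 points",
    "Quarterly GDP figures revised upwards",
    "Miracle cure discovered, doctors hate this trick",
    "Celebrity secretly replaced by clone, insider claims",
]
labels = [1, 1, 0, 0]  # 1 = reliable, 0 = fabricated (human judgement)

# TF-IDF features plus logistic regression: the model finds word
# patterns, but it cannot supply the missing context itself.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Shock cure doctors don't want you to know"]))
```

Nothing in that pipeline understands journalism; swap the labels and it will happily learn the opposite lesson, which is why human oversight and context remain essential.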
But even if AI starts to positively support a real news organisation, there is a danger the technology and economies of scale it provides will start to change what news ends up being reported. AI-driven journalism might produce and manage huge amounts of facts and data. But will it be able to describe the suffering in a country torn apart by war? Or will it just offer numbers on a scoreboard of wounded and dead? Will it be able to select stories that help humans better handle climate change? Or will it just provide temperature and CO2 levels – even as it also provides users with oil company stock prices?
Bursting the bubble
The personalisation of content can create higher-quality experiences for consumers, as we’ve seen from streaming services such as Netflix recommending shows based on personal watching history. News organisations are no exception, and they are already using AI to meet the demands for personalisation.
For example, a service called James, developed by News UK for The Times and The Sunday Times, will learn about individual preferences and automatically personalise each edition (by format, time and frequency). Its algorithms will be programmed by humans but will improve over time through machine learning to provide whatever experience The Times’ owners find best engages readers.
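To illustrate the general idea – this is a hypothetical sketch, not News UK’s actual system – the following Python snippet shows one simple way such a service could learn a reader’s preferred delivery time from engagement feedback, using a basic explore-and-exploit loop:

```python
# A hypothetical sketch of preference learning for edition delivery.
# An epsilon-greedy bandit picks a send time, observes whether the
# reader opened the edition, and gradually favours the times that
# engage that individual reader. All values here are invented.
import random

send_times = ["07:00", "12:30", "18:00"]  # hypothetical options
opens = {t: 1.0 for t in send_times}      # optimistic starting counts
trials = {t: 2.0 for t in send_times}

def choose_time(epsilon=0.1):
    # Mostly exploit the best-performing slot, occasionally explore.
    if random.random() < epsilon:
        return random.choice(send_times)
    return max(send_times, key=lambda t: opens[t] / trials[t])

def record_feedback(slot, opened):
    trials[slot] += 1
    if opened:
        opens[slot] += 1

# Each day: pick a slot, deliver the edition, log whether it was read.
slot = choose_time()
record_feedback(slot, opened=True)
```

The same loop could, in principle, be applied to format and frequency – which is also where the ethical questions begin, since the system optimises for whatever ‘engagement’ its designers choose to measure.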
While algorithmic curation – the automated selection of what content should or shouldn’t be displayed to users and how it is presented – meets consumer demand for personalisation, it can go too far. What if consumers are only hearing and reading the news they want to hear rather than what is actually taking place?
This is the ‘filter bubble’ concern: curation designed by platforms to keep users engaged quickly leads to audiences only seeing content that reinforces their existing views – and makes opinions outside that bubble seem increasingly alien. In the absence of legislation, the onus is on media organisations to get the balance right between providing tailored content and ensuring consumers are actually being informed rather than pandered to.
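One way to picture that balance is as a weighting problem. The sketch below – invented articles and scores, not any real platform’s algorithm – ranks a feed purely by predicted engagement, then again with a small ‘diversity bonus’ for topics the reader has seen less often:

```python
# A hypothetical sketch of the personalisation/diversity trade-off.
# All article data and scores below are invented for illustration.
articles = [
    {"title": "Team wins again",        "topic": "sport",    "engagement": 0.9},
    {"title": "More sport analysis",    "topic": "sport",    "engagement": 0.8},
    {"title": "Climate summit outcome", "topic": "climate",  "engagement": 0.4},
    {"title": "Election explainer",     "topic": "politics", "engagement": 0.3},
]

def rank(items, history, diversity_weight=0.0):
    def score(a):
        seen = history.count(a["topic"])
        # Engagement pulls towards the familiar; the diversity term
        # rewards topics the reader has encountered less often.
        return a["engagement"] + diversity_weight / (1 + seen)
    return sorted(items, key=score, reverse=True)

history = ["sport", "sport", "sport"]  # a reader already inside the bubble

# Engagement only: the feed stays all-sport at the top.
print([a["title"] for a in rank(articles, history)])
# With a diversity bonus, the climate story surfaces first.
print([a["title"] for a in rank(articles, history, diversity_weight=0.8)])
```

Where that weight is set – and who sets it – is precisely the editorial judgement that cannot be delegated to the algorithm itself.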
We must ensure that the use of AI benefits everyone, and that it does so ethically and in line with journalistic principles. To do that, we have to hold media organisations to account and put ethics front and centre in AI’s deployment. Humans must take every step to ensure AI is used with the right controls in place, from unbiased training to transparent data collection. Otherwise, in the long term, the use of AI might create far more problems than it solves.
This article first appeared in the December 2019 issue of FEED magazine.