AI in broadcasting: Welcome to media 4.0
Posted on Apr 9, 2019 by FEED Staff
AI, cloud computing and big data are converging to create a new age of hyperpersonalised content
Words by Paul Shen, CEO of TVU Networks
What’s the story of AI in broadcasting?
The fourth iteration of ‘video’ will see production becoming story-centric, rather than programme-centric, with content automatically produced, targeted and delivered to viewers in a more efficient manner.
Most businesses are undergoing rapid alterations of their business models as AI, big data, Internet of Things and cloud computing combine for a potential step-change in efficiency. Industry 4.0 has been used as a descriptor for the way these technologies come together to produce the ‘smart factory’ which gradually learns to make its own decisions and perform autonomously. This evolution comes at the end of several earlier waves of technological innovation: steam-powered mechanisation, electric-powered mass production, and most recently, computerised automation.
At TVU, we believe a similar change in the way that video is produced, distributed and consumed is about to happen. We call this Media 4.0, and it will drive a huge increase in efficiency through the development of the ‘smart studio’.
The New Wave
The first iteration of ‘video’ was the movies, with each version distributed physically and projected onto a screen in a purpose-built building. Media 2.0 was born with broadcasting, which increased access through personally owned devices, namely TV and radio. The third wave saw increased digitally driven versioning and greater access in terms of location, time and device, through multimedia and TV Everywhere. But throughout these developments, production has not fundamentally changed. Yes, we’ve moved from black and white to Ultra HD, but there are still only a (relatively) small number of versions for almost all content. The next wave – Media 4.0 – will see this change. The combination of AI, big data, cloud computing, IoT and content metadata is ready now to work its magic in the media space.
Instead of looking at a TV programme as a single piece of edited, linear video, we believe it’s transformative to think of a show as individual items that have been assembled into a personalised show, based upon the tastes of each audience segment.
Media 4.0 enables video production to move from a programme-centric to a story-centric process, where content is automatically produced, targeted and distributed to a viewer.
This approach is now being enabled by a number of parallel developments and technologies.
Bumper Crop for Broadcasters
Consider how a major broadcaster shoots and distributes live content. Most broadcasters throw away up to 99% of their raw material – content that could be shared with consumers and monetised. Right now, technology can enable the production of video tailored to individuals on specific social media platforms, on cell phones, on streamed channels and TV. With Media 4.0, much more of this footage will be used from day one, and smart search processes will ensure that it continues to be used days, weeks and years after a televised event has taken place.
For example, a content owner could search for a specific phrase such as ‘Donald Trump talking about new environmental legislation’ and, rather than scrubbing through an entire speech to find the information, be taken directly to the frame in which Trump begins to talk about this topic. This saves producers copious amounts of time looking for relevant content and increases their productivity. Additionally, it opens up a future in which this segment could be inserted automatically into a programme strand.
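The kind of phrase search described above typically relies on a speech-to-text transcript with word-level timestamps. Here is a minimal, hypothetical sketch of that idea: a phrase is matched against timestamped words, and the match time is converted into a frame number. The transcript data, function name and 25 fps frame rate are all illustrative assumptions, not a real product API.

```python
FRAME_RATE = 25  # assumed frames per second of the source footage

# Illustrative ASR output: (word, start_time_in_seconds) pairs
transcript = [
    ("today", 0.0), ("we", 0.4), ("discuss", 0.7),
    ("new", 12.2), ("environmental", 12.5), ("legislation", 13.1),
]

def find_phrase_frame(transcript, phrase, frame_rate=FRAME_RATE):
    """Return the frame number where the phrase begins, or None if absent."""
    words = phrase.lower().split()
    tokens = [w.lower() for w, _ in transcript]
    for i in range(len(tokens) - len(words) + 1):
        if tokens[i:i + len(words)] == words:
            start_time = transcript[i][1]
            # Convert the match's start time to a frame index
            return round(start_time * frame_rate)
    return None

frame = find_phrase_frame(transcript, "new environmental legislation")
print(frame)  # 305 (12.2 s into the clip at 25 fps)
```

In practice the matching would be fuzzier (stemming, speaker identification, confidence scores), but the core mechanism – mapping a text hit back to a timecode and frame – is the same.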
A Cure for Versionitis?
Artificial intelligence engines with object and voice recognition already exist and can be used to automate the process of tailoring clips to the appropriate distribution, whether that’s streaming to cell phones, tablets, television or social media channels. The presentation will differ, depending on what the content is, who the audience is and where and how the content is being viewed.
Right now, producers handcraft the multiple versions of a TV show. In the ‘smart studio’ AI engines will automate assembly of the material and deliver it in the most effective way to the target audience.
Using today’s technology and an entirely cloud-based model complete with voice and object recognition, it is possible for video clips to be located and indexed down to the exact frame and then shared instantly. This automation provides the ability to share with different audiences as well as sell assets or share with business partners and customers.
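One way to picture frame-level indexing is as an inverted index from recognised labels (objects, speakers, events detected by an AI tagging pass) to frame ranges. The sketch below is a hypothetical illustration of that structure; the labels and frame numbers are invented for the example.

```python
from collections import defaultdict

def build_index(detections):
    """Build an inverted index: label -> list of (start_frame, end_frame).

    detections: list of (label, start_frame, end_frame) tuples, as an
    object/voice-recognition pass might emit.
    """
    index = defaultdict(list)
    for label, start, end in detections:
        index[label].append((start, end))
    return index

# Illustrative detections from a tagged sports feed
detections = [
    ("goal", 1200, 1450),
    ("crowd", 0, 5000),
    ("goal", 3100, 3320),
]

index = build_index(detections)
print(index["goal"])  # [(1200, 1450), (3100, 3320)]
```

Once footage is indexed this way, "share the second goal" becomes a lookup rather than a manual logging job, which is what makes instant sharing and asset sales practical.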
Creating a story-centric workflow, which removes the barriers erected between video production departments and the audience, will be key for the future of production. It will open the floodgates for how media is produced, distributed and consumed, simplifying and automating distribution over social media, digital and broadcast, creating one port that can feed all channels.
Just for You
Initially, for many broadcasters and media companies, this kind of automation will increase efficiency through simplifying the editing and distribution of different versions of content for different devices. Here we are talking about automating the production of 10 to 100 versions. And the great news is that this technology is ready now!
The next wave of automation will see personalised editing and distribution at increasingly granular audience levels. Perhaps a broadcaster will automate the editing and distribution of sports highlights depending upon the viewer’s favourite teams and sports. This might mean 100 to 1,000 different versions – with the ultimate goal of automating the production and distribution of perhaps 1,000,000 versions, enabling broadcasters to automatically produce and distribute programming aimed just at you!
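At its simplest, personalised assembly like this is a matter of filtering tagged clips against a viewer profile within a time budget. The following is a minimal sketch under assumed metadata – clip records, profile shape and the `personalised_rundown` helper are all hypothetical:

```python
# Illustrative tagged clips; durations in seconds
clips = [
    {"id": "c1", "team": "Arsenal", "sport": "football", "duration": 30},
    {"id": "c2", "team": "Yankees", "sport": "baseball", "duration": 45},
    {"id": "c3", "team": "Arsenal", "sport": "football", "duration": 20},
]

def personalised_rundown(clips, profile, max_duration=60):
    """Pick clips matching the viewer's favourite teams, within a time budget."""
    rundown, total = [], 0
    for clip in clips:
        if clip["team"] in profile["teams"] and total + clip["duration"] <= max_duration:
            rundown.append(clip["id"])
            total += clip["duration"]
    return rundown

viewer = {"teams": {"Arsenal"}}
print(personalised_rundown(clips, viewer))  # ['c1', 'c3']
```

A production system would rank clips by relevance and recency rather than taking them in order, but the scaling story is visible even here: the same filter run against a million profiles yields a million distinct rundowns with no extra editing labour.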
This article originally appeared in the June 2018 issue of FEED magazine.