Masterclass: The dos and don’ts of an Olympic broadcast

Snowstorms, signal congestion and split-second timing: inside the technical and operational realities of delivering a live Winter Olympic Games broadcast

The Panel

  • Ophir Zardok, Head of sports strategy and business development, LiveU

  • Alex Redfern, Chief technology officer, EVS

When the stakes are global and failure is not an option, what can’t be compromised while planning a broadcast at an Olympic scale?

Alex Redfern: A defined level of redundancy. Redundancy in technology is a broad concept: it may take the form of N+1 or N+N configurations, but determining the appropriate level is essential. There are cost implications and financial constraints even for large-scale events. Simply duplicating everything is not always viable, and adding a single backup is often insufficient at scale. As complexity increases, so does the risk that the required recording or signal path is not the one protected.

Redundancy, therefore, is not a simple arithmetic exercise such as one-plus-one or N+1; it is a philosophy. It requires a mindset focused on prioritisation – identifying the most critical elements that must be made redundant across the entire chain – from acquisition to processing to contribution. At this level, failure is not an option.

Requirements differ between operational domains as well. In real-time environments, where signals pass through processing devices, the loss of even a single video frame may be unacceptable. By contrast, the recording domain often allows for additional safeguards, such as backups stored in the cloud, on cameras or in separate locations. As such, the level of redundancy must be determined by the criticality of each stage within the chain, including considerations such as critical and diverse paths.
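The prioritisation Redfern describes, protecting stages by criticality rather than duplicating everything, can be sketched in code. This is a purely illustrative model under assumed costs (real-time duplication priciest, cloud-backed recording cheapest), not any vendor's planning tool; the stage names and cost figures are invented for the example.

```python
# Illustrative sketch (assumed costs and stage names, not a real planning tool):
# greedily protect the most critical stages of the chain first,
# within a finite redundancy budget.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    criticality: str  # "real-time", "contribution" or "recording"

def redundancy_plan(stages, budget):
    """Protect stages in criticality order until the budget runs out.
    Real-time stages need full 1+1 duplication (costly); recording
    stages can use cheaper safeguards such as cloud backup."""
    cost = {"real-time": 2.0, "contribution": 1.5, "recording": 0.5}
    order = {"real-time": 0, "contribution": 1, "recording": 2}
    plan, remaining = {}, budget
    for s in sorted(stages, key=lambda s: order[s.criticality]):
        if cost[s.criticality] <= remaining:
            plan[s.name] = "protected"
            remaining -= cost[s.criticality]
        else:
            plan[s.name] = "unprotected"
    return plan
```

Run against a toy chain, the model shows the point of the passage: with a budget that cannot cover everything, the least critical recording stage is the one left unprotected.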

People must also be thought of as potential points of failure. Productions often only fully understand this through experience, when individuals become unavailable due to illness, communication failure or other unforeseen circumstances. Then, significant dependence on individuals becomes clear.

As a result, resilience is not solely a function of technology, but also the processes and people surrounding it. When failures do occur, outcomes are determined as much by human response as by the technology itself. Ensuring the right people are in the right roles is therefore a non-negotiable for successful event delivery.

Preparation is equally critical and cannot be compromised. This includes planning, testing and defining the workflow to the fullest extent possible. Every effort should be made to minimise uncertainty through thorough preparation. This extends to early engagement with rightsholders and stakeholders to define expectations, desired outcomes and the means of achieving them consistently.

What are the biggest technical hurdles when it comes to extreme weather conditions?

Ophir Zardok: Winter Games production introduces unique weather-related technical challenges that simply don’t exist at summer events. At the Winter Games 2026, snowfall, sub-zero temperatures and remote mountain locations produced a combination of physical and connectivity pressures.

The most immediate issue is cellular and wireless network reliability. In high-density crowds and on mountainous terrain, traditional bonded cellular can become congested or degraded. This is precisely why we focused on the first large-scale global deployment of LiveU IQ (LIQ) – our AI-driven predictive congestion management system – at these Games. LIQ uses real-time network analysis to optimise transmission paths dynamically, anticipating congestion before signal quality is impacted. Across nearly 12,000 live sessions and 15,000+ hours of live broadcast at the Winter Games, around 60% of supported sessions used LIQ, achieving over 36% higher average bit rates and enabling consistent 4K and HDR coverage even in remote, bandwidth-constrained mountain environments.

Logistics were also a major constraint. At distributed Games such as those in Italy, moving technical equipment between venues – even for repairs – is time-consuming.


On top of this, weather introduces power vulnerabilities, increased equipment failure rates and challenges for outdoor studio set-ups. IP-based cellular contribution is inherently more resilient, weather-proof and faster to deploy than traditional fixed infrastructure in adverse conditions.

Alex Redfern: Much of today’s infrastructure is increasingly resilient to both extreme heat and extreme cold. Video servers, for instance, are now capable of operating in harsh environments, including an ability to continue functioning reliably in desert conditions with significant sand and dust.

There remains, however, an inherent advantage in custom-designed hardware. As the industry moves towards commercial off-the-shelf (COTS) systems, it is important to recognise that these are fundamentally general-purpose computers. As a result, these systems are often less resilient to environmental extremes, including transportation stress, temperature fluctuations and non-ideal operating conditions. Dedicated hardware, by contrast, is typically designed with greater tolerance to these factors.

Extreme weather introduces challenges beyond hardware resilience. Conditions such as snow can create significant compression artefacts, affecting image quality in ways that are difficult to predict or test in advance. While adjustments to compression ratios or codec selection may mitigate these issues, they are not always straightforward to plan for, particularly as such conditions may not persist consistently throughout an event. Bright sunlight can introduce extreme contrast, often necessitating HDR for optimal results, while heavy snowfall against a clear sky can further exacerbate compression challenges. These variables can have a direct and unpredictable impact on visual output.

There are also physical and operational considerations. Extreme weather may affect the performance of kit such as drones. Heavy snow, for example, can influence operational range, flight behaviour and signal reliability. Similarly, specialised systems such as cable cameras – used, for example, along ski runs – must contend with the mechanical and material challenges posed by extreme cold. How these factors can affect performance and reliability requires careful consideration.

How do you balance innovation with reliability at a global event?

Alex Redfern: Innovation does not typically occur at the event itself. It takes place well in advance – often six, 12 or 18 months beforehand. The role of the live event is to validate innovation. It would be highly unusual to introduce something entirely new and untested on site. Broadcasters, particularly host broadcasters, require assurance that systems are proven and reliable because they cannot assume unnecessary risk in high-profile productions, especially when delivering a world feed to a global audience.

In most cases, innovation is developed and tested in controlled environments ahead of time. This includes clearly defining your expectations and ensuring systems perform as required. Successful implementation and delivery at the event is therefore the culmination of weeks, months and even years of detailed planning. The event itself serves as a showcase for these efforts.

Examples such as drone deployments at the Winter Olympics, or the increasing use of AI-driven tools for motion analysis, illustrate this point. These technologies are not introduced overnight; they are the result of sustained development. While generative AI has recently attracted increased attention following major winter sports events, many of these capabilities have been in development and use for years. Growing confidence in such tools is built on accumulated evidence and long-term application, even if their visibility increases more suddenly. In some cases, elements may still be introduced close to the event, but only after sufficient validation.

Ophir Zardok: The Games are uniquely paradoxical: they are both the highest-risk environment and the most compelling opportunity to advance technology.

The answer is to test innovation under fire – but not for the first time on the big stage. Our philosophy is to build confidence through iteration. LiveU technology has been a fixture at Summer and Winter Games for several cycles now. At the Winter Games 2026, we deployed LIQ, our AI-driven transmission intelligence, at scale for the first time globally – but not without a foundation. LIQ had been developed and refined over prior events, and the Winter Games validated it with 980+ LiveU units from 37 countries delivering 134TB of live video. Innovation should extend broadcast capability, not replace proven reliability.

A major European broadcaster deployed LiveU-equipped crews in Cortina while maintaining familiar IP workflows mirroring their production environment. The innovation is invisible to the operator – it just works.

What does resilience really look like in 2026?

Ophir Zardok: Resilience in 2026 is no longer just about having a backup. It is about having intelligence built into the system that responds to threats before they have the chance to become failures.

At the Winter Games, that resilience took multiple forms.

AI-driven network intelligence: LIQ continuously analysed network conditions across all transmission paths and dynamically rerouted to maintain quality. In environments where thousands of devices are competing for cellular bandwidth – athletes, media and spectators – traditional bonded cellular approaches are reactive. LIQ is, by contrast, predictive.

REMI as a resilience model: By centralising critical production infrastructure at home, broadcasters insulated their core production from disruptions at remote venues. When equipment at a venue fails, the production centre continues uninterrupted.

Operational resilience through IP flexibility: The flexibility of placing a LiveU unit anywhere and transmitting instantly means broadcasters can rapidly reroute coverage when planned positions become untenable due to weather, access restrictions or technical failure.

Alex Redfern: Resilience operates at different levels across the whole broadcast chain. On the contribution side, feeds are transmitted between venues, broadcast centres or through the International Broadcast Centre. These may or may not be under direct control, and may or may not utilise specific products. In this context, resilience relies heavily on avoiding single points of failure through the use of diverse transmission paths, such as a combination of cloud-based and satellite delivery.

At the server level, an N+1 approach is often adopted, with a limited number of spare channels relative to the number of inputs. Even when handling hundreds of inputs, only a small number of backup channels may be provisioned. This reflects the level of reliability expected from such systems. The likelihood of failure is considered low enough that extensive redundancy is not always required.

With distribution, resilience depends on the delivery method, whether SRT, JPEG XS or file- and non-file-based workflows. Diverse paths remain key. More broadly, resilience varies by layer: hardware resilience is well established and consistently demonstrated, particularly in video servers, while software resilience still raises questions. Cloud infrastructure brings a different model of resilience. While generally expected to be robust and scalable, failures, when they do occur, can be more absolute in nature. Systems can be designed with redundancy and scalability in mind, but they are dependent on underlying infrastructure, so outages may have widespread impact.
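The diverse-path principle Redfern describes, avoiding a single point of failure by keeping alternatives ready, reduces to a simple selection rule. The sketch below is illustrative only; the path names are invented and no real product API is implied.

```python
# Illustrative sketch (assumed path names, not a product API): diverse-path
# distribution resilience. Prefer the primary path; if it is unhealthy,
# fail over to the best-priority healthy alternative.

def select_path(paths):
    """paths: list of (name, healthy, priority) tuples, where a lower
    priority number means more preferred. Returns the name of the best
    healthy path, or None if every path is down."""
    healthy = [p for p in paths if p[1]]
    if not healthy:
        return None
    return min(healthy, key=lambda p: p[2])[0]
```

For example, if an SRT cloud path fails, the selector drops to a fibre JPEG XS path before resorting to satellite, so no single carrier outage takes the feed off air.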

Are there any common coordination issues between broadcasters and vendors – and how can they be avoided?

Alex Redfern: A stronger focus on outcomes can help reduce coordination issues. When all stakeholders are aligned on the end goal, the specific method of achieving it becomes less critical.

However, established working practices can present challenges, particularly when individuals or teams are accustomed to operating in a certain way based on previous events. As a result, misalignment tends to arise more frequently around processes than outcomes, which are typically well defined – for example, delivering a file to a specific destination or transmitting a stream to a defined endpoint.

Version misalignment is another common issue. Broadcasters often operate across multiple software versions within ecosystems. While this may function effectively at an individual level, it introduces complexity from a vendor perspective, particularly in terms of support and integration. Supporting multiple versions simultaneously increases operational overhead and can create challenges when integrating systems between different broadcasters, host broadcasters and rightsholders. In some cases, differences in software versions can hinder interoperability.

Coordination is further complicated by the global nature of major events. Teams are often distributed across multiple locations, including the International Broadcast Centre, regional hubs and broadcaster headquarters. So, ensuring consistent communication and information sharing across time zones presents an additional challenge. Vendors frequently act as a central point of coordination, facilitating communication between these groups. However, managing alignment across geographically dispersed teams remains a complex task, especially when operating at scale.

Ophir Zardok: Yes, and they tend to cluster around three particular areas: planning timelines, equipment logistics and access rights.

Planning timelines: The most effective vendor-broadcaster relationships at the Games are those that begin planning at least 12-18 months in advance, ideally with joint technical reviews of the venue maps and signal architecture. One of the reasons why LiveU’s operational footprint in Italy was so effective – 980+ units, 37 countries, near-zero reported operational failures – was because we embedded with key customers during pre-event planning. Some customers, for example, worked through their LiveU workflows during the Paris Games 2024 before scaling them massively for Italy in 2026.

Equipment logistics: Distributed events such as the Winter Games 2026 exposed how unforgiving logistics can be. The lesson: zone-based pre-positioning of equipment is not optional. Vendors must be briefed on the full zone deployment plan and have local support in each zone.

The overarching principle: It is necessary to treat key technology vendors as production partners, not just suppliers. Bring them into the planning process early, share your runbooks and establish communication protocols for the live window.

If you could give one don’t and one must-do for an event of this scale, what would they be?

Ophir Zardok: Don’t deploy new technology for the first time at the event itself. The Games are not the place for a proof of concept. Every novel workflow – whether it’s a new compression format, a cloud-based routing architecture or an AI-driven transmission system – needs to have been proven in live broadcast conditions before the Opening Ceremony. Our LIQ technology went through extensive pre-testing cycles before large-scale Games deployment. Innovation is absolutely essential – but it must arrive at the Games already battle-hardened.

For must-do, build a redundancy plan assuming your primary plan will partially fail.

At this scale, something will go wrong. Weather, access restrictions, equipment failures or network congestion – the event is too complex and distributed for everything to go perfectly. The broadcasters who performed best at the Winter Games 2026 were those who had designed every layer of their operation with the assumption of partial failure. Another European broadcaster relied on LiveU bonded cellular as its primary transmission path for roving teams. The must-do is simple: build the redundancy plan.

Alex Redfern: A key principle is to avoid deploying software or workflows that have not been thoroughly tested. Ideally, these should already have been validated in live production environments – such as test events – and represent already-proven, well-established technologies and processes. At the highest level, particularly for rights owners, there is a strong emphasis on using hardened, field-tested solutions.

Further along the chain, rightsholders may have slightly greater flexibility to take measured risks, though the importance of validation remains. The so-called ‘demo effect’ underscores this – success at a small scale does not guarantee performance at full scale.

The primary recommendation, therefore, is clear: do not deploy anything without rigorous testing. Workflows should be extensively stress-tested or proven in real-world conditions before implementation.

Conversely, testing must be taken seriously and conducted continuously. Assumptions based on previous success can lead to failure; systems that worked in one context may not perform identically in another. Consistent and repeated testing is essential.

Ongoing communication is equally important. Assumptions should not be made on behalf of partners; instead, regular dialogue is required to ensure alignment. Practices such as factory acceptance testing provide an opportunity not only to validate systems, but also to identify gaps, clarify expectations and strengthen collaboration. This process extends beyond technical integration to include alignment at a human level, making sure that all stakeholders share a common understanding of objectives and requirements.

An additional consideration for projects of this scale is team composition. Increasing the diversity of skill sets within event teams can have a positive impact on performance. While experienced engineers remain essential, incorporating individuals from other areas of the organisation can enhance empathy, communication and responsiveness. These perspectives can be particularly valuable in support functions, such as first-line response and stakeholder interaction. So, broadening team composition can contribute to more effective delivery, complementing technical expertise with a wider range of capabilities.

This article appeared in our NAB 2026 issue of FEED
