Lawo: Spinning Elastic Demand
Posted on Apr 25, 2023 by FEED Staff
Navigating the peaks and troughs of a dynamic broadcast environment is never easy. Lawo is here to guide you
When sitting on a terrace somewhere warm, soaking in the evening sun and sipping a glass of wine with your friends, it never takes long for the conversation to veer towards the delightfully absurd.
One memorable example is a get-together that featured a semi-serious discussion about a blow-up butler that would be inflated when needed – to bring more wine and snacks – then deflated and stowed away when its service was no longer required. Those present enjoyed trying to outdo each other with the implementation and features of such a convenient concept.
Though a bit wacky, this isn’t unlike some recurring agility brainstorms in the broadcast world.
Usually, they stem from pressure to squeeze budgets, which makes it harder to justify capital expenditure on infrastructure that is only used sporadically or at peak times.
Channelling the Flux
For larger organisations, one way of balancing fluctuating capacity requirements is to connect the various sites to a distributed WAN IP infrastructure and leverage the compute power in Los Angeles, say, from Sydney or Milan. IP has made this a reality, not least thanks to the ST 2110 suite of open standards, AES67/RAVENNA and ST 2022-7 for rock-solid redundant set-ups.
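The ST 2022-7 approach mentioned above – sending two identical streams over disjoint paths and letting the receiver keep whichever copy of each packet arrives first – can be sketched in a few lines. This is an illustrative toy, not a real receiver implementation (which would also handle reordering windows and timeouts):

```python
# Minimal sketch of ST 2022-7-style seamless protection: the receiver listens
# to two identical packet streams and deduplicates by sequence number, so
# losing a packet on one path never loses the content.

def merge_redundant(stream_a, stream_b):
    """Interleave two redundant packet streams, keeping the first copy of each
    sequence number. A None entry marks a packet lost on that path."""
    seen = set()
    merged = []
    # zip-style interleaving stands in for "whichever copy arrives first"
    for packet in (p for pair in zip(stream_a, stream_b) for p in pair):
        if packet is None:
            continue
        seq, payload = packet
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A drops packet 2, path B drops packet 3 - the merge still recovers all three
path_a = [(1, "frame1"), None, (3, "frame3")]
path_b = [(1, "frame1"), (2, "frame2"), None]
print(merge_redundant(path_a, path_b))  # [(1, 'frame1'), (2, 'frame2'), (3, 'frame3')]
```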
Other initiatives have focused on pooling compute power in a central location – or two, for redundancy. In such data-centre-based infrastructures, the devices doing the number crunching are remotely controlled, using hardware- or software-enabled UIs, and often selected on a first-come-first-served basis. Chances are, the A__UHD Core or V__matrix devices you use today are different to the ones you had before. But as long as all features perform in the expected way, that's not a problem.
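The first-come-first-served allocation described above can be sketched as a simple pool of interchangeable devices. The device names and API below are hypothetical, not Lawo's actual control protocol:

```python
from collections import deque

class ProcessorPool:
    """Toy first-come-first-served pool of interchangeable processing devices.

    Any free device satisfies a request: callers never care *which* unit
    they get, only that it offers the expected feature set.
    """
    def __init__(self, device_ids):
        self._free = deque(device_ids)
        self._in_use = {}

    def allocate(self, production: str) -> str:
        if not self._free:
            raise RuntimeError("no processing capacity left in the pool")
        device = self._free.popleft()
        self._in_use[production] = device
        return device

    def release(self, production: str) -> None:
        self._free.append(self._in_use.pop(production))

# Hypothetical pool of core processors spread over two data centres
pool = ProcessorPool(["london-core-01", "london-core-02", "hilversum-core-01"])
print(pool.allocate("morning-news"))   # first free unit: london-core-01
pool.release("morning-news")
print(pool.allocate("sports-live"))    # london-core-02 - a different unit next time
```

The point of the sketch: as long as every unit behaves identically, the caller never needs to know which physical device it was handed.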
Broadcast control systems like VSM or bespoke software solutions make allocating required production capacity a breeze. Most operators are now comfortable with a centralised approach – even if they don’t know exactly where their audio and video data is being processed.
Just think of a commentary studio in Ankara (Turkey) connected to one of 20 Power Core processors in either London (UK) or Hilversum (the Netherlands); or a video processor that performs up/down/cross conversions in Oslo (Norway) for a production in Bergen (Norway again).
All of the above works like clockwork, as long as there are enough computational units available to serve all production needs. For Tier 1 sporting events, host broadcasters and broadcast service providers often rent additional equipment for the duration of the event. Renting infrastructure at peak times is, after all, a good approach to avoid investing in extra production capacity that may not be needed again until next year.
If requested, the company providing additional equipment can send a small team to the event to assist the broadcaster with setting everything up. For decades, this approach has made broadcast operations more agile, keeping budgets from spiralling out of control.
This business model is likely to endure for anything to do with hands-on control, such as video switching and audio mixing. On the other hand, pure audio and video processing like signal conversions, colour grading, multiviewer or mosaic generation could be migrated to the cloud.
In practice, bandwidth consumption and latency may be higher than you expect, and you can never be certain that only authorised users actually have access to your content. Budgeting for this approach could turn out to be far trickier than you initially thought.
Costing it out
Cost is undeniably a crucial factor. As a result, opinion seems divided between those who argue that the less you need to write off, the more cash flow you retain, and those adamant that hardware is essentially free of charge once it has been written off – with subscription models only moving in one direction: up.
Such estimates don’t usually include the bandwidth required to send data into the cloud for processing and haul it back again – AKA cloud egress. As a result, it may turn out to be a bigger expense item than anticipated.
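A back-of-envelope calculation shows why egress deserves a line of its own in the budget. The per-gigabyte price below is a hypothetical placeholder – real tariffs vary widely by provider and volume tier – but the bit rate of uncompressed HD is simple arithmetic:

```python
# Back-of-envelope egress cost for streaming uncompressed HD video to the cloud.
# The $/GB figure is an illustrative assumption, not a quoted price.

def egress_cost_per_hour(bits_per_second: float, usd_per_gb: float) -> float:
    """Hourly egress cost for a continuous stream at the given bit rate."""
    gigabytes_per_hour = bits_per_second * 3600 / 8 / 1e9
    return gigabytes_per_hour * usd_per_gb

# Uncompressed 1080p60 10-bit 4:2:2 (roughly what ST 2110-20 carries):
# 20 bits per pixel at 60 frames per second
uncompressed_bps = 1920 * 1080 * 20 * 60
print(f"{uncompressed_bps / 1e9:.2f} Gbit/s")  # 2.49 Gbit/s

# At an assumed $0.08/GB, a single uncompressed stream costs roughly $90/hour
print(f"${egress_cost_per_hour(uncompressed_bps, 0.08):,.0f} per stream-hour")
```

Compression (JPEG XS, for instance) shrinks that figure considerably, but the exercise explains why a budget that ignores egress tends to surprise the CFO.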
Hardware infrastructure admittedly generates sizeable data-transfer costs of its own – increasingly in the guise of dark-fibre lines – when the compute power sits at a different location. This can be offset by cheaper leases and the fact that one device may be shared by at least three locations at different times.
A rental-based approach, on the other hand, requires thorough planning – and may not work at times of unexpected peaks, as chances are that the equipment you needed yesterday is already taken, or would take too long to arrive or come online. Long-term leases are almost certainly more costly than purchasing the kit outright, so they are not really an option.
This brings us to the what-if section. Imagine a set-up that lives partly in your private cloud – a central location or scattered across the world – and partly on one of your premises where it’s more convenient. Add to that the possibility of soliciting a public-cloud-based service for a short period of time. This would make your operation seriously agile.
Next, look at how – and how much – you would be willing to spend on such an infrastructure, knowing that your CFO is a fan of predictability and lean expenses. What would be the best way to solve this conundrum?
Then, picture for a moment what would happen if there was no benefit to be gained from updating your number-crunching tools for the job at hand or simply because you preferred the previous version. Upgrading takes time we no longer have.
Add the option to take the concept of software-defined hardware to the extreme, so as to be prepared for almost anything your broadcast activity may throw at you. This goes beyond agility as we currently know it.
Such a concept has the power to make your infrastructure and operation as elastic as a rubber band, without ever snapping. It goes without saying that you would have to remain in control at all times, from a location of your choice. Think of what that could do to the carbon footprint of your OB fleet and to the space you can save in your equipment room.
It may not be quite as fun as an inflatable butler, but it would certainly change the broadcast world.
Originally featured in the spring 2023 issue of FEED.