Broadcasters Tap Cloud For Content Management
The public cloud is an increasingly attractive option for broadcasters’ content sharing and archiving workflows, according to top engineers who gathered for a TVNewsCheck webinar last week. But stations need to do their homework first to achieve the same reliability they currently get from on-premises hardware. And the industry as a whole needs to improve the process for creating, capturing and preserving metadata throughout the content chain in order to take full advantage of cloud storage.
Those were the key takeaways from Storage, the Cloud & Optimizing Content Management, which featured content management experts from Sinclair Broadcast Group, Hearst Television, Warner Bros. Discovery, Vice Media Group and Vizrt and was moderated by this reporter.
Sinclair’s Head Start
Sinclair already has a significant portion of its archives and media management in the cloud. It moved the central archives for its 16 regional sports networks to the Amazon Web Services (AWS) public cloud platform last fall and is now in the process of moving regional storage to the cloud as well, said Mike Palmer, senior director, media management for Sinclair Broadcast Group.
A good number of Sinclair’s TV stations have already moved their archives entirely to the cloud, while the remainder are in a hybrid mode, still migrating their legacy tape libraries. Palmer estimates that migration will be completed by the first or second quarter of 2023. The company is also receiving and processing its syndicated programming and commercials through the cloud today.
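Sinclair hasn’t detailed its tooling publicly, but on AWS this kind of tape-replacement archiving is typically expressed as S3 lifecycle rules that shift objects into colder storage classes as they age. A minimal sketch using boto3, with the bucket name, prefix and day thresholds all hypothetical:

```python
# Hypothetical sketch: tier an archive bucket toward colder (cheaper)
# S3 storage classes as content ages. The bucket name, prefix and day
# thresholds are illustrative, not Sinclair's actual configuration.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-station-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-archive-media",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [
                    # Recent material stays quickly accessible.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Older material moves to a tape-library-like tier.
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```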
Palmer said that new remote workflows created during the COVID-19 pandemic helped prove the efficacy of cloud storage.
“The key to all of that was to make sure the users that were operating on-prem and in different random locations as they were working from home had uninterrupted access to that content, and at the end of the process they either had the same access or better access as they had before,” Palmer said. “In most cases, they had better access.”
Advances in compression technology and better visibility into how to “tier” storage at different bit rates are making the cloud more affordable. While Sinclair’s RSNs initially stored their legacy archive material at 50 megabits per second using MPEG-2 compression, they wound up storing it in the cloud at 17 Mbps using MPEG-4, with no visible impact on image quality. With news content, Sinclair is able to drop from a 35 Mbps production bit rate to 8 to 10 Mbps for archive storage.
“That dramatically reduces storage costs, especially over the long term,” Palmer said.
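The arithmetic behind that claim is straightforward: storage consumed scales linearly with bit rate, so cutting the rate cuts the recurring bill by the same factor. A rough back-of-the-envelope calculation, using an illustrative per-gigabyte monthly price rather than Sinclair’s actual rates:

```python
# Back-of-the-envelope storage cost per hour of content at different
# archive bit rates. The $/GB-month price is illustrative only.
PRICE_PER_GB_MONTH = 0.023  # hypothetical S3-Standard-like rate, USD

def gb_per_hour(mbps: float) -> float:
    """Gigabytes needed to store one hour of video at a given bit rate."""
    return mbps * 1e6 / 8 * 3600 / 1e9  # bits/s -> bytes/s -> GB over 3600 s

for label, mbps in [("RSN legacy, MPEG-2", 50),
                    ("RSN archive, MPEG-4", 17),
                    ("News production", 35),
                    ("News archive", 9)]:
    gb = gb_per_hour(mbps)
    monthly = gb * PRICE_PER_GB_MONTH
    print(f"{label:22s} {mbps:5.1f} Mbps -> {gb:6.2f} GB/hr, "
          f"${monthly:0.2f}/hr-month")
```

At those rates, an hour of 50 Mbps MPEG-2 occupies 22.5 GB versus 7.65 GB at 17 Mbps, a roughly two-thirds reduction that compounds across thousands of hours of archive.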
Palmer expects that Sinclair’s end state will see most of its operations, including playout, winding up in the cloud. But he emphasized the importance of metadata in making that shift.
“If you don’t have metadata, you can’t find the content, and if you can’t find the content, it doesn’t have value,” Palmer said. “So that’s a major concern for us. Some of the content we have received through acquisitions doesn’t have a lot of metadata on it. So, we’re going through lots of processes to make sure we have more complete metadata with that.”
Hearst Stresses Metadata Importance
Hearst Television began archiving its news content in the private cloud over a dozen years ago when it made the shift to file-based production. Over the past five years it has been archiving promos and other non-news content in the cloud as well, said Joe Addalia, director of technology projects for Hearst Television. The group has also started the process of digitizing legacy content that was stored on tape or film at individual stations, with an eye to eventually moving that to centralized cloud storage as well.
That time-consuming work has been completed at about a half-dozen stations to date, said Addalia, who echoed Palmer in emphasizing the importance of metadata in achieving an efficient archive.
“The most important thing is to be able to find what you’ve archived,” Addalia said. “Notice I didn’t say ‘search’ — because you can always search — but the real goal is the ‘find.’ In our news world, we’ve done very well at that. We have a good metadata set and good taxonomy to link to our editorial system, ENPS. So, we can certainly find our news archive quite well and restore [content] as needed.
“That find piece is so important. It’s not the technology, it’s the user being able to find what they need almost immediately and then have it at their disposal,” he said.
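Hearst hasn’t published its metadata model, but the distinction Addalia draws, structured metadata that lets users find a clip rather than merely search for it, can be illustrated with a small sketch. The field names and the ENPS story-ID link below are hypothetical:

```python
# Minimal sketch of structured archive metadata keyed for "find",
# not just free-text search. All field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ArchiveItem:
    slug: str                 # editorial slug, e.g. "city-hall-fire"
    air_date: str             # ISO date the story aired
    enps_story_id: str        # hypothetical link back to the ENPS rundown
    taxonomy: set[str] = field(default_factory=set)  # controlled terms

def find(items: list[ArchiveItem], *terms: str) -> list[ArchiveItem]:
    """Return items tagged with ALL of the given taxonomy terms."""
    wanted = set(terms)
    return [i for i in items if wanted <= i.taxonomy]

archive = [
    ArchiveItem("city-hall-fire", "2021-03-04", "ENPS-1001",
                {"fire", "downtown", "city-hall"}),
    ArchiveItem("mayor-presser", "2022-06-11", "ENPS-2042",
                {"mayor", "city-hall", "politics"}),
]

# A controlled taxonomy returns exactly the right clip, instead of a
# pile of fuzzy full-text matches for the user to sift through.
print([i.slug for i in find(archive, "city-hall", "fire")])
```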
Vice’s Centralized Cloud Archive
Vice Media Group has found the public cloud to be a unifying force for a rapidly growing company with offices spread around the globe and diverse production teams including branded operations, studios and documentary crews in the field. The company has centralized its archive in the cloud with a common technology stack available across all of its locations, said Dominic Brouard, director, media engineering for Vice. It now has the agility to quickly shift a production from one office to another, such as from London to New York.
“When it comes to content management and adopting the cloud, it really was a great opportunity for us to try and centralize from many different offices into a single location without necessarily investing a huge amount of infrastructure in one given place,” Brouard said. “It’s sort of democratized the technology a bit to our offices regardless of the scale, because it meant the same technology solution was being offered up even to the offices that had far fewer productions.”
Warner Bros. Discovery Bullish On Cloud Migration
The company Renard Jenkins works for grew a lot bigger this past April, when WarnerMedia completed its merger with Discovery. As SVP, production integration and creative technology services for Warner Bros. Discovery, Jenkins is steering archiving and content management for the company’s Hollywood film and entertainment studios while helping to integrate technology with the broadcast side, where Discovery was an early adopter of cloud playout. Jenkins said that Discovery’s technology leadership has been very open in sharing the workflows and steps it took in its cloud migration.
“They are definitely bullish in this area, while legacy WarnerMedia, and now Warner Bros. Discovery, was a little more cautious,” Jenkins said. “Especially on the film production side where cloud workflows have been explored and there have been a lot of POCs [proofs of concept], but not a lot have been adopted in that space right now mainly because of security concerns.”
Jenkins said there are a lot of options for cloud technology to help WBD’s film business, including HLS [Apple’s HTTP Live Streaming] playout and using the cloud for remote editing, and that the newly combined companies are working to identify “best of breed” technologies from both groups for use on both the studio and broadcast sides. Some film production functions, like high-end visual effects creation, are likely to remain on on-premises hardware and storage. But workflows like viewing dailies can be accomplished very efficiently in the cloud today, Jenkins said.
While security remains the top priority for high-value film content, the pandemic forced the Hollywood production community to dive in, moving some workflows out of testing and into day-to-day use, he added.
“How we can do it securely through the cloud is what our focus is on,” Jenkins said. “But those are workflows that are starting to see a little more push behind them.”
Hybridity Abounds
Media asset management (MAM) vendor Vizrt has some customers that are cloud native, like Amazon Prime Video, and others, like TV Globo, that are traditional broadcasters that have aggressively shifted most of their operations to the public cloud. But most of the company’s broadcast customers are in a state of transition, with a hybrid architecture that mixes cloud and on-premises storage, said Paulo Santos, senior solutions architect for MAM and cloud for Vizrt.
As broadcasters look to move their workflows into the cloud, Vizrt recommends they perform careful due diligence and, ideally, take a multi-cloud approach that spreads storage and compute across different vendors in order to achieve redundancy. But they need to look beyond the cloud platforms themselves, Santos said.
“One customer can choose two cloud providers, but if they are using the same telecom infrastructure you’re not protected,” Santos said. “So, you need to double-check this, such as what kind of fiber connections they’re using. You have to guarantee if you have a failure on Cloud Provider One, that you’re going to be able to reach your content on Cloud Provider Two. That’s what we recommend to our customers. There’s a very deep study that needs to be done, but in the end, you can guarantee very good security and reliability in your system.”
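The redundancy Santos describes only pays off if a failure at one provider can be absorbed transparently by the other. A minimal sketch of provider-level failover for content retrieval, with the fetch functions standing in for real cloud SDK calls and all names illustrative:

```python
# Minimal sketch of multi-cloud failover for content retrieval.
# The fetch functions stand in for real cloud SDK calls (e.g. S3 or
# GCS downloads); names and error handling are illustrative only.
from typing import Callable

def fetch_from_provider_one(key: str) -> bytes:
    raise ConnectionError("Cloud Provider One unreachable")  # simulated outage

def fetch_from_provider_two(key: str) -> bytes:
    return b"...media essence..."  # simulated healthy replica

PROVIDERS: list[Callable[[str], bytes]] = [
    fetch_from_provider_one,   # primary copy
    fetch_from_provider_two,   # replica; must sit on independent
                               # network paths, per Santos's caution
]

def fetch(key: str) -> bytes:
    errors = []
    for provider in PROVIDERS:
        try:
            return provider(key)
        except ConnectionError as exc:
            errors.append(exc)   # note the failure, try the next copy
    raise RuntimeError(f"all providers failed for {key!r}: {errors}")

print(fetch("archive/2022/newscast-0615.mxf")[:10])
```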
While Sinclair uses a hybrid cloud approach, that alone isn’t enough to guarantee reliability, said Palmer, who agreed that a careful analysis of the overall system architecture is required. He noted that some broadcast vendors might put their control plane in one cloud and their data plane in another, such as the control plane in Google Cloud Platform [GCP] and the data plane in AWS. That means that if a GCP outage knocks out the control plane, a customer’s application isn’t going to run in AWS, even though reams of compute resources there are unaffected by the outage. Without the control plane, the customer still wouldn’t have access to them.
“So, you put things in both clouds or in hybrid, you may think you’re more secure but you’re not,” Palmer said. “It all depends on your architecture. It’s a much more complicated environment with many levels of subtlety, and you really need to take a look at how your vendors are working in that environment and what interdependencies they might have.”
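Palmer’s point can be made concrete with a toy availability model: when a vendor splits its control plane and data plane across two clouds, the application needs both clouds up to run, so the split adds a failure mode instead of removing one. The probabilities below are illustrative, not real SLA figures:

```python
# Toy availability model for Palmer's control-plane/data-plane caution.
# Probabilities are illustrative, not real cloud SLA figures.
aws_up = 0.999   # hypothetical availability of the data plane's cloud
gcp_up = 0.999   # hypothetical availability of the control plane's cloud

# Vendor splits planes across clouds: BOTH must be up for the app to run.
split_planes = aws_up * gcp_up

# Both planes in one cloud, with the second cloud as a genuinely
# independent fallback: only BOTH clouds failing takes you down.
true_redundancy = 1 - (1 - aws_up) * (1 - gcp_up)

print(f"split control/data planes: {split_planes:.6f}")    # ~0.998001
print(f"independent redundancy:    {true_redundancy:.6f}")  # ~0.999999
```

The first number is lower than either cloud alone, while the second is dramatically higher, which is the difference between coupled interdependencies and genuine redundancy.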