C2PA Archives - TV News Check
https://tvnewscheck.com/article/tag/c2pa/

Metadata Is Key To Archive Monetization
https://tvnewscheck.com/journalism/article/metadata-is-key-to-archive-monetization/
Thu, 21 Dec 2023
Executives from Fox News, Sinclair and Hearst Television discussed efforts underway to organize and capitalize on their massive archives at last week’s NewsTECHForum, where efficient — and potentially less expensive — methodologies are beginning to emerge.

The post Metadata Is Key To Archive Monetization appeared first on TV News Check.

Broadcasters want to derive more value from their archives by enriching daily news production, creating original programming for multiplatform distribution and generating new revenues from third-party licensing. But to do so they need to be able to easily search through and access old content, no easy task for legacy broadcasters with decades of analog tapes, and even film canisters, sitting in storage.

Several groups have undertaken large-scale digitization efforts to tackle the problem, with some exploring new AI and ML (machine learning) tools to more efficiently tag and index video. Regardless of the method, generating accurate metadata is key to any archive efforts, both for old content and fresh material being created today, said broadcasters last week at TVNewsCheck’s NewsTECHForum in New York City.

Metadata’s Critical Role

“Before we can actually monetize the archives in a reasonable way, we have to have metadata on it,” said Mike Palmer, AVP, advanced technology/media management for Sinclair. “And in many cases, most cases, we have not been putting good metadata on it.”

Palmer, speaking on the panel “Harvesting the Archive for New Content and Opportunities” moderated by this reporter, said archive metadata must not only include enough information to find content using a media asset management (MAM) system. It also needs to have information about the rights attached to the content, since most call-letter stations have a mix of content they shot themselves, and fully own the rights to, and derivative content originally sourced from a network news service.

There isn’t any technical means today to tell whether a station owns a piece of content or not, Palmer said. That question can usually be answered only by calling and (hopefully) finding an employee who was there when it first aired.

“How long have we been talking about archives and metadata, but we’re not bringing back basic information about ownership, what camera it was shot on, the date, the geolocation, all this metadata that is in the cameras that we should be carrying forward,” Palmer said. “And we’re recreating the same problem that we’re trying to solve today with AI and ML because we’re simply not putting the right metadata on that content as it moves into the archive.”

Palmer said the culprit for lost camera metadata is often nonlinear editing systems that strip it out during the production process. To combat the problem going forward he sees a solution in the Coalition for Content Provenance and Authenticity (C2PA) standard, as promoted by the Content Authenticity Initiative (CAI). C2PA specifies provenance metadata that survives all the way from camera to distribution. C2PA not only addresses content ownership, but also content authenticity, an issue of growing importance in the age of AI-generated fake images.
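Palmer’s point, that provenance metadata should be cryptographically bound to the content so it survives from camera to distribution, can be sketched in miniature. This is an illustrative simplification, not the actual C2PA format: the real standard defines a manifest structure and uses X.509 certificate signatures, while the HMAC key and field names below are invented stand-ins.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"stand-in-key"  # real C2PA signing uses X.509 certificates, not a shared HMAC key

def make_manifest(asset: bytes, camera: str, shot_date: str, geo: str, owner: str) -> dict:
    """Bind capture metadata to the asset via a content hash, then sign the claim."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "camera": camera,
        "date": shot_date,
        "geolocation": geo,
        "rights_owner": owner,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify(asset: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the signed claim after moving downstream."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"], hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    ok_hash = manifest["claim"]["asset_sha256"] == hashlib.sha256(asset).hexdigest()
    return ok_sig and ok_hash

clip = b"\x00\x01raw-video-bytes"
m = make_manifest(clip, "ENG Cam 3", "2023-12-14", "40.7128,-74.0060", "WXYZ (station-shot)")
assert verify(clip, m)             # untouched asset validates
assert not verify(clip + b"x", m)  # any edit breaks the binding
```

The ownership question Palmer raises falls out naturally here: the rights owner travels inside the signed claim, so it cannot be silently lost or altered the way editing systems strip camera metadata today.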

‘A Wildly Human Process’

To improve accessibility of content for its journalists and producers, Hearst Television began digitizing the archives across its stations in 2021. To date it has digitized about 20%-25% of its archive material, representing roughly 45,000 hours of video.

“We parachute into a couple of stations at a time and help them digitize their archives in a systematic way,” said Devon Armijo, director, digital news integration for Hearst Television. “We bring in archival staff that handles not only the physical media but also the paper data that associates with it. Not only do we focus on digitization, but they also are not only tagging. They are looking at it in a discovery way, making sure they’re telling about the editorial opportunities, the promotional opportunities and sometimes the sales opportunities that are there in the archives — things that are sealed in the tapes that folks may or may not know that they have.”

While Hearst makes some use of automation, Armijo said that digitization remains “a wildly human process,” particularly when dealing with physical media that is beyond its end of life, such as 40-50 year-old tapes. That is where Hearst’s archivists serve as “the first line of defense.”

“They’re putting tapes through on a daily basis and making so many human decisions, up front at the beginning of digitization, that helps you with any sort of automation that rolls through afterwards,” Armijo said. “We had some automation processes throughout, like black [frame] detection. But that stuff is all secondary to the human decisions, the conversations, and understanding the history of not only the station but the content that’s there in your archive.”

Hearst licenses archive content to third parties, Armijo said, but the group itself remains “our first customer.” So far this year, Hearst has used its archive to produce over 370 pieces of digital original content along with a handful of linear specials and some local streaming content, including the popular true crime series Hometown Tragedy.

Fox is digitizing the archives across its station group as well as Fox News and Fox Business and bringing them into cloud storage. It has taken a different approach than Hearst by outsourcing the work, which encompasses tens of thousands of U-matic, one-inch and two-inch tapes, 16mm and 35mm film and various digital tape formats.

“We have tractor trailers come and pick up the entire library and it goes off to one of our five digitizing vendors, and then it works through their process,” said Ben Ramos, VP, Fox Archive, field and emerging tech, Fox News. “They have around 35 metadata enhancers who watch every frame of it, and kind of tag it as they’re going through it. It’s very manual, we haven’t gotten to too many AI/ML tools yet.”

Fox’s first goal was to preserve “at-risk” content like one-inch, two-inch and U-matic libraries, with the second objective being to generate ROI by licensing content to third-party documentary filmmakers. The initial effort was aimed at 5,000 U-matic tapes.

“What do we have in there, what’s the failure rate, and can we find ROI?” Ramos said. “We found ROI within six months, so that kind of supercharged the process, and then we got to do the rest of the 70,000 U-matic, two-inch and one-inch, and then we started dipping into the more expensive 16mm.”

Fox has experienced a failure rate of 3%-5% on that older content, and those impaired assets are now sitting on two pallets “awaiting further remediation,” Ramos said. That could involve baking them for several weeks to remove moisture, or even cracking tapes open to clean them and rehouse them.

Overall, it is a slow process, and so far, Fox has only digitized about 8% or 9% of its total physical media assets. One of the surprising findings is that newer formats like Beta, DV and DVCPRO tapes are also experiencing similar 3%-5% failure rates during the digitization process, and some of the older one-inch and U-matic tapes are actually playing better depending on how and where they were stored.

“Now everything feels a little bit at risk,” Ramos said.

Finding Answers With AI, ML

Sinclair was early in archiving some of its content in the public cloud, and last year struck a deal with producer Anthony Zuiker to mine its news archives to create original content that can be licensed to third parties. The group has around 23 million assets that were “born digital,” Palmer said, which means they have been archived from a newsroom computer system with a script attached. Those assets have accurate metadata, allowing one to search that content across the entire enterprise and access it. Sinclair also has another roughly 10 million assets sitting on shelves on varied physical media.

“The question at this point is what do we want to invest in to bring this back?” Palmer said. “We look at news content, and it’s a fact that most news content has no value in the archive. It is the rare jewel that justifies the expense of all the rest of the work that you put into that. So, we’re focused right now in trying to determine, to the best of our knowledge, which portions of the archive have the highest probability for containing those jewels, and then go mining in that direction. And we may not — I say may, because there are no hard decisions at this point — but we may not want to go back to those 10 million assets and actually digitize them all. It depends on what we find.”

Sinclair has worked with archiving vendor Memnon to digitize cutsheets and tape labels on stored media at a few stations. It plans to use AI tools like optical character recognition (OCR) to analyze them and hopefully generate good descriptions that it can then use to determine what is worth digitizing.
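Sinclair’s OCR-then-triage plan might look something like the sketch below, with the OCR engine’s output stubbed in as plain strings and the keyword list invented purely for illustration:

```python
# Hypothetical triage over OCR'd tape-label text. The OCR step itself (any engine
# producing plain text from scanned cutsheets) is stubbed out as input strings.
HIGH_VALUE = {"election", "disaster", "championship", "interview", "breaking"}

def score(label_text: str) -> int:
    """Count high-value keywords on a label; higher score = digitize sooner."""
    words = {w.strip(".,:").lower() for w in label_text.split()}
    return len(words & HIGH_VALUE)

labels = {
    "T-0412": "Mayoral election night interview raw feed 11/92",
    "T-0413": "Weather graphics b-roll",
}
# Tapes most likely to contain "jewels" go to the front of the digitization queue.
queue = sorted(labels, key=lambda t: score(labels[t]), reverse=True)
assert queue[0] == "T-0412"
```

The point of the sketch is the economics Palmer describes: scoring a label costs almost nothing, so the expensive digitization budget can be aimed where the probability of a jewel is highest.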

Fox Sports has spent several years on its own complex archive project with Google to create a system that allows producers to quickly call up old footage, such as to enhance a halftime package. Ramos said he has been given access to it and “playing with it for about six months.” The system uses two kinds of metadata: metadata created by human loggers, as well as metadata created by the same ML algorithms that form the basis of YouTube search. A user has a choice of searching by either type.

“It’s definitely working,” Ramos said. “It’s a massive, massive archive, it’s huge. They’ve got a lot of content in there, so it would be really hard to search otherwise.”

Ramos’ own budget for AI/ML tools is more modest, so his team has focused on the least expensive AI tools, speech-to-text and OCR, and runs content through the AI tools themselves.

“Usually when there’s an anchor or a reporter talking about something, it relates to the video that’s covering that,” Ramos said. “So that’s been a really good way for us to inexpensively find most of what we need. But it’s not 100% of the way there.”
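Ramos’s speech-to-text approach amounts to building a searchable text index keyed to timecodes. A toy version, with invented transcript data standing in for real speech-to-text output:

```python
# Toy transcript index: (asset id, start time in seconds) -> transcript text.
# In practice the text would come from a speech-to-text pass over anchor/reporter audio.
index = {
    ("pkg_001", 0):  "anchor intro on downtown flooding",
    ("pkg_001", 45): "reporter live at the river bank",
    ("pkg_002", 0):  "sports highlights from the championship game",
}

def find(term: str):
    """Return the (asset, timecode) pairs whose transcript mentions the term."""
    return [key for key, text in index.items() if term.lower() in text.lower()]

assert find("flooding") == [("pkg_001", 0)]
assert find("championship") == [("pkg_002", 0)]
```

As Ramos notes, this works because spoken narration usually describes the covering video, so a cheap text search gets you most of the way to the right footage.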

Finding Affordability

French company Newsbridge wants to make indexing archive content and searching through it more affordable. The company has developed a cloud-based AI engine called MXT-1 that can quickly sift through archive video and generate human-like descriptions, and do it more affordably than conventional AI systems, said Newsbridge CEO Philippe Petitpont. Its indexing technology can also be applied to ingesting live content.

“With 1,000 hours of archive, there might be three hours that are hidden gems that have a lot of value,” Petitpont said. “So, you need to analyze 1,000 hours but there are maybe only three or four that are relevant. The problem is that current AI, monomodal indexing technology is very expensive. You don’t want to spend $10 million to index something that might be valuable for just two or three hours. So, we took this problem and have been working on it for a few years. We need AI with video understanding that is able to be very efficient, so that it can meet business realities in terms of pricing.”

Petitpont said a key differentiator for Newsbridge’s AI is that it is multimodal, which means that it doesn’t just analyze speech or recognize text but considers multiple types of data within video as a human would. And instead of analyzing each individual frame of video, MXT-1 employs “smart subsampling” and only looks at a few key relevant frames. This cuts down on the use of expensive graphics processing units (GPUs) on public cloud compute and avoids wasting money by “overindexing” content.

“We only process a frame that will really best illustrate the content,” Petitpont said. “So then we’ve reduced by an order of magnitude a lot of traditional sampling.”
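Smart subsampling of the kind Petitpont describes can be approximated with a simple frame-difference rule: keep a frame only when it differs enough from the last frame kept. The frames below are toy pixel lists and the threshold is arbitrary; a production system would use far smarter scene analysis, so treat this purely as a sketch of the cost-saving idea:

```python
def subsample(frames, threshold=30):
    """Keep only frames that differ enough from the last kept frame, so the
    downstream (GPU-heavy) indexing model sees far fewer frames."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[kept[-1]]))
        if diff >= threshold:
            kept.append(i)
    return kept

# Synthetic "video": a static shot, a hard cut, then another static shot.
static_a = [10, 10, 10, 10]
static_b = [200, 200, 200, 200]
video = [static_a] * 50 + [static_b] * 50
assert subsample(video) == [0, 50]  # 100 frames collapse to 2 representatives
```

This is exactly the order-of-magnitude reduction Petitpont claims: two representative frames carry the descriptive load of a hundred.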

Sinclair is not currently a customer of Newsbridge, but Palmer said when he spoke with them he was impressed by their smart subsampling approach. The company obviously had arrived earlier at the same conclusion that his team at Sinclair had reached.

“That was, that you don’t need to look at every frame of video,” Palmer said. “You don’t need to do some of these massive tagging things for every frame of video. Some of these AI models will create pages and pages of metadata for each frame of video, and that is not appropriate for news. Less in some cases, and probably this case, is better.”


Read more coverage of NewsTECHForum 2023 here. Watch this session and all the NewsTECHForum 2023 videos here.

Talking TV: How News Content Authentication Is Battling AI
https://tvnewscheck.com/tech/article/talking-tv-how-news-content-authentication-is-battling-ai-2/
Fri, 27 Oct 2023
In this repeat of the Talking TV episode from Aug. 18, Pia Blumenthal, design manager for the Content Authenticity Initiative at Adobe and co-chair of the UX Task Force at the Coalition for Content Provenance and Authenticity, explains how the proliferation of generative AI is making that job a lot harder. A full transcript of the conversation is included.

Talking TV: How News Content Authentication Is Battling AI
https://tvnewscheck.com/journalism/article/talking-tv-how-news-content-authentication-is-battling-ai/
Fri, 18 Aug 2023
Pia Blumenthal, design manager for the Content Authenticity Initiative at Adobe and co-chair of the UX Task Force at the Coalition for Content Provenance and Authenticity, is on the front lines against news disinformation. She explains how the proliferation of generative AI is making that job a lot harder. A full transcript of the conversation is included.

The Coalition for Content Provenance and Authenticity (C2PA) — a group composed of technology and media companies — was formed to help combat disinformation by authenticating news content at its source. It was a tough job at the outset, but the emergence of generative AI is making it much harder as bad actors are equipped with ever-better tools.

Pia Blumenthal works with C2PA as co-chair of its UX Task Force, which she does alongside her day job also fighting disinformation as design manager for the Content Authenticity Initiative at Adobe. In this Talking TV conversation, she explains the work she’s doing in each capacity.

It’s work with which every newsroom needs to become acquainted as opportunities for their own news products to be manipulated proliferate. Content authentication will likely become an essential tool to help retain trust, which is already heavily eroding in an age rife with disinformation and misinformation.

Episode transcript below, edited for clarity.

Michael Depp: The Coalition for Content Provenance and Authenticity, or C2PA, was formed to tackle the prevalence of misleading information online by developing technical standards for certifying the source and history, or provenance, of media content. Essentially, C2PA is building tools to ensure that content is actually coming from where it purports to come from.

This coalition, which is comprised of Adobe, Microsoft, Intel, BBC, Sony and others, has its work cut out for it given the proliferation of misinformation and disinformation and the ever-growing sophistication of the tools used to propagate it.

I’m Michael Depp, editor of TVNewsCheck, and this is Talking TV. Today, a conversation with Pia Blumenthal, design manager for CAI at Adobe, where she leads design for Adobe’s Content Authenticity Initiative. She’s also co-chair of the C2PA UX Task Force. We’ll be catching up to the very latest on where this provenance authentication is progressing and how it is adapting to developments in AI. It’s an essential conversation for every newsroom concerned with the authenticity of the content it receives and disseminates, which is to say every newsroom. We’ll be right back.

Welcome, Pia Blumenthal.

Pia Blumenthal: Hi Michael. Thank you so much for having me today.

Thanks for being here. Pia, first, for the uninitiated, can you frame up the nature of the work that you do at C2PA? It’s an awkward acronym; it sounds a little bit like a Star Wars droid, but it’s not a droid. If you can frame up the work you do there and at Adobe and where this intersects with news content.

Of course. Well, actually, let me invert that order. So, the Content Authenticity Initiative is an Adobe-led initiative. We’re a community of, at this point, about 1,500 members, including media and tech companies, NGOs, academics and others working to promote the adoption of an open industry standard for content authenticity and data transparency. The C2PA, on the other hand, the Coalition for Content Provenance and Authenticity, is a collaboration between CAI and another previously existing entity, Project Origin, led by Microsoft and the BBC. And so, these two projects merged to form what is the technical standards body driving best practices and the design of how we implement provenance across all media and content types, or really any type of implementation, from a publisher to social platforms handling a variety of concerns, especially today with AI being one of those concerns, and how we might make content more transparent.

To your knowledge, are newsrooms sufficiently aware of what C2PA is and what this work is all about?

We do have a number of both wire services and news media publishers who are investing in CAI. We hope that they soon begin their own implementations of the C2PA standard, and to assist with that, the CAI has developed a suite of open-source tools built on the C2PA spec that anyone, but especially publishers of these media, can begin to integrate into their systems, to help their consumers — and really anyone beyond just their platforms — understand where the content is coming from, who’s responsible and what may have happened to it along the way.

And as I understand it, there’s been a little bit of a road show going on the last year or so to kind of proselytize this, get the word out in media circles.

Yes, that’s certainly true. Our mission began to address mis- and disinformation concerns, which, of course, are being accelerated with all of the new generative AI technology that we’re seeing today. But even several years ago, around 2019, when I was first introduced to this work at Adobe, we saw what happened with the Nancy Pelosi cheapfake. It was a simple edit to slow the speed of a clip of Nancy speaking to make her look like she was slurring her words. That’s something that we call a cheapfake. And so, of course, those concerns are accelerated now. And it’s very hard to actually detect whether something is actually a source of truth.

So, do you call it a cheapfake because it was sort of simply done and it wasn’t very sophisticated and easy to spot?

Correct. Exactly. You know, we don’t need a ton of sophisticated technology to still intentionally mislead people.

Right. OK. So, tell me about the progress that you are making, generally speaking, in terms of being able to authenticate more types of content, of media content, now.

Of course. So, I would say largely implementations have started with images: photos or images created in software like Adobe Photoshop. We are working toward, at least on the C2PA UX best practices side, implementations around video provenance, audio and soon documents like PDFs. We try to outline, again, how people need to interact with different media types in a variety of scenarios.

So, the best practices have to really be super flexible to handle any type of content, content theme, a place where it could be surfaced, and more importantly, the types of information that could be unique to that content. So, anything from identity associated with the creators or editors to the types of edits that might have happened, the ingredients that were used to create those pieces of content, and then we attach that to the content itself. So, it follows it wherever else it may go, and then over time builds this rich trail of provenance information that someone can look back to and hopefully find the origin points.

Where does it follow it exactly? Is it sort of a metatag string, or how does it manifest inside of this piece of content?

What we do is we take all of this metadata, some of which already exists, and as much as possible, the C2PA relies on existing metadata frameworks, schemas like Exif for cameras or IPTC for photography, and we package that into the content itself. I think the biggest differentiator for us between other types of metadata is that we apply a level of verification through a digital signature.

And so that really means that there’s a responsible entity, the signer who says that this is the state of this data at the time that it was exported or saved or created, and then that data either lives within the content itself or is referenced on an external remote cloud so that if the data is ever stripped, there is actually a record that can be repaired through something that we call soft binding or digital content fingerprinting.

So, we basically look at that content and say this is actually what it matches on the cloud itself. And therefore, if that data is stripped off, we can refer back to it through the cloud.
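The soft-binding recovery Blumenthal describes can be sketched as a content-derived lookup: compute a fingerprint from the bytes themselves and use it to find the manifest in remote storage, even after all embedded metadata has been stripped. A real system would use a perceptual fingerprint that survives re-encoding; the plain SHA-256 stand-in below only matches bit-exact copies:

```python
import hashlib

cloud = {}  # fingerprint -> provenance manifest, standing in for remote cloud storage

def fingerprint(asset: bytes) -> str:
    # Stand-in for a perceptual/digital content fingerprint.
    return hashlib.sha256(asset).hexdigest()

def publish(asset: bytes, manifest: dict):
    """Record the signed manifest in the cloud, keyed by the content itself."""
    cloud[fingerprint(asset)] = manifest

def recover(stripped_asset: bytes):
    """Asset arrives with all embedded metadata removed; look it up by content."""
    return cloud.get(fingerprint(stripped_asset))

photo = b"jpeg-bytes"
publish(photo, {"signer": "Example News", "edits": ["crop"]})
assert recover(photo) == {"signer": "Example News", "edits": ["crop"]}
assert recover(b"unknown-bytes") is None
```

The design choice to key the record on the content rather than on embedded tags is what makes the binding "soft": stripping metadata no longer severs the provenance chain.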

How does the content originator make that digital signature? Is that something that’s embedded in the Adobe program, for instance, on which it’s being edited?

This could go a little bit beyond my expertise as a designer. Our signature model, our trust model, is based on the existing one that you might see across the internet. How do you know a website that you go to is trustworthy? You look for that little browser lock, right? There’s an SSL trust certificate that a series of different entities disseminate and also agree to respect. In the absence of that trusted certificate, it signals to you as the viewer that, you know, you might not want to look at this, or you proceed with caution.

And so that’s essentially how our trust model works. Adobe is in and of itself a trusted entity that’s issuing signatures for applications like Adobe Photoshop or Lightroom, where we have a beta experience being developed and any of the other upcoming soon-to-release features. Adobe in this case is the signer. Photoshop would be the machine validator of any sort of edits that someone might take on a piece of content. And then, going down the trust signal list, we also have anything that a person can manually enter about their content. That’s where identity comes into play.

But in order to support your identity claim, we, at least within the Adobe ecosystem, have created a series of connected accounts. Social media or Web3 accounts that someone can head off into and then include in their content credentials to help give them that social proof. Kind of in the absence of having a verified identity service, which is something that we are collectively working towards.

So, this will serve as a good proxy until you can get that retinal scan?

Hopefully it’s not to that level as certainly that would be off-putting for many. But there are countries in the E.U. that already support verified identity. We’re looking at those as models, even states within the United States that are moving towards a more digitally secure identity service.

AI and its ability to generate images and videos is complicating this whole process, it would seem. Can you describe how?

Well, AI has reached a level of adoption and sophistication where it’s in the hands of many. And there isn’t much regulation around the world, although there’s certainly an increased effort in the EU and one trending in the United States. And so, at scale, there’s a huge concern that as the technology just continues to get better and better, it’s harder and harder to detect. That’s the biggest concern right now. And so, we offer a proactive way for people to claim attribution and transparency about how something was made. And we think that this is going to be a really powerful way for consumers of content all around the world to be able to look for that provenance data and then make more informed trust decisions about that content.

Maybe this is a little too sci-fi a question, but is it getting closer to the place where it could outsmart you on the user authentication front, that it could generate these triangular kind of identities that you verify and make you think it’s an actual person?

I think we are moving in that direction. As these tools get better, the detection mechanisms need to also keep up, and it’s going to outpace that effort. I mean, fewer and fewer detection processes may be able to catch this type of content at scale.

Are you building tools that can delineate content that has been built by AI specifically?

Well, in the case of Adobe, where we have our own generative AI platform called Firefly, we have built content credentials directly into the core experience. So, Adobe is tackling this in a number of different ways from sourcing the content for training ethically using Adobe Stock material and open licensable imagery to, of course, including something that we’re calling an AI disclosure and that’s within the content credential itself. Every Adobe Firefly image comes with a content credential that says this was made with an AI tool.

Is this more difficult when some of the content has been created with AI but not all of it?

Yes. There is now in Adobe Photoshop a beta feature called Generative Fill that essentially takes an existing image and then allows users to fill in areas of that image with new generated content. It’s also called inpainting. There are other tools that allow you to do this. And again, as part of the larger initiative, they are also thinking about this type of disclosure that says some or all of this content was made with an AI tool. On the C2PA side, how we tackle that is, again, looking at an existing framework created by IPTC called Digital Source Type. You can say this is a synthetic composite. We can have a little bit more nuance in terms of the type of labeling that you might expect to see based on how these tools are being utilized.
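The labeling nuance described here, choosing a different disclosure depending on how much of the content is generated, might be sketched as follows. The term strings are modeled on the IPTC Digital Source Type vocabulary as best understood; verify exact values against the published vocabulary before relying on them:

```python
def disclosure(generated_regions: int, total_regions: int) -> str:
    """Pick an AI-disclosure label for a content credential based on how much
    of the image was generated. Label strings modeled on IPTC Digital Source Type."""
    if generated_regions == 0:
        return "digitalCapture"                    # straight from a camera
    if generated_regions == total_regions:
        return "trainedAlgorithmicMedia"           # wholly AI-generated (e.g. Firefly)
    return "compositeWithTrainedAlgorithmicMedia"  # partly generated (e.g. Generative Fill)

assert disclosure(0, 4) == "digitalCapture"
assert disclosure(4, 4) == "trainedAlgorithmicMedia"
assert disclosure(1, 4) == "compositeWithTrainedAlgorithmicMedia"
```

The middle ground is exactly the inpainting case raised in the question: one generated region inside an otherwise captured photo gets the composite label rather than an all-or-nothing one.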

And that warning or that caveat is visible to the user. I mean, you’ve got to make sure, of course, that that gets to the consumer when you’re talking about a news context here, because if a consumer can’t see that, then the caveat is meaningless.

Absolutely. So, I think the way to think about content credentials and really the implementation of C2PA data more broadly is that there are multiple parties. There’s the creator side that chooses the types of information they want to include in the content credential, which then appears on the consumer side. The consumer side is really the more challenging aspect to design for because we need to make sure that for the uninitiated, this information is understandable.

There’s also an incredible behavior change, which is how do we let people know that this type of data is available? How do we inform them of the trust model? Through the C2PA UX Task Force, we created a series of progressive disclosure experiences, starting with just an icon that indicates the presence of content credential data, followed by this lightweight summary, which is where you would expect to see that type of disclosure. And then for those who need to dig in more and see the entire provenance chain, they should be able to do that. And then, of course, for the forensic experts who need to see the raw code itself and really the rest of the rich information that just might not be consumer friendly or understandable, they should also be able to do that.

It seems like there’s a lot of work that needs to be done here, not just in terms of individual newsrooms catching on to this system, but consumer literacy here. And media literacy is already a pretty challenged area, almost everywhere. So, this can’t be too complex of a system for the average consumer to understand.

Absolutely. We like to talk about content credentials from this perspective as being part of a three-legged stool. You have detection, of course, but you have to help bolster that with a proactive measure. That’s where we have content credentials. And then the last leg is the need for better and increasing digital media literacy that now helps people understand what AI is, how it works, where they might experience that.

And on that front, the CAI has actually created a suite of educational materials for middle school, high school and higher education. We are actively working with academics to create that content and to disseminate it into classrooms around the world.

But that dissemination is tough because there’s not a central United States curriculum. And so, you’ve got to do that at the level of almost every school board, and sometimes states, or in Canada provinces. They have some media literacy programs in place, but not really at scale almost anywhere. So, that’s going to be a hell of a slog.

I mean, I would say nothing about what we’re working on is easy. But the best part is that there are multiple extremely intelligent individuals from many different companies covering a wide variety of verticals, all thinking about these problems. It truly has to be an industry-wide effort, but it also has to require government support from different countries that can trickle down to, you know, classrooms, academics, researchers. No one company can solve this problem. It really takes everyone to invest.

Do you foresee media companies, actual newsrooms, getting involved in direct consumer education on this front as well? Do you think that they’ll have to absorb part of the burden and go to their viewers or their readers and explain this periodically?

I can’t necessarily speak to their direct relationship to academic settings, but I can say that again, through the C2PA UX Task Force, one area of recommendations we’re actively working on is how to help implementers talk to their different audiences about what we’re doing. So again, that is a core concern for us, is we need to make this experience simple and understandable. A lot of research is involved in continuing to optimize for those things and so ultimately, we’ll have the set of best practices that we hope implementers can utilize for faster results based on our new understanding and design.

OK, I’m going to stop drawing you away from your end of the pool quite so far. I want to ask you about Adobe’s Do Not Train tag, which you’ve added for content creators to use if they don’t want AI to train on that piece of content. Can you explain why that would be something that they would want to employ and how that works?

Yes, of course. This is something that was introduced in a C2PA spec, and there are a number of subtle differences in the ways you may not want your content trained on. But ultimately, we know that from Adobe's perspective, our audience is creators who work really hard to develop a style and a unique perspective on their art. We want to help them protect that content from web crawlers that are just looking to gather training data for AI models. And so the idea behind Do Not Train is that it would be part of a content credential setting that web crawlers would respect, excluding those images from their training sets.
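A rough sketch of how such a Do Not Train preference might be represented, with field names loosely modeled on the C2PA training-and-data-mining assertion but illustrative rather than authoritative:

```python
# Hypothetical sketch of a C2PA-style training/data-mining assertion.
# Labels and field names mimic the shape of the spec's "training-mining"
# assertion; treat them as an assumption, not the normative format.

def build_do_not_train_assertion():
    """Return an assertion dict a creator's tool might embed in a manifest."""
    return {
        "label": "c2pa.training-mining",
        "data": {
            "entries": {
                "c2pa.ai_training": {"use": "notAllowed"},
                "c2pa.ai_generative_training": {"use": "notAllowed"},
                "c2pa.data_mining": {"use": "notAllowed"},
            }
        },
    }

def crawler_may_train(assertion):
    """A well-behaved crawler checks the assertion before ingesting an asset."""
    entries = assertion.get("data", {}).get("entries", {})
    use = entries.get("c2pa.ai_training", {}).get("use", "allowed")
    return use == "allowed"

assertion = build_do_not_train_assertion()
print(crawler_may_train(assertion))  # False: this asset is opted out of training
```

As the interview notes, this is a preference the crawler must choose to honor; nothing in the data itself technically blocks ingestion, which is why adoption at scale matters.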

Is it suggestive, or is it an absolute "you're verboten to train on it," by the way it's set up?

I would say, for implementers, it would be a hard preference to respect. But of course, this requires adoption at scale going forward, which, based on the volume of members across the C2PA and the CAI, we do anticipate would cover the majority of places you go to consume content.

Do you have the sense overall that you are able to keep up with the proliferating ways in which content can be convincingly fabricated?

I hope so, yeah. Certainly we work very closely within Adobe with the teams directly responsible for all of the new Firefly features. Content Credentials have been a core part of developing those features and making sure, again, that we're doing it ethically and with complete transparency.

All right. Well, you are fighting the good fight, Pia Blumenthal, so keep it up. Thanks for being here today.

Thank you so much for having me.

Thanks to all of you for watching and listening. You can watch past episodes of Talking TV on TVNewsCheck.com and on our YouTube channel. We also have an audio version of this podcast available in most of the places where you consume your podcasts. We’re back most Fridays with a new episode. Thanks for watching this one and see you next time.

The post Talking TV: How News Content Authentication Is Battling AI appeared first on TV News Check.

Coalition Begins Work On Creating Algorithms To Detect Fake Videos https://tvnewscheck.com/tech/article/coalition-begins-work-on-creating-algorithms-to-detect-fake-videos/ Wed, 01 Sep 2021 23:30:52 +0000

The Coalition for Content Provenance and Authenticity (C2PA), a Joint Development Foundation project established to scale trust in online content, has released a draft of its content provenance specifications for community review and feedback.

Driven by a commitment to tackle online disinformation, the C2PA’s technical specifications are designed to be an open standard that will allow publishers, creators and consumers to trace the origin and evolution of a piece of media, including images, videos, audio and documents.
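The "origin and evolution" idea can be illustrated as a chain of manifests, each recording an action and pointing back at its parent asset. This is a simplified sketch, with action names loosely modeled on C2PA's actions and ingredients concepts rather than the actual binary manifest format:

```python
# Illustrative provenance chain: each manifest records an action and a
# reference to its parent ("ingredient"), so a validator can walk back
# from the published asset to the original capture. Structure and names
# are hypothetical, not the normative C2PA serialization.

def new_manifest(action, tool, parent=None):
    """Create a minimal manifest entry for one step in an asset's history."""
    return {"action": action, "tool": tool, "parent": parent}

def provenance_chain(manifest):
    """Walk parent links from the newest manifest back to the origin."""
    chain = []
    while manifest is not None:
        chain.append(manifest["action"])
        manifest = manifest["parent"]
    return list(reversed(chain))

captured = new_manifest("c2pa.created", "camera")
edited = new_manifest("c2pa.edited", "photo editor", parent=captured)
published = new_manifest("c2pa.published", "cms", parent=edited)
print(provenance_chain(published))
# ['c2pa.created', 'c2pa.edited', 'c2pa.published']
```

A consumer-facing validator would render this chain as the edit history a reader can inspect before deciding how much to trust the asset.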

“C2PA was established to accelerate progress toward the broad adoption of content provenance standards that will enable a future of verifiable integrity in media,” said Andrew Jenks, C2PA chair. “The release of this draft is an exciting and important milestone, representing a diverse and collaborative effort across industries to protect people from fabricated and manipulated media and drive transparency in the origin and history of content.”

Combating online content fraud at scale requires transparency and an accessible, open approach that enables consumers to make informed decisions about what has been modified and what has not. The C2PA was launched in February 2021 with founding members Adobe, Arm, BBC, Intel, Microsoft and Truepic, with the goal of developing an end-to-end open technical standard to address the rise of disinformation efforts that leverage tools for media fabrication and manipulation. The effort has since expanded, bringing in additional members including Twitter, WITNESS, Akamai and Fastly.

Over the past six months, the C2PA has worked with industry experts and partner organizations, including the Project Origin Alliance and the Content Authenticity Initiative (CAI), to develop a standard for digital provenance that provides platforms with a method to define descriptive metadata, what information is associated with each type of asset, how that information is presented and stored, and how evidence of tampering can be identified.
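The tamper-evidence mechanism mentioned above can be sketched in miniature: a manifest records a cryptographic hash of the asset bytes at signing time, and a validator recomputes the hash to detect modification. This is an assumption-laden simplification; real C2PA manifests use signed hard bindings, not a bare hash:

```python
# Minimal sketch of tamper evidence via content hashing (simplified;
# C2PA's actual binding and signature scheme is more involved).
import hashlib

def make_manifest(asset_bytes: bytes) -> dict:
    """Record a SHA-256 hash of the asset at signing time."""
    return {"alg": "sha256", "hash": hashlib.sha256(asset_bytes).hexdigest()}

def is_untampered(asset_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the asset and compare against the recorded value."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["hash"]

original = b"frame data from a news clip"
manifest = make_manifest(original)
print(is_untampered(original, manifest))         # True: bytes unchanged
print(is_untampered(original + b"!", manifest))  # False: any edit is detected
```

The point of the standard is that even a one-byte change breaks the binding, so "evidence of tampering" reduces to a mechanical check rather than human judgment.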

This group of contributors spans a spectrum of industries including social media, news publishing, software technology, semiconductors and more. All have contributed to building these new technical specifications through a process of gathering requirements, considering scenarios and working through technical design.

Following the review period, the C2PA working groups will finalize the 1.0 version of the technical standards. Once published, the group will pursue adoption, prototyping and communication through coalition members and other external stakeholders, providing the foundation for a system of verifiable provenance on the internet.

“The power of C2PA’s open standard will rely on its broad adoption by producers and custodians of content, which makes this review phase so critical to the development and finalization of the specifications,” Jenks said. “This is why we are making the draft specification available to the public. We encourage rigorous review and feedback across industries, civil society, academia, and the general public to ensure the C2PA standards reflect the complex nature of this effort.”

The draft specification can be accessed through the C2PA website, and comments will be accepted through a web submission form and GitHub until Nov. 30. C2PA is accepting new members. To join, visit https://c2pa.org/membership/.
