Chain Reaction

Blockchain tech is working to combat AI-based deepfakes (w/ Melody Hildebrandt and Mike Blank)

Episode Summary

For this week’s episode, Jacquelyn interviewed Melody Hildebrandt, CTO of Fox Corporation, and Mike Blank, COO at Polygon Labs. Why these two companies? Well, Polygon Labs, the layer-2 blockchain focused on scaling Ethereum, and Fox Corporation, the well-known media conglomerate, joined forces in January to tackle deepfakes as artificial intelligence becomes more prevalent. Fox released Verify, an open source technical protocol for media companies to register content and grant usage rights to AI platforms, while also allowing consumers to verify content through Polygon’s tech. This episode is part of Chain Reaction’s monthly series diving into different topics and themes in crypto. This month we’re focusing on blockchain and AI integrations.

Episode Notes

For this week’s episode, Jacquelyn interviewed Melody Hildebrandt, CTO of Fox Corporation, and Mike Blank, COO at Polygon Labs.

Why these two companies? Well, Polygon Labs, the layer-2 blockchain focused on scaling Ethereum, and Fox Corporation, the well-known media conglomerate, joined forces in January to tackle deepfakes as artificial intelligence becomes more prevalent. 

Fox released Verify, an open source technical protocol for media companies to register content and grant usage rights to AI platforms, while also allowing consumers to verify content through Polygon’s tech. 

This episode is part of Chain Reaction’s monthly series diving into different topics and themes in crypto. This month we’re focusing on blockchain and AI integrations.

They discuss: 

(0:00) Introduction

(1:58) Safeguards against AI-created content

(3:33) Blockchain could protect content creators

(9:13) Improving AI trust and transparency

(12:42) Validating and protecting intellectual property

(16:03) Verifying content authenticity on the blockchain

(25:21) Combating misinformation

(31:25) How blockchain could help large companies verify their content

Episode Transcription

Jacquelyn Melinek  

Hey everyone, it's Jacquelyn Melinek. Welcome to Chain Reaction, a show that unpacks and dives deep into the latest trends, drama and news, breaking things down block by block for the crypto curious. This year we're doing monthly series diving into different topics and themes in crypto. Last month was NFTs, and this month we're focusing on blockchain integrating with AI. We're speaking to a number of founders and startups about how they're looking at the two technologies, the best ways to bring AI on-chain, and the benefits that are yet to come from it. Hope you enjoy.

Today we have two guests on: Melody Hildebrandt, CTO of Fox Corp, and Mike Blank, COO at Polygon Labs. Why these two companies together? Well, Polygon Labs, the layer-2 blockchain focused on scaling Ethereum, and Fox Corp, the well-known media conglomerate, joined forces in January, which is something we'll dive into throughout this episode and link in the show notes. But the TL;DR is that Fox released Verify, an open source technical protocol for media companies to register content and grant usage rights to AI platforms, while also letting consumers verify content through Polygon's technology. All in all, it's basically a way to help tackle the proliferation of deepfakes. And if you're lost with everything I just said, don't worry, we'll get into it shortly. But given the rise of AI and our March series focus on blockchain and AI, we thought this would be a perfect topic to discuss with these guests. With that said, Melody and Mike, welcome to the show.

Thanks for having us. Excited to be here. Thanks. Yeah.

So I briefly touched on the partnership in the beginning and mentioned a few technical things, but I would love to know from both of you: how did this partnership come about? Melody, when did Fox realize that deepfakes and AI were becoming a problem for news organizations? And then Mike, for non-technical users, how can you describe what Polygon's blockchain tech is doing to help solve this? Awesome.

 

Melody Hildebrandt

Yeah, I think the relationship with Polygon actually predates our real tackling of this specific challenge in front of us. We had an initiative a few years ago to explore blockchain as an opportunity for thinking about content distribution and content financing in a fresh way. We were bullish on the underlying technical potential for diversifying a stream of content to a wide range of downstream providers, and Polygon and Fox began to work together on that problem set. I think the moment it really crystallized internally at Fox was around the challenge of deepfakes, and also the challenge of how to think about content licensing in an age of AI. Obviously, there's a big debate right now about how models are trained, the inputs into models, and how content creators are compensated for their contributions to those models. I think that was the moment where the broad zeitgeist came together, and we realized, okay, this is not a future problem, this is a today problem. And so we were happy to already have this relationship with Polygon, where we really understood the technology. And we had this immediate aha moment that we could think about blockchain provenance as a bit of an antidote (maybe that's more extreme than we need), but at least as a grounding for thinking about the inputs-into-models question, and how to think about provenance in media in an AI-generated world.

Jacquelyn Melinek

Got it. Okay. And then Mike, where do you come into this picture?

 

Mike Blank

Sure. Just to build on what Melody was saying, both of our organizations have been at the cutting edge, at the forefront, of thinking about how to use technology to solve interesting challenges across the media and entertainment space, underlying what Melody has described as the opportunity that blockchain provides. Polygon, technically, is building an aggregated blockchain network at scale, at the level of the internet. But what does that mean in this context? Amongst other things, blockchain helps establish the veracity of data, or the authenticity of content, that is proliferating across the internet. And as you described, Jacquelyn, we're seeing today that there's a proliferation of content. Everyone is a content producer today: your kids, your parents, you, literally everyone. And of course, all the brands out there today are producing content at a pace the likes of which we've never seen in history. So blockchain enables this foundational layer; it's technology that enables you to prove the veracity of that content. And so it's a natural fit for the work that Melody and Fox are doing, and for what we're doing, to come together to help provide some comfort and trust in the marketplace about the content that people are consuming.

Jacquelyn Melinek

And since the launch at the beginning of the year, how has it been? What have been some wins for Verify, or challenges that it's still facing?

 

Melody Hildebrandt

We've been really excited about the reception, because it's a market-driven solution to the problem. As we look at this question of how content creators are compensated by large language models, by AI companies, you're seeing these extremes emerge: obviously very public lawsuits that are going to play out in the courts, and then these all-you-can-eat licensing deals, where content creators provide real-time streams of all their content to AI companies, both for training models and for more retrieval-based workflows. And then there's obviously a debate on the Hill, and around legislation, about whether the government should step in and help navigate this environment. What we were trying to put out there with Verify is a market-based solution to say: this is the legitimate access point for our content. We want to play in this space; we're generally AI optimists at Fox. We think there's an incredible opportunity for us to not only survive but thrive in this age of AI. But there are certain guardrails around the content that we need to impose if we want to continue to have investment in journalism, investment in original content, all these primitives that you care about. There need to be guardrails and compensation. The number one thing that's been very validating to us is that we've talked to a large number of publishers, and they want to impose these guardrails on their content. Most publishers do want to participate in this ecosystem, but they don't want to sign away all their crown jewels for a lump sum today. It's a little too early, I think, to be signing away the value of all your content in the age of AI right now, when there's so much still to play out.
So how can you impose some of these technical guardrails that allow you to get ahead of that and still maintain some optionality in the future? I think we've had a great reception from the publisher community, and also from the government and regulatory community, which again would rather see a market-driven solution emerge than require a major, heavy-handed government intervention, which, as we know, will take a long time to play out anyway. There are also real debates about how much you want to in any way impose upon the great innovations that can happen from AI. But this is going to move so fast that you really need solutions today. So I think those are the two big validations that we've found so far.

 

Jacquelyn Melinek  

There are the people that want to get into this to protect their creators, their content, et cetera, like you mentioned. But what will it take to get them on board fully, while still understanding that they can, quote unquote, protect their content, as you mentioned? Do you see a future where more news outlets, media companies, et cetera, will integrate technology like this? Or is it going to be a slower process? The proliferation of AI has already accelerated things on that point, but it might be slow on other fronts.

 

Melody Hildebrandt

I'm pretty bullish that it's going to move fast now, because of the alternative. We view this in a way as a Spotify moment or an Apple Music moment, right? You begin to have the legal way to consume content established, which begins to put in stark relief the alternative methods of scanning the internet and scraping content. And, you know, models had to get to where they are today through whatever means possible. But now that we're all looking at the environment and realizing that the current models really risk undermining all the economics of the internet that publishers rely upon to fund their operations, we need to carve a different way. So every publisher is looking at this and figuring out how they want to participate. Some are saying we want out, or we don't want to participate, but I think that's a minority view. Very few want this all-you-can-eat model either, because they know you're going to get undervalued today if you try to sign a bulk deal for all your content at once, which is so much of what AI companies want to get companies to sign right now. So I think there's going to be pretty fast movement around this. But we do need to begin to coalesce a little bit if we want to seize the moment. I see a bit of urgency around it, because it's beneficial to the AI companies as well, and we've validated this in many discussions: the idea of a legitimate access point with a common format that allows them to ingest from thousands of publishers with one technology, as opposed to doing bespoke integrations with every creator on the planet, which might be technically difficult. So we think there's a real opportunity to coalesce now around that standard, but it's going to take a few big moves from a bunch of publishers.

 

Mike Blank

I would offer that there's a groundswell of demand as well. Both of us believe it's in a brand's best interest to engender more trust in the content it's delivering. But from an end-user perspective, every day there is more and more concern from actual consumers about the content they're consuming, and whether they can trust that content or not. As Melody already suggested, it's better if there's a market-driven alternative here, rather than waiting for governments to do it, so that we can help end users understand that what they're consuming is valid and authentic. Even today we saw news (I know this will probably be old news by the time this goes out to the public) that the Duchess, Princess Kate, presented a picture which was proven to be edited. Now people are questioning whether that content was real, and why she was editing that photo. And that is such a mundane thing, right? It was just a picture of her and her children. But there are very real things happening across the internet today that need to be verified. So as engaged consumers, I think we have a vested interest in pushing brands to participate in this regard, so that we can all feel as though the world we're consuming is a real one.

Jacquelyn Melinek

What else do you two think needs to be done to improve the relationship and trust with audiences as AI continues to grow? Mike, you pointed out the Kate Middleton situation; that was definitely the talk of the town over the weekend into Monday, and we're recording this on Monday. Honestly, this episode will go out on Thursday, so maybe by then people will have moved on. But there's always a situation where an AI-generated thing, or something, is up in the air, where people are a little confused about it, or they're not 100% sure that it's legitimate. How do you see blockchain playing a role in verifying that legitimacy? And then maybe, Melody, you could talk to how Verify also goes that route.

 

Mike Blank

Sure. Yeah, I think there are three things that blockchains can do to help begin to solve this problem, and maybe a fourth, which is more economic in nature. The first, as Melody already described, is data provenance. One of the key benefits of blockchain technology is the ability to provide unparalleled data provenance, and storing data on blockchains can ensure the integrity of that data, increasing the trust and transparency associated with an AI model that leverages large data assets. That's one very important component when we think about the data that's used to train the models that enable AI: how does one know that the data sets themselves were validly created and authentic, and not poisoned in some way? This is one area where blockchains can provide a very interesting solution. The second would be zero-knowledge attestations, which we call ZK. What that effectively means is that Polygon and others have built and invested in breakthroughs in ZK technology that allow users and applications to make privacy-safe attestations that verify data and contributions to data assets used in AI models. Said more simply, imagine an AI model whose output needs to be presented to the public, but where the entity that created the underlying data has a vested interest in preserving its privacy. Think about the Department of Defense, or other organizations with very specific use cases around data that needs to stay private; blockchains can now support that endeavor. And the third would be authenticity verification. Blockchains can help validate the authenticity of images, text, video and other types of media, which heretofore couldn't be done.
That authenticity can be cryptographically verified, to prove that the content is valid in its creation, and that it hasn't been modified, or that it has been modified. Because, again, end users want to know: when that thing was created, was it in its original form, or was it modified at some point thereafter? That's equally important for consumers to understand. So those are three areas that are enabled by blockchain. And the fourth, as I said at the outset, which may be more of an economic benefit, is that blockchain provides digital property rights. When content is created today, how does one know who created it, and whether they should be attributed with its creation? When you put that content on-chain, you can validate that it was created by a certain individual or brand, and then they can be economically rewarded for that activity. So those are the four ways in which blockchains can help support the validity and verification of content as it proliferates across the internet.
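Neither guest walks through implementation details on air, but the authenticity-verification idea Mike describes, re-hashing a piece of content and checking it against a registered record, can be sketched in a few lines. This is an illustrative assumption, not Verify's actual code: a plain dictionary stands in for on-chain storage, and real systems would also involve publisher signatures.

```python
import hashlib

# Stand-in for on-chain storage: in Verify, records live in a smart
# contract on Polygon; here a dict plays that role for illustration.
registry = {}

def register(publisher: str, content: bytes) -> str:
    """Hash the content and record it as published by `publisher`."""
    digest = hashlib.sha256(content).hexdigest()
    registry[digest] = {"publisher": publisher}
    return digest

def verify(content: bytes):
    """Re-hash the content; any modification changes the digest,
    so a lookup miss means the bytes were altered or never registered."""
    return registry.get(hashlib.sha256(content).hexdigest())

article = b"Fox reports: ..."
register("Fox", article)

assert verify(article) == {"publisher": "Fox"}    # untouched: verifiable
assert verify(article + b" [edited]") is None     # any edit breaks the match
```

The key property is that the hash, not the content itself, is what gets stored, so even a one-byte edit is detectable while the chain never needs to hold the full media file.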

 

Jacquelyn Melinek  

How accessible are those four things, though? Let's say it could be anyone from a news organization to someone who's just posting a TikTok; maybe they're an influencer. How accessible is it for them to access these things in order to protect their IP, and so on?

Mike Blank

Well, I think that's a good question for Melody, because this is exactly what Verify is trying to do.

 

Melody Hildebrandt

I think that is right. We approached this from a Fox perspective, but with the hypothesis that it's relevant not only for other news media brands of our size, but also for the really diverse information ecosystem today, from Substack writers to TikTok influencers to basically anyone who is an intermediary in some way between the public and the information environment. One of our core hypotheses is that in an age of AI-generated media, the information space is going to be flooded, and not necessarily always with malicious manipulation. But it's very difficult, and not really fair, to impose upon every consumer that they have to validate every piece of information themselves from first principles. Instead, I think brands will be important. In the age of AI, the importance of brands is going to rise, because consumers are going to rely on those brands to help them navigate this information space. And again, that brand can be a Fox, or it can be a Substack writer; it's a really diverse set. So we put this out as an open source technology; the whole reason we open sourced it was to really think about adoption. And the reason we chose a technology like Polygon was to make it very low cost, so that anyone who's putting information out into that space can take advantage of some of these primitives that Mike discussed, both to protect their content and, I think, to open up very interesting monetization opportunities, which I began to allude to, built on this original ownership: this moment where you can say, I'm signing on-chain that this piece of content originated with me. I think that really unlocks a lot of economic opportunity downstream.
But fundamentally, we're trying to allow a publisher to put out a piece of content so that a consumer can then say: okay, I trust Fox; did this piece of content actually come from Fox, and was it not manipulated from when they published it? Those are the core things we're trying to solve with Verify. You could think about tackling AI manipulation in a few different ways. A lot of people are tackling it from the question of: is this AI-generated, or was this AI-manipulated? The other way to think about it is: was it not tampered with from the moment it was published? That's where we can rely on these cryptographic approaches to say, with very high confidence, okay, this image was signed by Fox on this date. It was signed within what we call the content graph, which is a smart contract on-chain that essentially binds an image to the headline and to the article text. We think that context really matters. So much of the misinformation space right now is actually content taken out of context, leaving aside whether it's AI-generated. In the content graph, Fox publishes a piece of content, and these four images were associated with this article headline and this article text. So now, if a user is on Twitter and sees an image captioned "Fox reports X," they can ask: did Fox actually report that? Is that a real image from Gaza, or from wherever the news is meant to have taken place? You just have to spend some time on Twitter to see what people claim Fox published; these are things we've never said, and there are images being attributed to us inappropriately.
So now we're empowering consumers with the tools to actually ask the question: did Fox say this or not? Before Verify, they really didn't have the ability to ask that question. But now you can go to verify.fox, you can drag and drop an image, and it will tell you: yes, this perfectly matches, this was not tampered with, and by the way, Fox published this article with this image at this time in this context. We did that for Fox, and it was important for us to go public with it first. We signed our first piece of content last August, which was actually what we consider the beginning of the election season: the day of the first Republican debate on Fox. We were thinking about the elections as one of our North Stars. But now a tech writer could do the same thing. It allows any publisher, large or small, to put that line in the sand: this is my piece of content, I stand behind it. And then consumers can say: because I trust that person, I can essentially inherit that trust and trust the underlying asset.
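Melody doesn't spell out the data layout on air, but the content-graph idea, a record binding an image to the headline and article text it was published with, so an image found in the wild can be checked in context, can be sketched roughly like this. The field names and structure here are illustrative assumptions, not Verify's actual on-chain schema.

```python
import hashlib

# Illustrative stand-in for the on-chain content graph: each image hash
# maps to the context (publisher, headline, body) it was published with.
content_graph = {}

def publish(publisher: str, headline: str, body: str, images: list) -> None:
    """Register an article, binding each of its images to its context."""
    for img in images:
        content_graph[hashlib.sha256(img).hexdigest()] = {
            "publisher": publisher,
            "headline": headline,
            "body": body,
        }

def lookup(image: bytes):
    """Given an image seen in the wild, recover the context (if any)
    in which the publisher actually released it."""
    return content_graph.get(hashlib.sha256(image).hexdigest())

photo = b"<image bytes>"
publish("Fox", "Storm hits coast", "Full article text ...", [photo])

ctx = lookup(photo)
assert ctx is not None and ctx["headline"] == "Storm hits coast"
assert lookup(b"<different image>") is None  # unknown or altered image
```

The point of binding image to headline and body, rather than registering the image alone, is exactly the out-of-context problem Melody raises: a lookup returns not just "Fox published this image" but the claim Fox actually attached to it.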

 

Mike Blank

I'll just add one thing. Ultimately, it'll be in a brand's best interest to proactively indicate that it is part of Verify, and to indicate to its consumers that it is a trustworthy producer of content. We've seen this in other areas of the internet, where it's in the interest of a brand, and that brand can be as large as Fox or as small as an individual content creator on TikTok, to say: I am part of this solution, because now you can trust that what I'm doing is actually valid. So through services like Verify, it will become increasingly simple for end users to understand where they're receiving their content from and whether it's actually valid or verified in some way. And it'll be on the brands to proactively publish that, to indicate to their end users that they're part of that same solution.

 

Jacquelyn Melinek  

Right. What has brands' demand for this been? What has the response been like since launching? Are there hesitations? Are people coming and saying, help? Where does it stand?

Melody Hildebrandt

From a publisher perspective, again, we think this solves a consumer problem, which is this verification, but also a business problem, about how we're being compensated for our content in these downstream use cases. And we really think it solves both. I'll be honest: the one that feels more timely is the business problem we began with, which is how large language model providers are compensating publishers for their content, and how they can move from a Napster model of taking content to a verified model of taking content that publishers stand behind. That piece of it feels very pressing to the publisher community. And everyone wants to do something about deepfakes, but there's still a lot of technical debate about what the right approach is. We've obviously put a line in the sand around provenance as the way to approach it. There are some complementary initiatives that go even further upstream than us, starting at the moment of capture: all the way from how a camera takes an image, and then how it goes through Photoshop and beyond, which is all really interesting metadata to include in the protocol. But we've started at this moment where a publisher signs a piece of content, and that means something: it means the publisher is standing behind it, whatever that means to them. For us, it means we have a newsroom, we have units that do research before we put content out. If you're a Substack writer, you have a different process. But when you put out a piece of content, you're standing behind it. And that's a little scary: you put it on chain, and it's tamper-proof. That's one of the fundamental features of the system, right? It can't be modified. It's a little scary for a news organization to put that record out there. But generally, the reception has been quite good.
It has meant, today: I can see how this can solve my business problem. Or: I can actually put this out there and say I own this content, so if it's been taken by another company, here's a record, a provable record, that it originated with me. And that means a lot.

Jacquelyn Melinek

Mike, what has it been like for you?

 

Mike Blank

Yeah, we've seen interest from every facet of content creator today, and I think it even accelerated with the announcement of Verify. As we've talked about, this is large and small brands alike, and when I say small brands, I mean every individual who creates content on the internet is thinking about how that content is attributed to them in a provable way. From the larger brand perspective, it's: how do they demonstrate to the world that this is content they have produced and own, and validate that fact, and, as Melody said, show that it wasn't tampered with? So we see every potential kind of content creator thinking about how to solve this problem, and when they come to us, we redirect them over to Melody to think about how best to integrate with her solution. As she said, there are other upstream solutions, like C2PA, which is a standard for establishing the veracity of content at the hardware level, and which complements what Melody's solution is trying to create. I think we're getting to a place where, whether it's the picture produced by the camera, or the brand creating text, video and audio content and distributing it through whatever channel, those two worlds are coming together to deliver the world we're looking to see, which is one that we can trust. So yes, we see every facet of content creator looking for a solution today, and they're starting to rapidly adopt.

 

Jacquelyn Melinek  

Okay, and on that note, we're going to take a quick break before we get back into it.

 

And now we are back, and I want to go back to the conversation we were having before, but focusing more on the future. Do you both think that articles, photographs, videos, creators, et cetera, are going to become more secure as technologies like this launch and proliferate? Or will the risk of deepfakes and imposter technology also grow at the same rate, and still have the potential to harm consumers?

 

Melody Hildebrandt

I mean, my hypothesis would be that most of the information space is not going to be verified, right? You're going to have a proliferation of content, and AI-generated content is obviously getting extremely good; it's extremely compelling. Average consumers, and even non-average consumers, people who spend time in this space, have a hard time telling what's real and what's not. That's going to get really, really good; it already is really good, and it's just going to get better. And there's going to be this question, just as we see today, of things taken out of context: you take images out of context to advance a certain narrative, and people are out there trying to maliciously push narratives all the time. So those are the headwinds that we as an industry are facing. And what is the countervailing energy that we can put forward? I think it's this sense of content that publishers, and I mean that in the most diverse way possible, stand behind. And then, as we think about some of these aggregators, that's where you begin to see the problem at scale. If aggregators like large language models are consuming verified content, then you can imagine the models being trained on real data that organizations are willing to stand behind, and that's where I think you'll really get the benefits downstream. Whereas if models continue to be trained on just the internet, and the internet is increasingly AI-generated, you can see how that becomes a bit of a death spiral of quality, where you're training on AI-generated material that was itself trained on AI-generated material. You could really see it going in a quite dystopian way.
So I think it's really important for us to put forward this alternative path: content that, again, a wide variety of institutions are standing behind, which might still contain a lot of difference of opinion. Obviously, it's not meant to be the one true source of content; that's not feasible, and it's not really what anyone's asking for here. Rather, in a diverse information ecosystem, how can you allow organizations that are actually willing and proud to put their name behind pieces of content, in the broadest way possible, to put that into the information ecosystem, and then allow AI generation technologies to train upon that content? That's when you really begin to see the scale into the future. And I think the alternatives could be a little bit bleak.

 

Jacquelyn Melinek  

Mike, what are your thoughts on that?

 

Mike Blank

I share the sentiment, viewing this somewhat cynically, in the sense that I think there are a lot of bad actors out there who are trying to subvert truth. We see this happening every single day, all the time, and I think AI enables it to happen at a scale we've never seen. My skepticism, perhaps, is met by what we're seeing and talking about here today, which is the opportunity for the technologies we're discussing to help solve this problem. For now, the vast majority of content that consumers are being bombarded with will likely not be verified, and they're going to have a very difficult time assessing whether that content is something they should rely upon or not. But increasingly, and I think we were talking about this earlier, it's going to be in the vested interest of brands large and small to establish that what they're delivering is actually real, or as real as they can prove it to be. Consumers will demand it, brands will require it, and big tech will be forced into a position where it will have to adopt some standards, because otherwise governments will get more deeply involved in setting those standards for them, and we know big tech doesn't like that. So we have a multi-sided marketplace here of large brands, small brands, consumers, governments globally, and this technology, all coming together to create an environment where we actually can solve this problem at some level of scale. I think we have to be diligent, and I think the technology will continue to improve so that we can scale this to the size of the internet. One of the big challenges from a blockchain perspective is that the amount of content being created is, I think, unfathomable.
It's being generated at a pace which, as we said, we've never seen, and it will continue to be generated even more rapidly as AI becomes more capable; that, too, is happening at a pace we haven't seen before. So we all have to work together to ensure that the technological environment we're creating enables the scale to solve this problem, at a pace at which, again, the end user can effectively consume. From a user perspective, it has to be simple; it has to give people the level of trust they're looking for without requiring them to do the work themselves. And I think that's what Verify is helping to do, as other solutions will as well.

 

Jacquelyn Melinek  

Aside from brands specifically, which areas of content or the internet do you both think are the most susceptible to harm right now and in dire need of a technology like this?

 

Melody Hildebrandt

I think in an election season, people very much have their eyes on the news media. And particularly, given the complex international environment right now, people are trying to navigate what's happening on the ground in these global conflicts. Images coming out of those environments: how have they been validated by news media organizations, or not? That feels like the most timely thing. Here in LA, in Hollywood, there are obviously massive considerations for talent as well: how their name and likeness are being represented, whether there has been consent, and in what circumstances underlying content can be used to create new content. These IP issues are all being navigated in real time, and we're already seeing the manipulation happen at scale. In general, with anything involving talent and content, everyone can see where it's going right now: who has what consent rights, how do takedowns work, how do you adjudicate potential concerns? I think that's a very active area of technical and legal exploration.

 

Mike Blank

I'd say every type of content is susceptible today, and news is at the forefront. There's no question that news is at the forefront. We all rely on the news, whatever we describe as news today; news can come from Fox, and it can come from the armchair journalist, and every piece of content is susceptible to manipulation. So as we enter this presidential cycle, in a global environment where we're all receiving news from everywhere and are bombarded with it, I think that is the primary place to start.

 

Jacquelyn Melinek  

Yeah, the major US elections are coming up in the fall, and obviously we have elections going on right now. In what other ways do you two see blockchain and AI as complementary technologies in combating misinformation, especially during election season?

 

Melody Hildebrandt

Well, one thing we talked about is that by signing something on chain, you can't delete it, right? You put a piece of information out there and you signed it. That has the potential to really establish accountability, which I think is very interesting as you think about a long tail of new brands trying to build reputation. If you're a Substack writer, and you're signing all of these assertions on chain over time, there's really a track record to evaluate you against. You can't just delete the tweets that didn't pan out the way you wanted, or that you didn't do sufficient research on; there is that record. So I think that's another interesting feature of on-chain provenance in this information landscape: it allows emergent brands to actually have a record to be evaluated against.
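The on-chain provenance idea described here can be sketched in miniature. The snippet below is purely illustrative and is not the Verify protocol or Polygon's API; `ProvenanceLog`, its method names, and the `fox.example` publisher are hypothetical stand-ins for an append-only on-chain registry of content hashes:

```python
import hashlib
import time

class ProvenanceLog:
    """Toy append-only registry standing in for an on-chain contract."""

    def __init__(self):
        # Append-only by construction: no delete or update methods exist,
        # mirroring the "you can't delete the tweet" property.
        self._entries = []

    def register(self, publisher: str, content: bytes) -> str:
        """Record who published what, keyed by the content's SHA-256 digest."""
        digest = hashlib.sha256(content).hexdigest()
        self._entries.append({
            "publisher": publisher,
            "digest": digest,
            "timestamp": time.time(),
        })
        return digest

    def verify(self, content: bytes):
        """Return the matching record if this exact content was registered."""
        digest = hashlib.sha256(content).hexdigest()
        for entry in self._entries:
            if entry["digest"] == digest:
                return entry
        return None  # no provenance record: unverified content

log = ProvenanceLog()
article = b"Breaking: example article body"
log.register("fox.example", article)

assert log.verify(article)["publisher"] == "fox.example"
assert log.verify(b"tampered article body") is None
```

Because every record is hash-keyed and nothing can be removed, a publisher's full history of assertions remains available to evaluate, which is the accountability property being described.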

 

Mike Blank

And given that the technology is open source, I think we're going to see some really interesting applications in terms of how it's used. We can't predict, at this point, how far the technology will be taken to solve the kinds of problems we're talking about here today. What Melody just described, this idea of truth on chain, the provability of something on chain that cannot then be manipulated, perhaps will change behavior in and of itself, in ways we haven't yet discovered. As you said, if you can't delete the tweet, maybe you'll be more thoughtful about what you write. It's possible.

 

Jacquelyn Melinek  

You know, I'm not convinced; you two are optimists.

 

Mike Blank

You know, I said I was a cynic and a skeptic, but I'm also optimistic, hopefully, about humanity, that we can all be better. And again, to go back to my skeptical, cynical side: what anonymity has enabled on the internet is that people can say anything, anytime, anywhere, about anyone, in any way. It's almost like we've unfettered individuals from their responsibility to the world, because you can do it behind a veil, a cloak of privacy; people don't know who you are, so you can do anything you want. And there should be a social contract where you can't just say anything you want, at any time, in any way, that's provably untrue. What blockchain will ultimately enable, perhaps, is a resurrection of that social contract, where maybe you can prove that something someone said was real or true, or prove that it wasn't. My hope is that, if that's the case, then those who are trusted by the world will be more thoughtful about what they say and do, because it has such a significant impact on all of us. We are a global community: something said on one side of the world impacts what happens on the other side, and something said in my community, among my small group, can impact others within that group. So I'm hopeful that through technology, and through a resurrection of a social contract delivered through the technology we're building, maybe we can make the world a better place. Ultimately, that's my hope.

 

Jacquelyn Melinek  

I would love it if we could end this on a note from you two: a piece of advice for our listeners regarding the whole conversation we had. What would you tell them to do, whether it's protecting themselves, not posting the worst things online, or something else?

 

Melody Hildebrandt

Well, I've got one self-interested thing, which is that we're open for feedback on our approach and what we've put out there, and we're really hoping to build out the community of builders. We've heard really interesting ideas from developers for applications built on the protocol, from a browser extension superimposing whether we find confirmation on the protocol for images you see online, and so on. Verifymedia.com is where you can find it, and our whole GitHub repo and documentation are out there. So I'd love to hear feedback from the community, or hear from builders who are interested in potentially extending the technology or who want features on it. The other thing would be to experiment with some of these technologies that are out there now. Verify at Fox is one application built on the protocol: you can actually take images you see, check whether they match, and interrogate that content graph. I think those are the kinds of new norms for consumer behavior we might hope to see. And Mike's right: how much of a behavior change are all of us going to have to participate in, and how much is reasonable to expect of a consumer? We're navigating that, and we'd really love to hear feedback. Okay, Mike?

 

Mike Blank

On my side, from a consumer's perspective, my hope is that consumers demand this. If they do, then what we're creating here will become even more prominent in this new technological age we find ourselves in, and we can help solve that problem for them. If they don't demand it, then it's less likely to be solved. And again, it's in our best interest to have big brands participate, so that we can prove the world to be one we understand and trust. So the first piece of advice would be: demand it. Demand from your brands that the content they're delivering to you is something you can rely upon. From a Polygon perspective, I'd say we are always interested in supporting builders in finding novel solutions to interesting problems, and the relationship between AI and blockchain is one that's coming to the fore. There are so many facets to this environment that opportunities abound. So for those builders out there who are looking to solve novel problems in the AI space and believe that blockchain can support that, we would love to hear from you; please reach out to us.

 

Jacquelyn Melinek  

All right, Melody and Mike, thank you so much for coming on; today was an absolute pleasure. We'll be back next week with conversations about what's going on in the wild world of web3 with top players in the crypto ecosystem. You can keep up with us on Spotify, Apple Music, or your favorite pod platform, and subscribe to our companion newsletter, also called Chain Reaction. Links to the newsletter and the stories we talked about can be found in our show notes. And be sure to follow us at @chain_reaction on Twitter. Chain Reaction is hosted by myself, Jacquelyn Melinek, and produced by Maggie Stamets, with assistance from Shaku Kearney and editing by Kell. Bryce Durbin is our illustrator, and Henry Pickavet manages TechCrunch audio products. Thanks for listening in. See you next time.

 

Transcribed by https://otter.ai