Transcripts

Transcript of Conference Call on AI Licensing Markets and Platform Power with Courtney Radsch

May 05, 2026

On May 5, The Capitol Forum hosted a conference call with Dr. Courtney Radsch of the Open Markets Institute to discuss the organization’s new report, “Same Gatekeepers, New Tollbooths,” examining the emerging market for AI content licensing. The full transcript, which has been modified slightly for accuracy, can be found below.

TEDDY DOWNEY: Hello everyone, and welcome. I’m Teddy Downey, Executive Editor here at The Capitol Forum.

Today, I am very pleased to be joined by Courtney Radsch of the Open Markets Institute. She is the author of the recent report, “Same Gatekeepers, New Tollbooths,” which examines the emerging market for AI content licensing and the implications for competition, media markets, and platform power. Courtney, thanks so much for doing this today.

COURTNEY RADSCH: Thanks so much for having me, Teddy. Looking forward to our conversation.

TEDDY DOWNEY: Yeah, I’d love to start off with you just framing for us how you approach this report. Why is this so important to talk about right now?

COURTNEY RADSCH: Sure. So, I will say that we began the research for this report about two years ago when we first got introduced to a couple of new AI startups like TollBit and ProRata that were aiming to create a licensing marketplace for publishers to connect with AI firms. And, of course, we’ve seen that expand to a wider range of intermediaries as well as some of the Big Tech platforms.

We were also following all of the bilateral deals that were happening between large publishers and large content producers like record labels and Reddit, and some of the largest AI firms. And looking at that as someone who comes from an anti-monopoly perspective and an interest in media sustainability, thinking this is not sustainable and this is not scalable, because you cannot do bilateral deals between every content creator, every publisher, and every AI company, which is a much broader set. Practically every company now is an AI company that needs access to data.

So, we started talking to the companies. We started talking, continued talking, to publishers. And we have been holding every couple of months a strategy working group with publishers from around the world, with journalism support groups, and bringing in the founders of these intermediaries to talk about what their business proposition is, what they had to offer publishers, what they saw the market developing as.

And then, of course, this past summer, we saw Cloudflare get into the dynamic by blocking web crawlers by default. We saw continued reporting that lots of Big Tech AI companies were bypassing the voluntary standards designed to give publishers control.

And it culminated in this report, which really is the first comprehensive political economy analysis of the AI content marketplace that has emerged. And it’s focused in on journalism as a particular dynamic that’s particularly important to AI systems, in that it is the fact-based, empirical, timely, reality-grounded foundation for many of these AI systems. And it’s emblematic of crises that are facing content creators and publishers across many industries.

And so, with this report, we were able to identify some of the key dynamics of the market and what copyright, AI licensing deals, startups, all of these dynamics mean for both the content production supply side and the demand side, both Big Tech and small tech. We’re really interested in what this is doing to the emerging market for the AI licensing startups and what that might do more broadly to the AI development market.

So, we also wanted to propose some policy approaches to address the evolving dynamics, particularly as we’ve seen, since ChatGPT launched, an evolution from just needing training data to what’s called grounding, or RAG, Retrieval Augmented Generation, which gives those chatbots and answer engines their more updated information.

And so, lots of new developments, very fast-moving field. The marketplace has been evolving significantly. And so, now with this report, we’ve really tried to give a comprehensive view of what’s happening, how to understand and how to approach this and the types of policies that we need.

TEDDY DOWNEY: I would love to talk about how you see these different intermediaries and their role and how that market is emerging and then get to some of the failures or problems that you see and then the solutions. But walk us through maybe how these marketplaces have emerged, these smaller competitors and increasingly the bigger ones entering this marketplace, sitting in between the AI companies and the publishers.

COURTNEY RADSCH: Sure. So, first of all, the market has three layers. You’ve got the bilateral deal layer, which sits kind of outside of the actual marketplace that’s emerging. Then you’ve got this intermediary startup ecosystem that’s trying to serve publishers. But you’ve also got a long tail of uncompensated local, regional, ethnic, non-English language, and global media that falls entirely outside of the market, although some of the intermediaries are trying to serve it.

And so, there’s a structural problem that some of these companies are trying to address, which is lots of companies, AI firms, need access to data and content. And just to say like not all data they need access to is content. One of the AI startups said that a client was looking for magazine layout formats. So, there’s really all sorts of embedded value that different types of firms might be looking for.

And so, these startups have emerged to sit between supply and demand. So, you’ve got an intermediary ecosystem that includes companies such as TollBit, Sphere AI, ScalePost, ProRata. And one of the things that they’re all doing is providing publishers with the analytics and greater control that they don’t get from bilateral deals or from simply relying on robots.txt.

So, part of what they need to understand as analytics is how are these AI crawlers and AI agent systems trying to access their content? And so, most of these are really aimed at AI crawlers, mostly from a pay-per-crawl or pay-per-access framework. These create a system where AI companies can pay for access to content. Or if they actually use the content, some of the companies will provide attribution so that you can figure out how much your content contributed to a certain output. Others are simply offering access, kind of like a tollbooth, so that before you can even crawl the site to find out if you’re going to use it, you would pay a fee.
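The tollbooth model described here can be sketched as a toy gate. The header name, payment token, and price below are invented for illustration and do not correspond to any particular vendor’s protocol; real pay-per-crawl systems reportedly use HTTP 402 (Payment Required) semantics, but the details differ.

```python
# Toy pay-per-crawl gate: a crawler must present payment before content
# is served; otherwise it is turned away with HTTP 402 Payment Required.
# The "crawler-payment" header and the price are illustrative only.

PRICE_PER_CRAWL_USD = 0.01

def tollbooth(request_headers: dict) -> tuple:
    """Decide whether to serve content or demand payment for this crawl."""
    if request_headers.get("crawler-payment") == "accepted":
        return 200, "<full article text>"
    return 402, f"Payment Required: ${PRICE_PER_CRAWL_USD} per crawl"

# An unpaying crawler is stopped at the gate...
print(tollbooth({}))
# ...while a paying one gets the content.
print(tollbooth({"crawler-payment": "accepted"}))
```

The key design point, as the speaker notes, is that the fee is charged before access, not after use is detected.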

And they’re charging all sorts of different rates. Some are not charging publishers anything, for example, TollBit. Others are charging publishers 50 percent, like ProRata, which has its own answer engine and is aiming to split advertising revenue with publishers, should that get up and running.

Right now what we see is there isn’t a clear divide. We’ve also seen that now Big Tech companies are getting into this intermediary marketplace, which really started with a handful of AI startups, venture-backed, mainly out of Silicon Valley.

So, now we see Cloudflare, which launched a publishing marketplace. Microsoft launched a publisher marketplace, which is really useful for them because, of course, they have a built-in demand side for their own AI systems. We understand Amazon is getting into this space. So, the marketplace is now dominated by these Big Tech players that have an infrastructural role and often have built-in demand.

And in the case of Cloudflare, for example, which intermediates about 20 percent of the entire world’s internet traffic, shifting the default from anyone-can-access to blocking web crawlers without permission has changed the dynamics, and the marketplace is really open right now. A lot of the startups that we spoke with said it’s going to be difficult to create a functioning market if the supply remains freely available and if companies do not respect the voluntary robots.txt instructions, which either allow or disallow crawlers.
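The robots.txt mechanism mentioned here is worth making concrete. A minimal sketch of how a compliant crawler checks the allow/disallow instructions, using Python’s standard-library parser; the crawler names and the publisher’s rules are hypothetical examples:

```python
from urllib import robotparser

# A hypothetical publisher's robots.txt: disallow two AI crawlers,
# keep the site open to everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The standard is voluntary: this check only matters if the crawler
# actually runs it before fetching.
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False: AI crawler blocked
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True: falls under the * rule
```

This is exactly the asymmetry the startups describe: the file expresses the publisher’s wishes, but nothing in the protocol enforces them against a crawler that ignores the check.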

And, of course, you’ve got the perennial elephant in the room, which is Google, who, as we know, is an illegal monopolist in search. And given that it controls about 85 percent of search traffic and is responsible for a lot of referrals, it’s really difficult for publishers to block Google, which means that its AI crawler can also get an advantage. And unfortunately, in the Google Search case, the judge did not decide to see this as part of the same problematic dynamic.

So, I think right now the intermediary layer is providing a useful service, but it’s at risk of getting bought up by Big Tech, or of no longer being able to attract venture capital given the entry of some of these Big Tech firms and, again, their built-in demand and ability to tie that to other parts of their ecosystem.

TEDDY DOWNEY: One thing you mentioned in the report is something that these intermediaries are doing, which is they’re doing a better job of blocking. They’re doing a better job of controlling the content. And so, that makes it so that the AI is less subject to poison data, bad data, less at risk of copyright. That seems pretty interesting. So, they’ve got these benefits they’re providing. But you also sort of look out and see it as pretty unsustainable without some kind of policy intervention.

And so, what are the threats or the problems? You mentioned the high take rates, 30 percent to 50 percent, which is just a reflection of monopoly rents in other monopolized markets, like the app stores or Google’s take rates historically. You mentioned the entrance of Big Tech. You just mentioned Google still having such a dominant role. You talk about data asymmetry and Google still having such a tremendous advantage when it comes to real-time intelligence and data.

But from a big picture standpoint, it did feel like without some kind of intervention, this sort of nascent startup ecosystem is unlikely to persist. And I wanted to test that. Is that kind of your view? Or how do you see this playing out in terms of could this market, theoretically, solve the publisher problems on its own?

COURTNEY RADSCH: I do think that the market is a really important development, specifically for the fourth factor of fair use. Fair use essentially has four factors that determine whether a content user can bypass the restrictions of copyright.

And one of those is the effect on the market, whether the use serves a market replacement function. And I think what this intermediary market shows is that there is very clearly a market. I mean, hundreds if not thousands of publishers—actually, I think I can say thousands, because a Danish collective with 99 percent of the publishers in the country just signed up with ProRata as well. Thousands of publishers are using these platforms to engage in the market, transacting data for payment for these AI systems. So, that’s, I think, the really important development.

But in order to answer your question, I want to just talk about a couple of really critical trends. One is that the entire premise of the web, of the Internet, and even of the platforms it has evolved into, was an exchange: referral traffic in return for the use of bits of content on search or on social media.

So, my search platform is better when there’s factual information on it or when there’s interesting content on it. And in exchange, we’re going to send traffic to you. So, that allowed digital advertising to make money for both sides. And this applies whether you’re talking about journalism, content creators, whoever.

However, that has radically shifted. When Google started, there was a crawl rate of 2 to 1. So, for every two crawls, it sent back one referral. Okay, that’s a pretty good exchange of value. Now it is upwards of 13, maybe 16, to 1. With Anthropic, it’s as high as 73,000 crawls per one referral. Perplexity’s was 369 to 1.
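To put those ratios in perspective, a quick back-of-the-envelope conversion into referrals per hundred thousand crawls. The figures are the estimates quoted in the conversation, not fresh measurements:

```python
# Crawls per single referral, as cited above (speaker's estimates).
ratios = {
    "Google (early)": 2,
    "Google (now)": 16,
    "Perplexity": 369,
    "Anthropic": 73_000,
}

for name, crawls_per_referral in ratios.items():
    referrals_per_100k = 100_000 / crawls_per_referral
    print(f"{name}: ~{referrals_per_100k:,.1f} referrals per 100,000 crawls")
```

On these numbers, early Google returned 50,000 referrals per 100,000 crawls, while at a 73,000-to-1 ratio the same crawl volume yields roughly one referral.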

So, what we’re seeing is that these AI systems, chatbots, answer engines, et cetera, are essentially driving 0.04 percent of external referral traffic to publishers versus even Google’s 85 percent, which was found to be a monopoly. So, this is very problematic because the entire economic basis of the content marketplace has cratered.

TEDDY DOWNEY: And you put a number on that. I think you mentioned $2 billion annually is taken out of the ecosystem.

COURTNEY RADSCH: Exactly. Those are industry estimates about what’s lost when you look at the traffic referrals compared to digital advertising. But there’s a concurrent problem in AI development.

As we’re seeing more and more publishers and high-quality content providers put their content off-limits to web crawlers, which the Data Provenance Initiative has documented, looking at the most important quality URLs and information sources for AI training and development, an increasing percentage is off-limits to crawlers. Concurrently, there is a rise in AI slop. We’ve all seen all the crap out there from economic actors, propaganda, misinformation campaigns, et cetera.

Now we’re in a situation where you have scraped all of this free data, which was copyright protected but publicly accessible because there was originally that exchange of value. And you have the rise of all this crappy content. So, now we’re building our entire information system of AI with low-quality content. And if you want to answer questions or get factual information, you’re going to need to use these RAGs or grounding techniques to go out and retrieve more accurate information or more relevant information.

Well, more and more of that is off-limits. So, this is going to degrade the quality of the AI systems themselves. So, this is actually putting the entire—entire—economic system at risk. And smaller AI companies cannot afford the legal risk of not having rights-protected data. And they need to make sure they have access to quality data because they’re operating at smaller scale. They’re developing bespoke products.

And so, they want to have a marketplace where they can connect to those buyers. They don’t have massive sales teams to go out and do those deals. Even OpenAI only has a couple of people in their partnership department doing those deals with publishers. So, that is not scalable. They need access. These smaller firms cannot afford to offer indemnity to their users the way that Microsoft and Google can because of their trillion dollar valuations.

I think that there will need to be a marketplace, especially in the absence of public policy such as statutory licensing. So, one approach could be that our policymakers, if they could ever get their act together to pass some legislation, decide to pass statutory licensing requirements the way they have in music, for example. We don’t make every musician go out and track which radio stations are playing their songs and who’s performing covers, et cetera. And we have different types of licenses.

So, I think we need to see something like that for this industry. But even then, you still need the marketplace to collect the type of data, the content providers, to enable those transactions. So, I don’t think that the marketplace is going anywhere. But I think the question is, is it going to be cannibalized again by the Big Tech firms? Or are we going to see this kind of innovation happening?

And where are we going to end up with those take rates? There’s a whole range. And when we look at a 30 percent take rate, I mean, again, like you said, these are the levels that have been found to be monopoly rents. Spotify has a 30 percent take rate. Google’s ad tech, which was found illegal, the Apple App Store and Google Play, all of these are at 30 percent.

That’s a lot, especially when we’re talking about this idea of attribution. So, this is about how much of your content is used in an answer? And I think this is where the industry—actually all content industries—really need to think outside of what the tech sector has told them the value of their content is worth.

So, first off, trying to redo the concept of value and exchange according to the rules of referral traffic is not the right approach. There is nowhere near the same level of referral traffic. And that completely misses the point that you could not have built these systems or these products, chatbots, AI companions, search engines, answer engines, whatever, without access to that content. There’s value in that.

Furthermore, what you’re seeing now is this division between valuing the training data, which many see as kind of just we’re screwed because the courts are going to figure it out, and probably it’s not going to go in our direction. And they’re going to argue fair use, and it’s going to be caught up in litigation for years. So, let’s just focus on RAG. Let’s focus on the inference that’s being used to go out and get that more relevant information. But here’s the thing—

TEDDY DOWNEY: Because they need that in an ongoing basis.

COURTNEY RADSCH: Right.

TEDDY DOWNEY: They can’t just steal it forever if you prohibit them from stealing it.

COURTNEY RADSCH: Right. And so, if you remember when ChatGPT first came out, you couldn’t ask it about anything from before 2021. So, then they developed inference and this RAG process where it can essentially go out and get more accurate, relevant, timely information and mix that into the model and give you your result.
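Schematically, the RAG process just described looks something like this. The retrieval here is naive keyword overlap and the model call is a stand-in, so every name is illustrative rather than any real system’s API:

```python
# Toy Retrieval-Augmented Generation loop: retrieve timely documents,
# then fold them into the prompt the underlying trained model sees.

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str, corpus: list) -> str:
    """Assemble the grounded prompt; a real system would pass it to a model."""
    context = "\n".join(retrieve(query, corpus))
    # The retrieved text supplements, but cannot replace, the underlying
    # model -- which is the speaker's point about RAG not standing alone.
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Cloudflare moved to blocking AI crawlers by default.",
    "A recipe for sourdough bread with a long fermentation.",
    "Publishers signed licensing deals with several AI firms.",
]
print(answer("Which publishers blocked AI crawlers?", corpus))
```

The point the sketch makes concrete: the retrieval step fetches fresh content at inference time, but the result is still consumed by the trained model underneath, so both layers draw on publishers’ content.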

Well, from my perspective, you shouldn’t just be thinking about RAG as a standalone, because RAG doesn’t work, inference doesn’t work, without the underlying model. And that’s really important, because it also means that if you think about the statute of limitations on a copyright claim, which is three years, I think you could argue that every time an inference system re-pings an underlying model, it’s another copyright violation. You’re using both the training data and the inference data. So, both of those things need to be valued, and we need to figure out what the value is of the underlying system, not just of that one response.

Also, there’s a whole evolution of AI agents. And what the tech companies are saying is, well, this AI agent is working at the behest of a user. So, we’re not going to respect robots.txt. This is different than our crawling for AI.

TEDDY DOWNEY: That strikes me as a sort of re-envisioning of Section 230 for the AI age. It’s just like, oh, it’s the user prompt. So, we get to do whatever we want, kind of like user-generated content. You can’t enforce any laws against Big Tech when it’s like that.

COURTNEY RADSCH: I mean, I think it’s a little bit different than Section 230 because that’s about liability for the outputs. Although, I think that’s a huge elephant in the room because right now it is not clear whether or not the AI companies will have liability for that. I do think they should so that we develop it in a way that is pro-social.

But I think more importantly is this idea that tech companies are like, no, no, no. That’s a user’s agent, provided by us, where they’re probably paying a subscription fee for it or we’re charging advertising against it and earning money. But hey, that’s a user. We aren’t going to respond to robots.txt and we don’t owe you anything.

Again, we cannot give in to that. These are trillion-dollar companies, OpenAI, Google, Microsoft, with really advanced general-purpose chatbots that are doing all of these things at the behest of their users, just taking everyone else’s content and information, content that others have spent money and built businesses to produce, and returning no value there.

TEDDY DOWNEY: I wanted to stay on this for a second, which is in the absence of any kind of legal framework from Congress or the states, which obviously could still happen. You have some big elections coming up. The states are poking around on AI issues.

But you mentioned over 100 big copyright cases. You mentioned the point that you could theoretically argue that they continue to violate copyright law, not on just the retrieval side of things, but also on the underlying models every time they ping them based on illegal copyrighted content.

You just mentioned this disrespect of terms of service and the protocols put out by the publishers on basically a made-up theory of user-generated bots can violate the law because they’re user-generated or whatever that excuse is.

You also mentioned a number of other issues. And I think all of this is kind of a dance. So, on the one hand, this seems like an opportunity. And you mentioned that the transition to AI actually is an opportunity to get out from under the thumb of the monopolist in the search space and the way that the web has evolved around publishers. We have this temporary dramatic pullback, actually, this $2 billion loss in the ecosystem. But theoretically, this could actually be an opportunity, should these laws push the AI companies in a new direction, to create an ecosystem that actually returns some money to publishers, that actually includes them from a financial standpoint.

How do you think about those laws? You mentioned that the uncertainty and the timing of copyright is just going to be so long. That stands to the benefit of the AI companies. But on the other hand, copyright law has held up over time. There’s hundreds of billions, if not trillions of dollars of liability.

Assuming that the content companies can survive through these copyright decisions, the courts could theoretically establish a pretty robust regime here if the AI companies lose. I’m curious how you think about that in terms of, is it just going to take too long? Or are judges just not going to—they’re just going to throw up their hands like, well, we can’t actually make these people pay for what they stole, even though it’s statutory damages and things like that.

So, I’m just curious to get your thoughts on all that. And then that’s kind of a hypothetical layer. Then maybe we can move into some of the solutions that you discuss more in-depth in the paper.

COURTNEY RADSCH: Yeah, I think with copyright, a few things. So, one is it feels odd to me. My background is in digital rights activism and human rights journalism. And typically, copyright has been seen as a barrier. But what I’ve come to understand by really delving into this topic is that copyright is what made the open internet work as one of the most amazing public resources the world has created.

So, what we’re seeing now is this kind of distortion of the idea of copyright leading to the internet closing off. And I think what is actually happening is we’re seeing the decline and closure of the open internet because of the lawlessness and lawbreaking led by Big Tech companies. So, we need to reframe that, first of all.

I think the courts, a couple of things. One is there is a debate over what is the technical thing that is happening on the back end. Is it actually copying? What is it copying? Is it facts and symbols? Or is it context and semantics? So, there’s that whole debate.

TEDDY DOWNEY: That strikes me as a made-up debate. I mean, they’re copying. Let’s just be honest. They’re copying.

COURTNEY RADSCH: I think when you come down to it, whether you’re copying the word embeddings or whatever.

TEDDY DOWNEY: Every time I ask someone about this, they’re like, oh no, it’s like a human reading something. I’m like, it’s not. It’s fundamentally not like that.

COURTNEY RADSCH: It’s not a human. I mean, that’s a whole other project I’m working on. What happens as we give AI human rights? I mean, that’s a whole other issue.

TEDDY DOWNEY: That’s kind of part of what they’re saying, though.

COURTNEY RADSCH: Right.

TEDDY DOWNEY: They’re trying to give it rights to avoid the copyright.

COURTNEY RADSCH: There is this anthropomorphism of AI kind of throughout the entire thing. And I would say, in addition to saying, well, it’s just like a human learning. No, it’s not just like a human learning. Actually, the companies have spent billions of dollars on chips, energy, talent, land, water. The only thing they think they don’t have to pay for is other people’s content. So, that is really problematic.

So, regardless of the technology. But the fact is that copyright, especially in this country, where, in my opinion, having talked to a lot of copyright lawyers (I taught at UCLA Law, though I’m not a lawyer), the law is fundamentally interpretable. Copyright, as we said in our submission to the Copyright Office, is fundamentally about the public interest: how do you balance different public interests to encourage people to create and to allow innovation and development, et cetera?

We have to balance those two things. But right now, it’s very imbalanced, right? Because the creators are losing on all sides, especially journalism. And that is because a lot of people approach it as, well, journalism is just collecting facts.

First off, many of those facts do not exist before they are observed and reported by journalists. Going to the front lines of Gaza or Ukraine or Minnesota is not cost-free. Those facts, how many bodies are on the ground? What are the conditions? Sure, they’re facts. But they don’t even exist until a news organization or a freelancer puts lives on the line and spends the money to send someone to the front lines. And I am very passionate –

TEDDY DOWNEY: Yeah, you know how much that content creation costs, but you could just look at how much the journalist is paid.

COURTNEY RADSCH: Yeah, I mean, and that doesn’t even begin to cover it. I spent many years at the Committee to Protect Journalists where, I mean, journalists literally put their lives on the line. They go to prison. They need safety gear. Like, there’s a lot of costs beyond even just the salary of a journalist. But yes, so it costs money to create facts.

Furthermore, a lot of these facts, I mean, they are told as parts of stories. There’s a lot of analysis. Journalism is about much more than facts. So, we need to get outside of that idea.

Furthermore, the courts, I think the judges, are really worried about breaking the internet, as we saw with Judge Mehta and his decision in the Google search case, which I’m just shocked that he somehow decided—he probably didn’t want to get involved in the AI innovation debates. And we cannot divorce what’s happening in the courts from what’s happening in the political sphere.

So, not only are the courts being asked to decide these really fundamental questions, but there’s this political push by the Trump administration for AI innovation in our race with China as this existential threat. Of course, they never say what are we innovating towards? It feels like we’re innovating towards making a more complete surveillance state than China. But nonetheless AI innovation.

And they haven’t explained what are we innovating for? Or how do you continue to have the full ecosystem needed for that innovation, right? If you don’t have people out reporting the facts, creating music, writing books, the whole system will collapse. But I think that judges and the courts are very cautious.

Meanwhile, you’ve seen the Trump administration pursue state preemption, trying to prevent states from regulating AI for ten years. I mean, they’re doing everything possible. Trump said, well, I don’t see how you could pay for copyright for every piece of data used, as if he has any concept of how AI or copyright works. There’s just a lot of political pressure. And so, I am not hopeful that the courts are going to decide this one way or the other. And the few court cases we have seen are ambiguous. Some seem to favor fair use. Others seem to favor copyright for publishers.

So, it’s unclear. And remember, this is not just a U.S. problem. This is a global issue, with publishers around the world. And so, that’s why it’s important that we also see lawsuits in Europe, in India. But if we look at Napster, the free music sharing platform, as an example, one reason that we saw music licensing and new platforms come up is that it became too costly and too legally ambiguous to continue that platform outside of a market solution. And you saw public policy, right? So, you saw the requirement for—

TEDDY DOWNEY: But the key was they got justice in the courts on copyright. I mean, the courts shut down Napster.

COURTNEY RADSCH: Yeah, exactly. So, we need a lot more lawsuits, in my opinion. Like, what I don’t understand is why we don’t see way more lawsuits in many more countries. There’s a lawsuit in Canada, for example, against Cohere. But the lawsuits will help reconfigure the balance. And remember, almost no country in the world has as permissive an interpretation of fair use, or fair dealing, as they call it in other countries, as the United States.

TEDDY DOWNEY: Oh, wow. I mean, it’s actually not that friendly. So, it’ll be interesting. I haven’t looked at the other countries. But copyright, obviously, I’ve unfortunately had to sue some people, and there are pretty established statutory damages. So, the damages are significant, I think. And fair use is actually, in many respects, clearly defined, like how you can use it.

COURTNEY RADSCH: Yes.

TEDDY DOWNEY: And it’s not really for commercial purposes. It’s more for like –

COURTNEY RADSCH: Research.

TEDDY DOWNEY: —academic purposes and things like that. And so, I actually think it’s not that lenient of a loophole in that respect. But I’m curious to see the other countries, because I haven’t looked at the other countries as much.

COURTNEY RADSCH: So, we did a review of countries around the world that have pursued news media bargaining codes or have considered those. So, like, Australia, Canada, where they’re trying to balance the platform/publisher relationship. So, we did a study of about 10 different countries.

And what we found is that most of them have some sort of fair dealing provision, which is akin to fair use but typically excludes commercial use. The EU’s text and data mining exception in its Copyright Directive, as carried into the EU AI Act, does not allow for commercial use.

But you cannot separate out what the law says and, again, the geopolitics. Because what is the Trump administration doing? They are going after every country that is perceived as trying to regulate the platforms or the AI companies to intimidate them and try to prevent them from pursuing regulation, whether that is the Digital Services Act and Digital Markets Act in Europe, or whether that is copyright and AI. I mean, it’s really challenging. Because now you’ve got not only the massive economic power of the tech platforms, but that’s now backed up by the massive power of the U.S. government.

TEDDY DOWNEY: And I think, to me, when you say the copyright laws are sort of more generous here, Google got away without ever being found to violate copyright law with the way that they do Google Search, with crawlers and things like that.

COURTNEY RADSCH: And Google Books.

TEDDY DOWNEY: And Google Books. I think that’s, but again, that involved the non-profit, I mean, the noncommercial considerations to some extent. But I think you’re totally right that if you just look at it like, well, Google got away with this last time between some mix of Section 230 and just kind of getting exempted from copyright law, effectively, with the way that they set up the crawlers and things like that. I do think, however, I’m curious to get your thoughts, and then I do want to get to solutions and to some audience questions.

Google, when they won all those cases, was like a darling, right? It was a company that people liked. No one seems to like AI companies. I mean, their approval rating is very low. Data centers are getting banned. Big Tech is losing on Section 230, which used to be blanket immunity. And now they’re actually having to face potential big time damages.

So, I’m curious to get your thoughts if the public perception, and then ultimately the perception of the courts, might be different. You mentioned Meta, but I’m very skeptical that Judge Brinkema will let Google off the hook in the AdTech case. We just saw a jury convict Live Nation.

Is it fair to say that the courts could be going in a different direction here and not bail out AI with some exemptions or some new generous interpretations of fair use? You seem not hopeful. I’m just playing devil’s advocate. I could see it going either way.

And my reasoning is that it seems like they need a change. They need the courts to change copyright in their favor. Whereas, last time, they got the courts to interpret Section 230 under their view, which I think is a little bit of a different exercise, to sort of get the immunity that they needed to behave the way they have. So, I’m curious whether you think there’s any merit to being a little bit more skeptical that the AI companies and Big Tech will win out when it comes to the courts.

COURTNEY RADSCH: So, I do think that public perception is different, and we are much more quickly realizing that we can’t have this complete tech-exceptionalist perspective. And we’re fighting back more quickly than we did in the previous era. But let’s also be clear that when Google and Facebook started, they were not the digital advertising, surveillance-capitalist companies they became.

And remember the Google Books case and the initial Google Snippets case for Search? I mean, it was radically different technology, right? It’s very different to say we’re going to put a low-quality thumbnail and a general hyperlink with, like, one sentence than it is to say we’re going to give this, like, beautifully packaged paragraph with a link and a high-quality photo, which we saw Search do. And now to say, like, okay. Well, we’re just going to take all of this information and mash it up and also get the benefit of having your product in our system, but not return any of that value to you.

I think that there is—personally, if I were a judge, I would say—limited precedential value in the cases that Google and Big Tech love to cite for fair use. A couple of reasons. One, again, this is fundamentally different, and it is definitely violating the first, second, and fourth factors of fair use.

And I think that we have to think about this in terms of the public interest. How are we balancing the interests of creation, use, and innovation? We can’t privilege only this idea of innovation, as if individual creativity weren’t also a form of innovation. So, I think we need to be careful about that.

And I also think that we’re in an era where precedent apparently doesn’t really matter that much, right? We’ve overturned abortion laws. We’ve gotten rid of a lot of the independent agencies and other democratic institutional safeguards because the Supreme Court has been like, ah, precedent, what the heck. Let’s throw it out.

So, I kind of agree with you. I think it could go any direction. But look at what happened at the Copyright Office when their study came out and said that, basically, yeah, this really isn’t fair use to just take everyone’s data. And then, if you recall, the Register of Copyrights was fired. Luckily, the report got put out anyway. And the Trump administration intervened really powerfully. So, I think we have to be realistic that, yes, public perception has shifted. And I think there is more bipartisan concern about the power of Big Tech.

However, there’s also this ongoing discussion around innovation, competition with China, the U.S. needing to be the leader in AI. And there’s this concept that it would just be too hard or too expensive to pay for data, which is ridiculous. Because if AI is somehow going to, like, take over the world, get rid of all our jobs and whatever, certainly they can

TEDDY DOWNEY: They should have some money left over to pay the journalists.

COURTNEY RADSCH: Well, yeah. And I mean, certainly they can figure out how to get attribution. A couple of the startups that we profiled have created attribution technology, where you can trace where something came from, right? When Congress passed the Digital Millennium Copyright Act, it forced the companies to create the technologies, like hash databases, like Content ID, that would enable them to identify copyright-protected content. And they created a notice-and-takedown system. It is imperfect. There are issues. But nonetheless, they created the technology.
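The hash-database idea mentioned here can be sketched in a few lines. This is a toy, not how Content ID actually works: real systems use perceptual (fuzzy) fingerprints that survive edits and re-encoding, while this sketch uses exact SHA-256 hashes, and the registry entries and names are invented for illustration.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Exact content fingerprint; real systems use perceptual hashes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Hash database mapping fingerprints to rights-holders (hypothetical data).
registry: dict[str, str] = {}

def register(work: str, owner: str) -> None:
    """A rights-holder deposits a work's fingerprint in the registry."""
    registry[fingerprint(work)] = owner

def identify(candidate: str):
    """Look up a candidate text; returns the owner, or None if unknown."""
    return registry.get(fingerprint(candidate))

register("Exclusive investigative report ...", "Example Newsroom")
print(identify("Exclusive investigative report ..."))  # Example Newsroom
print(identify("Unrelated text"))                      # None
```

The lookup shape is the point: once fingerprints exist, matching ingested or generated content against a rights database is a dictionary lookup, which is why attribution is a tractable engineering problem rather than an impossibility.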

So, whether we’re going to see the technology come first or the laws come first, I think that’s unclear, but it’s certainly possible. And they have so much money, they can absolutely pay for data. The one thing we have to be careful about is not creating a system that entrenches the power of big AI and makes it hard for small AI to compete.

So, I think we should be looking at thresholds like Europe’s, where they have VLOPs, Very Large Online Platforms, that are subject to a different level of regulation and requirements. And we could pursue a similar idea here in terms of data, content, and licensing, but we have to have some sort of system to make sure that the entire ecosystem functions.

Because if you think about journalism, it really is a keystone species in the information ecosystem and in the AI ecosystem. When a keystone species dies, when it’s killed by—whether you’re talking about the factory putting crap in the water or you’re talking about the theft of everyone’s intellectual property—the entire ecosystem collapses. And we are building a multi-trillion dollar ecosystem on Big Tech’s big AI bet.

TEDDY DOWNEY: I want to get to some listener questions. If you have questions, you can put them in the questions pane. You can email us at editorial@thecapitolforum.com. You can put them in the chat. Get us your questions. We’ve got a couple here.

What comes next from this report? Does Congress need to legislate on it? Can state legislatures take action? You have a couple of recommendations in the report. Maybe you could talk about those and talk about what can Congress and state legislatures do?

COURTNEY RADSCH: Sure. So, the copyright issue and statutory licensing frameworks really do need to be set by Congress. Copyright is a federal law and it does preempt state law.

So, we do need to have some sort of licensing framework and enforcement of copyright at the congressional level. And we have seen, again, bipartisan support from everyone from Klobuchar to Blackburn to Hawley to Wyden. There’s a lot of agreement that this is not sustainable and that local news plays a really important role in their communities. So, we should be enforcing copyright. So, we need that at the federal level for sure.

At the state level, I think we could see things like enabling collective bargaining and sectoral bargaining. So, for example, alleviating antitrust concerns that would prevent publishers from creating collectives. And there were attempts to do this in the Journalism Competition and Preservation Act and some other state laws like in California and Oregon. I think that would be helpful. Because, of course, most publishers don’t have the staff expertise or time to engage in individual level bargaining.

We need to see transparency requirements. Ideally, we would see transparency at the federal level so that you don’t end up with a patchwork across the states, but you need transparency on a few different dimensions. One is transparency on training data sources; transparency on inference data sources and attribution; and transparency on web crawlers and web-agent requests. What is your web crawler? Who is it working for? What is it doing? It’s kind of like know-your-bot legislation, so that publishers and others can figure out what they want to do with that.
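On the crawler side, the existing robots.txt protocol already lets publishers state per-bot permissions, and Python’s standard library can check them. A minimal sketch, assuming a hypothetical publisher’s robots.txt: GPTBot and CCBot are real crawler user-agents, but the site, paths, and policy here are invented for illustration.

```python
from urllib import robotparser

# Hypothetical robots.txt a publisher might serve: block GPTBot entirely,
# keep CCBot out of paywalled content, allow everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /premium/

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A crawler honoring the protocol checks before fetching:
print(rp.can_fetch("GPTBot", "https://example-news.com/article"))   # False
print(rp.can_fetch("CCBot", "https://example-news.com/premium/x"))  # False
print(rp.can_fetch("CCBot", "https://example-news.com/article"))    # True
```

The catch, which the transparency point addresses, is that robots.txt is purely voluntary: an unidentified or mislabeled bot simply skips the check, which is why disclosure of crawler identity and purpose matters.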

And I think that we need to see explicit inclusion requirements for quality information. If you’re putting out a product in almost any other domain, if you’re a food producer, you have a nutrition label that tells you what went into this. We could do the same thing for AI.

And as a side note, I would say it would be great to understand the carbon footprint. That’s a whole other issue. But this idea of a nutrition label is about combining transparency with improving—or rather reducing—the information asymmetries between the technology companies, the AI companies, which have all of the information, and the content creators and publishers, who have very limited information. And I think we could see legislation, for example, prohibiting third-party resellers that, again, try to bypass what is essentially your terms of service, whether that’s expressed in prose or robots.txt or some sort of forthcoming standard. When the industry creates a voluntary standard, legislation can require compliance with it.
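The nutrition-label analogy could translate into a machine-readable disclosure. A sketch of what such a label might contain, under the loud caveat that no standard schema exists: every field name and value below is invented for illustration only.

```python
import json

# Hypothetical "nutrition label" for an AI model, in the spirit of the
# food-label analogy: declared data sources, crawler identity, and
# whether attribution is supported. Entirely invented schema and values.
label = {
    "model": "example-model-v1",
    "training_data_sources": [
        {"source": "licensed news archive", "share_pct": 12},
        {"source": "public web crawl", "share_pct": 70},
        {"source": "synthetic data", "share_pct": 18},
    ],
    "crawler_user_agent": "ExampleBot/1.0",
    "attribution_supported": True,
}

print(json.dumps(label, indent=2))
```

Even a disclosure this coarse would let publishers see whether, and at what share, their content category feeds a given model, which is exactly the information asymmetry described above.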

TEDDY DOWNEY: I want to quickly follow up here. Is it really possible to create this market without first breaking up at least some of Big Tech’s vertical power? You mentioned in the report that Google has all these different ways to sort of coerce publishers. Obviously, they’re doing AI answers when you go on search, and that’s pushing out that referral money, as you mentioned. But they just have so much power throughout the supply chain, so much vertical power to push around the publishers. Aren’t we just inevitably kind of going to end up in the same place we got with Google Search if you don’t break up Google or at least prohibit these companies from being both the cloud provider and the AI provider and the intermediary?

We saw what happened with Google AdTech. You seem concerned that that will recreate itself here. Isn’t that just like kind of a necessary prerequisite before you even start talking about how a market could function? Or can you get at some of that vertical power through sort of specific rules and things like that?

COURTNEY RADSCH: I don’t think we can wait. I think we have to be able to do several things simultaneously. And we should absolutely be, I think, breaking up these big companies, but definitely creating restraints on this vertical integration.

So, for example, we’ve done several reports about public interest AI governance. That would include governing the cloud as a public utility and limiting the ability to use data across different services. I think that we have to prohibit exclusive arrangements involving, again, VLOPs for sure, but potentially any firm above a certain size, to enable the market to keep working. And I think that we have to reduce the data asymmetries.

But for sure, right now, they are sitting on the supply and the demand side. And so not only that, but they’re worth so much money. They’re spending more money lobbying in Europe, in Washington, in Brazil, in South Africa, than any other company in the world, any other sector. So, how do you possibly compete with that? I mean, we are seeing some fight back. I really love what Brazil is doing. They’re frankly a lot more ambitious, I think, than we are here in the U.S.

But I think that we have to try to break up the power that those companies have as these tech behemoths, prevent the usage of these tying and bundling between different parts of their vertically integrated tech stacks, and pass the type of enabling legislation that will enable the market, the nascent market, to continue to thrive and grow.

TEDDY DOWNEY: We’ve got another question here. What is your view on the impact of the court cases going on in Europe, GEMA versus OpenAI, for example?

COURTNEY RADSCH: I think they’re really important. As I said before, it is less permissive in Europe, and there is an actual text and data mining provision in the Copyright Directive. It’s spelled out. So, it’s very clear that those exceptions do not cover commercial uses. And the fact that, like, OpenAI was a nonprofit and then somehow just turned itself into, like, what is an $80 billion, or trillion-dollar, for-profit. Like, this is crazy, right?

So, I think that these cases in Europe will be really important. We saw a case in France, for example, where the judge had said Google can’t continue to use some publisher information. They did continue using it. And then they actually imposed meaningful fines on it. I think that we’ve seen the same in Denmark.

So, I do have hope that Europe can—we’ll see some developments there. It’s an important market for these companies. It’s unrealistic to think that, like, oh, they’re just going to pull out of the EU. Like, no. But I think it’s really important that the EU doesn’t get scared and back off, as we are seeing signs it might because of the threats from the Trump administration, whether it’s tariffs or restricting visas or all of these really, like, crazy geopolitics that are entering into what should be independent legal and regulatory processes.

So, definitely keeping an eye out for them. But I also don’t think that the U.S. legal system looks much to Europe as it’s making its own interpretations. So, I’m not sure how much weight that will have. And I think that, again, when you have the wealthiest companies in the world, wealthier than most countries in the European Union, that means that they can turn lawbreaking into a competitive advantage and not necessarily need to comply. So, we’ve seen that, for example, with the fine on Twitter out of the EU. So, I’m not sure. We’ll see.

TEDDY DOWNEY: So, well, this is—I’m guessing I know your answer. There’s a follow-up question here. Within the geopolitical context and pressure from the Trump administration versus the EC, will competition law enforcement help the market in the field?

I want to push back against you a little bit in that, like, I actually think it’s, like, necessary that these companies get access to the European market. It’s a lot of money. I actually think that law enforcement there could have a much more dramatic impact if they actually enforce the law. They seem to always fall back on fines and things like that, as opposed to really curbing the conduct.

COURTNEY RADSCH: Right.

TEDDY DOWNEY: But you’ve seen Big Tech adjust their conduct to some extent. So, I guess this question gets at is it worth going at the AI companies if you’re taking heat from the Trump administration? To my mind, the answer is probably, on balance, yes, it’s worth it. Because you’re going to lose your publishing industry one way or another, potentially. And you might at least try to save it, as opposed to just throwing your hands up and try to appease Trump for a few years.

COURTNEY RADSCH: I mean, 100 percent. And it’s not just about the publishing industry or the creative industry. I mean, it has huge economic repercussions for domestic markets. And they’re putting their sovereignty at risk, right? Because these companies, to your point, are so vertically integrated, and they have enormous power. And so even if they want to, say, enforce copyright and create behavioral changes, we know that behavioral changes are important but insufficient. We’d have to have the structural interventions. But I do think that those are, almost most likely, going to have to come from the U.S. They’re going to be really hard to enforce in Europe.

That said, this is an existential issue for Europe. It has massive implications for digital sovereignty, for their ability to regulate their own markets. But also, the European governments should use their own power of procurement to support their local industries. We’ve seen the UK—we’ve submitted several papers into their consultations over this issue and other competition market issues—and they keep signing contracts with hyperscalers for cloud services and appointing former U.S. tech execs—I mean, they’re European, but they worked for the Big Tech firms—to head up their Competition and Markets Authority.

So, I think Europe has to stand up for itself. And I think that we need to, like what Mark Carney said, see the middle powers band together. Because this is about, yes, the future of journalism, the future of domestic publishing, the future of—I mean, let’s also remember that journalism is an essential component of democracy. This is about the future of entire economic sectors, creative sectors. I mean, the U.K. is one of the world’s leading cultural producers. What’s going to happen to that when AI companies just continue to steal for free and create competing products?

So, this is the entire system at risk, and the power, the more power, that these Big Tech firms have, especially in alliance with an increasingly fascist regime in the United States, is putting democracy at risk in Europe. It’s putting sovereignty at risk in Europe and elsewhere. And so, this is really more existential than just the content licensing marketplace. And that’s why we need to look at the specifics but also understand the broader repercussions.

TEDDY DOWNEY: I want to stay on this for one more second, then we’ll let you go. We’ve kept you long enough. But one thing that I am always questioning is how much the Trump administration is really willing to go to bat for what they say. Oh, we’re going to raise your tariffs if you don’t get rid of these regulations.

There have been these laws on the books in Australia and Canada to pay journalism and media companies out of the pockets of tech platforms. I mean, these countries will get punished, but for reasons unrelated to that, right? You run a bad ad that Trump doesn’t like. Or you don’t come to his rescue in the Strait of Hormuz quickly enough. So, I question whether the Trump administration’s conduct is in any way tied to this. Because David Sacks’ influence in the White House is big, but it seems fairly empty when it comes to the USTR just putting out press releases or what have you. And then when you see the real result in trade policy, it’s for other reasons, national security, et cetera.

So, I see it as a bit of empty calories in terms of how they’re threatening Europe and Canada around tech policy. But it is a show of force from the lobbying sector, and of how much money and influence they have, just to get those press releases out. I just question the level of commitment from Trump. And then you also have Democrats expected to pick up a lot of power in November. Does any of this offset your concern that Europe would be punished if it goes along with more enforcement in this area?

COURTNEY RADSCH: No. So, I was in Brussels last week, and I think what we see is that the mere threat of retaliation, especially on trade, has resulted in some really negative impacts on legislation there. So, reopening the GDPR, the General Data Protection Regulation; looking at whether the Digital Services Act should be revised; hedging on whether the text and data mining exception should be applied.

So, I think that it is having an impact regardless of whether or not it’s actually resulted in a trade impact. But I would say that there have been real impacts. I mean, there are several people who were prevented from entering the United States because they worked on the Digital Services Act. I mean, these are real impacts. It’s having a massive chilling effect on regulation development and enforcement, on the types of research and speaking that people will do.

I mean, when I speak in Europe about the importance of resisting Big Tech power and the dangers it poses as it’s being fused with the U.S. government under the Trump administration, you know, people are not willing to speak out. And I think the impact is lasting enough that, even if you have a change in some parts of government, in Congress, for example, in a couple of months or whatever, I don’t think that’s going to change the dynamic.

And it’s not like Europe is suddenly going to be like, okay. Well, we’re going to stop this review, right? There’s all of these things that are already underway. And you do have real concerns in Europe about domestic innovation, about domestic competitiveness, which are real, but are, in my opinion, being distorted by the power of Big Tech and the inability to imagine an alternative, and the difficulty of actually creating alternatives when the market is so imbalanced.

TEDDY DOWNEY: Yeah. I guess I kind of see it more as they’re letting a weak position bully them into a lot of behavior they don’t otherwise need to be doing.

COURTNEY RADSCH: Yeah.

TEDDY DOWNEY: Because it’s not helping them avoid tariffs. It’s not helping them from a trade perspective.

COURTNEY RADSCH: Exactly, exactly.

TEDDY DOWNEY: It’s sort of just like weak-kneed stuff.

COURTNEY RADSCH: Yeah, and there are alternatives with smaller language models. I mean, also, the American approach is very environmentally unsustainable. And Europe and many other countries do care a lot more about the environmental impacts. I mean, lots of organizations in Europe won’t let their staff fly to certain places because of the carbon footprint.

So, I do think that if we can bring that more into the conversation, there is lots of good science and development showing that scale is not the end-all be-all of AI development. So, the more that we can see alternatives emerge, the more we can see, hopefully, competition, not just on size and scale, but also on quality and on alternative models.

TEDDY DOWNEY: Yeah. I love this paper. I love this conversation. We got into how the market’s working, problems in the market, where the market’s going, potential policies down the road. I’m super excited to follow this going forward. Please go read the paper. It’s excellent. Courtney, thank you so much for doing this today.

COURTNEY RADSCH: Thanks. Teddy, appreciate it.

TEDDY DOWNEY: And thanks to everyone for joining the call today. This concludes the call. Bye-bye.