Transcripts

Transcript of Conference Call on How Courts Interpret Copyright in the Age of AI

Aug 13, 2025

On July 28, The Capitol Forum held a conference call with Keith Kupferschmid, CEO of the Copyright Alliance, for a conversation about artificial intelligence, copyright, and what court rulings may mean for artists, platforms, and the broader creative economy in the age of generative AI. The full transcript, which has been modified slightly for accuracy, can be found below.

TEDDY DOWNEY: Good afternoon, everyone. And welcome to our Conference Call on “How Courts Interpret Copyright in the Age of AI.” I’m Teddy Downey, Executive Editor here at The Capitol Forum.

And we’re joined by Keith Kupferschmid, CEO of the Copyright Alliance. He’ll be talking with us about what court rulings may mean for artists, platforms, and the broader creative economy in the age of generative AI. And Keith, thank you so much for doing this today.

KEITH KUPFERSCHMID: Yes, thank you for inviting me. I’m looking forward to it.

TEDDY DOWNEY: And before we get started, quick reminder. If you have questions during the call, please type them into the questions pane in the GoToWebinar control panel. We’ll collect them and address them after the interview section of the call.

And so, Keith, I’m so excited to do this because I love nerding out on copyright stuff. You are, as far as I can tell, probably the biggest expert that I’ve ever had a chance to talk to. So, I’m super excited about this.

KEITH KUPFERSCHMID: You’re being nice. You were going to call me the King Nerd, I think. King of all copyright nerds. I’m fine with that. I’m fine with that.

TEDDY DOWNEY: And maybe we could start off, if you could tell us a little bit about the Copyright Alliance and your members. That might be a good place to start.

KEITH KUPFERSCHMID: Sure. And as the name indicates, we really just focus on one issue, copyright. And there’s a lot there. There’s a lot for us to do on copyright, especially these days with artificial intelligence and generative AI. And we’ll talk a lot about that.

What we do is part educational, part advocacy. We represent 15,000 different organizations across the spectrum of copyright disciplines. So, when you think of copyright, you usually think of movies, music, books, video games, software, things like that. And we represent all those types of companies and trade associations.

But we also represent a whole bunch that maybe you don’t think about. The technology companies like an Adobe or an Oracle, the sports leagues like the NBA and the UFC. What about the National Association of Realtors? They have their MLS database that’s protected. We represent some model code providers, folks like that.

And we also represent approximately two million individual creators. These are the authors and the artists, the songwriters and the software coders, performers and the photographers, and so on and so forth. And what we do is advocate on their behalf before the courts, before Congress, the Copyright Office, the executive branch, you name it, to make sure that policymakers and lawmakers understand the importance and the value of copyright.

TEDDY DOWNEY: And one quick question on that audience. What happens if there’s a conflict between one of the big companies or one of the big industries and the individual users? How do you deal with that?

KEITH KUPFERSCHMID: So, typically, there is not. Typically, our members are all on the same page. But as you pointed out, sometimes we will run into an issue where one member doesn’t agree with another member. And that’s usually where we are representing, let’s say, individual creators and big companies on something like moral rights.

And what we do is try to find a common ground that we can all agree to. And if at the end of the day, both sides don’t think we’re saying anything all that useful or effective, then we just stand down and let them take the ball and run with it. But that happens very, very rarely. In most instances, we are able to all be on the same page.

TEDDY DOWNEY: And maybe a good place to start is we have different levels of sophistication here in terms of our audience. If you can just talk to us about Congress, how they establish copyright protection, how courts determine whether or not something’s a violation, and maybe a little bit of the story about how that’s evolved over time and where we are now. And then we can kind of get into all the crazy stuff about AI that’s going on.

KEITH KUPFERSCHMID: Yeah, that’s good. It kind of sets a foundation for those who may not be aware. A lot of people aren’t even aware that copyright law has its roots in the actual U.S. Constitution. Specifically, Article 1, Section 8, Clause 8 of the Constitution grants Congress the authority to make intellectual property laws, specifically copyright and patent laws. In this case, we’re talking about copyright law.

And so, that’s what Congress has done. And over the years, Congress has changed the copyright law to adapt to different technologies and new technologies. But at the heart of copyright law are the rights that are provided to copyright owners. And there are five rights: the right to reproduce or make a copy of a work; the right to prepare a derivative work, that means an adaptation or a modification of your work; the right to perform the work publicly; the right to display the work; and the right to distribute the work.

Those are the rights that every copyright owner gets. An infringement occurs when one of those rights is violated, that is, when somebody exercises one of those rights without authority or permission to do it, unless one of three things is true. One, there’s a defense that excuses the infringement, like fair use, which I know we’ll talk about a lot here. Two, the work is not protected by copyright. Maybe it’s in the public domain. Maybe its term has expired. Or three, the person has authority to use the work. They, let’s say, have a license from the copyright owner. And I think we’ll probably talk a little bit about that as well.

And so, that’s when infringement occurs. And we’re going to talk a lot about artificial intelligence, and those issues come to a head in all these artificial intelligence cases.

TEDDY DOWNEY: And I want to get into the courts and where we are with the courts, because there’s some big decisions going through the courts right now. But before that, really quickly, the White House recently came out with this AI statement. I noticed there was nothing on copyright in there, interestingly, from the White House. A lot of people were expecting them to come out hard against copyright protections, or there’s some concern about that.

Then the President came out with this weird, rambling statement, basically suggesting he thought AI was actually people, and deserved the same rights as people in terms of fair use and derivative works. It was odd. And then White House sources came out and said, hey, actually, no, we want this issue to play out in the courts. What did you make of the whole White House executive order on AI as far as copyright is concerned?

KEITH KUPFERSCHMID: It didn’t come as too big a surprise to us that there was nothing in the report specifically about copyright. As you point out — and as I think several speakers from the government have pointed out since that day — this is going to play out in the courts. Whether AI companies need to license or not, whether it’s fair use of these copyrighted works that they use for training, is going to be decided by the courts. And they’ve already started to decide some of these cases.

So, you’re right. There was nothing in the AI action plan that directly spoke to copyright. And then the President later that day made some remarks about copyright that frankly were a little unclear. One of the things he said was that you can’t expect AI companies to license everything, right? It’s just not possible. We don’t disagree with that. That’s why we have the fair use doctrine to figure out what needs to be licensed and what doesn’t need to be licensed.

Ultimately, I think we’re on the same page there. But there are other things that the President said certainly that are a little bit more concerning. But ultimately, as you point out, this is going to be decided by the courts.

TEDDY DOWNEY: And let’s get into some of the recent court cases. It seems like it’s been about 50-50 lately with some really good cases for copyright holders, some, I’d say, troubling to deeply troubling cases. Maybe a little bit leaning towards the copyright holder in terms of just an optimistic way that things are moving through the courts, respecting copyright law at sort of a fundamental level. But I would love to get your sort of overall assessment of how things are going and then kind of dig into each one of these key cases.

KEITH KUPFERSCHMID: Yeah, I appreciate that. And remember, we were talking about me being a copyright nerd. If I get too nerdy, just jump in and totally ask me to explain something.

TEDDY DOWNEY: I love it. I love it. I’m going to enjoy it.

KEITH KUPFERSCHMID: All right. So, for those who have seen it, I’ll start by talking about two cases that were decided in the Northern District of California, about a month ago at this point. And the two cases were the Bartz v. Anthropic case, decided by Judge Alsup; and the other one was Kadrey v. Meta that came out two days later. And so, you have two cases, decided by the same court, Northern District of California, two days apart.

The decisions in these cases could not have been more different. I think it’s got a lot of people scratching their heads, especially AI companies scratching their heads. It has created a certain level of chaos. Now, if you saw the headlines, the news stories right after those cases came out, it basically says Meta wins, Anthropic wins. As I’ll go through in a second here, it’s really not that simple. It’s much, much more convoluted and nuanced, if you will. And let me explain why.

So, these cases, like I said, the same court, different judges, dealing with the same issues of AI training, came to very, very different decisions.

One, for instance, in the Bartz case, talked about comparing human learning to AI training. And in the Bartz case, Judge Alsup says it’s identical. The way that humans learn and the way that AI training is done, it’s the same thing. And therefore, it should be fair use. He used that throughout his fair use analysis.

In the Kadrey case, Judge Chhabria, just two days later, thought that was insane. He actually calls it inapt, I think is what he says. He says human learning and AI training are nothing like one another. And he goes into a very long discussion of why that’s the case. And basically, in essence, taking Judge Alsup in the Bartz case out to the woodshed, if you will.

Another way they differ is their handling of the use of pirated works. In both cases, Anthropic and Meta accessed pirated databases and used those pirated works to train their AI. And in the Bartz case, the judge said, no, no, no, no, no, no. That’s not right, you should be liable for that. And we’ll talk about that in a little bit more detail in a second. In the Kadrey case, the judge seems to say it’s okay. It’s okay that you copied these illicit databases to use pirated works.

Another area where they differed was the consideration of output under the fourth fair use factor. In the Bartz case, the judge says, there’s no direct one-to-one substitution of the works. They haven’t proven that there’s any output that’s identical to the works that we’re trained on, so that’s okay.

In the Kadrey case, the judge said the exact opposite. You get a sense here that, because of the scale of what’s going on, there could be market dilution. Even though the works are not identical, in other words, the output and the inputs are not identical, the fact is that, by the sheer scale of AI, you can put all of these works out of business.

Another way they differed is in their impact. Meta involved only 13 plaintiffs at the end of the day. The Anthropic case is a class action suit, so there are thousands of plaintiffs involved in this case.

And then lastly, liability. Huge, huge difference here. Meta was held to be not liable for training on any of the copyrighted works. So, in terms of those 13 plaintiffs, they really are out of luck. But the court said that’s largely based on bad lawyering, the fact that the judge didn’t have the evidence he needed in his hand to make a decision in favor of the plaintiffs here.

But what the judge did do was give a roadmap for all future plaintiffs on how to win future cases. And so, these 13 plaintiffs are out of luck, but future plaintiffs are not. And we’re actually seeing this play out with a new case that was just recently filed that followed Judge Chhabria’s instructions in the Meta case.

Now in the Anthropic case, the judge did hold that for certain types of training, you’re not liable, but in other types, you are. Specifically, the training on pirated works. You’re going to be liable for that. And this is a class action. So, at the end of the day, Anthropic could be liable for millions or billions of dollars in liability.

So, looking at those headlines again, it’s not so clear that Anthropic and Meta won. Frankly, in any case where the defendant might owe billions of dollars, I don’t see how that can be construed as a win. I think at best, it’s a Pyrrhic victory for both Meta and Anthropic.

TEDDY DOWNEY: I think that gets to a little bit of how I’m reading it as well. The way I see it is, look, these companies are violating copyright in so many different ways that you only need to trip up one way with a judge to have lots of liability. But if you look at the flip side—and maybe I think we should stay on these cases because I think these are the two best for AI, Big Tech.

So, I think it makes sense to stay on them for a second, and then we’ll get to the cases that are worse for them. But if you’re thinking about, oh, well, there are five ways that they could get tripped up, they definitely open the door. Similarly, on the flip side, they sort of open the door to a path to doing it without paying for licenses. And I think that, to me, is a real problem. We haven’t talked yet about, well, what did these judges say about—I think there’s two things.

One is like, what were the judges pointing to allow for fair use here? Because when I looked at it, it seemed like they were fudging the lines around the sort of, well, you’re going to be stricter on copyright if it’s a for-profit entity versus a nonprofit. There was just a lot of the blurring of the lines to sort of excuse this kind of conduct and sort of rhetorically say, hey, well, this is a transformative technology. I wanted to get your sense of (1) was there a blurring of lines, to your estimation, around sort of being a little bit more skeptical because these are for profit entities? And then also the difference between getting into the mechanics of the law versus sort of rhetoric carrying the day to at least create a path for these companies to not license the material?

KEITH KUPFERSCHMID: That’s a terrific question because there were a lot of mistakes made by these judges, especially in the Bartz case. And so, for those who may be unfamiliar with the fair use test, let’s take a step back. There are four fair use factors. And the first and fourth factor are the most important. With the first one, you look at the purpose and the character of the use. And when you’re doing that, as Teddy mentioned, you look at commerciality and you look at whether the use is a transformative use or not. And you look at whether the use was justified.

Unfortunately, in these cases, the judge didn’t consider these. At least Judge Alsup didn’t. The judge was so overwhelmed with the technology, calling it — let me see if I can find out what he called it. But he was so sort of tickled with the technology that he just said, yes, of course, it’s transformative. It’s the most transformative thing we’ll see in our lifetime, or something like that. But you know what? There’s an actual legal standard you have to apply here. And the judge did not apply that standard. That is a huge problem.

The judge also didn’t evaluate whether this is a use of a commercial nature either. What you’re supposed to do is look at how transformative the use is, compare that to how commercial the use is. And then balance them against one another. But if the work is highly transformative, then the fact that it’s commercial isn’t going to play as significant of a role.

And in both cases, in the Bartz case and the Kadrey case, neither one of the judges looked at the commercial aspect here and how that affects the case. At least Judge Chhabria did look at the transformative nature and did apply that test. But in the Bartz case, the judge did not do that at all. He just simply said, “That is exceedingly transformative. This is among the most transformative many of us will see in our lifetime.”

Now, that’s based on the judge’s opinion. I do not disagree with it being a very transformative technology. And it’s a very interesting technology and something that is just mind-boggling, quite honestly. But there is actually a legal standard that you have to apply here.

And the judge’s opinion was completely unmoored from that legal standard. And so, that’s a huge problem. The legal standard is whether the use supplants the original or instead adds something new with a further purpose or different character. And you just won’t find that test being applied in the Bartz case at all. And so, that was a huge problem.

TEDDY DOWNEY: I’m obsessed with that tension on commerciality. And so, I was trying to figure out, how did they justify this? I remember looking up one of the cases they referenced, about university library filing software, right? Just an obscure, nonprofit case with sort of an expansive reading of fair use. I was expecting them to point to Google Books and a whole host of things. But no, they went to this obscure case that had seemingly no connection, because the level of commerciality is so different. It was completely opposite. It’s like, how do I find a book in a library? Okay, if you’re going to copy the book, yeah, fine. You’re not making money off that. No one’s losing money there. But here, you’re using the same logic to potentially take away the value of this book entirely. It couldn’t be more opposite. And yet, it seemed to be a key reference in that case.

You said they made some errors. When you looked at the cases that they were referencing, did you have any other head scratchers? I was trying to get a sense of like, what are they building this fair use protection on? Because that obviously would be really interesting. What are the other cases out there that they can point to?

KEITH KUPFERSCHMID: Yeah, I think where both judges dropped the ball significantly is the Warhol case, the decision the Supreme Court handed down a couple of years ago. In that decision, the Court basically held that transformative use is just a sub-factor of the first fair use factor. It doesn’t control the whole analysis.

And prior to the Warhol case, if something was transformative, courts would hold it was fair use ninety-nine percent of the time. And the courts fell into that trap here, that if it’s transformative, all the other dominoes must fall, and it must be a fair use. And that’s one thing, in terms of key court cases that the courts either missed or didn’t talk about correctly: they absolutely did not give enough weight to the Warhol case. The days when transformative use controlled the whole fair use analysis are over, according to the Supreme Court. So, if this ever goes to the Supreme Court, I would imagine they would put their thumb on the scale here on that.

But the other case, as you point out, is the Google Books case. I don’t know how much that was actually talked about in both decisions, but AI companies love to fall back on that case. But that case is so much different than what you have going on with AI training. First of all, I’ll point out that this was a Second Circuit case, the Google Books case. And the court comes out of the gate in that case saying this tests the boundaries of fair use. In other words, this is the extreme limit of what we’re going to allow in fair use. That’s what they said in the Google Books case.

But then if you look at the Google Books case, it’s very different because Google didn’t copy books to make new books. That’s what AI is doing. Google used the information in the works. They didn’t use the expressive content in the works. That’s what AI does.

Google implemented numerous safeguards to make sure that the books couldn’t be pirated and that somebody couldn’t use all the snippets to add up to the whole book. There are no such safeguards used by many AI companies. Some are starting to use them now, but a lot of them don’t.

And perhaps most importantly, in the Google case, there was no licensing market for Google’s use. But in these AI cases, there’s absolutely a vibrant and emerging license market for AI training materials, and that market would be destroyed if this is all held to be fair use.

And the last distinction, which is also a big one, is that Google copied authorized books. These are real books. They didn’t go on the internet and scrape pirated books or scrape other pirated material, like AI companies are doing. There’s so many differences between the Google Books case and the cases we have here. It really isn’t very applicable.

And, if you look at the Warhol case, you question whether a lot of the decisions that were made, let’s say, in the last 20 years or so, whether they’re even still good law or not, as the Warhol case has basically overruled those.

TEDDY DOWNEY: I’ve had the misfortune of having to go through a couple copyright cases myself. And one of the things that comes up is sort of knowing that what you’re doing is wrong. And there’s a lot of evidence that these executives sort of knew that, hey, if we license this material, that weakens our fair use claims.

To me, that just seems like it should be kind of outcome determinative. Like you just know what you’re doing is illegal, effectively. You’ve got documents saying it. That should have made it a little bit easier for these judges. And like, maybe we will see that come up when we get to damages at a jury trial or what have you. But what was your take on those kinds of documents? Am I jumping the gun in thinking that those should have come up? Or what was your take on that behind-the-scenes dialogue not really carrying the day here?

KEITH KUPFERSCHMID: Yeah. So, you’re preaching to the choir, right? I think that where an AI company, or anyone for that matter, is going to an illicit source to use pirated works, that should be an all-out disqualifier, period, end of story. You should not be able to take advantage of the fair use principles.

Now, there are a few cases out there that do consider whether the defendant acted in bad faith. But if you look at those cases, they are so much different than the cases we are dealing with here. You’re talking about cases like Sega v. Accolade, and other cases where they’re using the work for interoperability purposes. They want to make a different type of program that will operate and work with something else and reverse engineering it.

But what they did in all these cases that was “bad faith” is they got a purloined copy. They got a copy in a dishonest, underhanded way. It was only one copy. It wasn’t millions of copies. Also, while they knew what they were doing, they did first try to get the copy in legitimate ways and couldn’t do that. That’s very, very different than what we have going on here, where Anthropic and Meta both knew absolutely what they were doing. They did it intentionally. They knew it would harm the copyright owners. And yet, they did it anyway. And this went all the way up to management, as you see in the evidence.

And we’re not just talking about one work. We’re not just talking about dishonesty. We are talking about very intentional, willful activity done in a way with disregard and disrespect to copyright owners’ rights in a way that they knew would harm the copyright owner. That is a very different scenario than where you have someone who is merely pilfering one copy.

And so, this is a very different scenario. Judge Chhabria, in the Kadrey case, doesn’t get into that. He just said, okay, there is some element of bad faith here, but it’s unclear what role that plays. And then he just moved on. But Judge Alsup doesn’t let them get away with it in the Anthropic case. He basically calls them on the carpet and said, no, you can’t copy pirated works like this. This is beyond the pale, and that’s why they’re going to be liable for a lot of money.

TEDDY DOWNEY: You mentioned the sort of right-to-repair kind of aspect or interoperability. I’ve always expected that the AI companies would lean into intermediate copying, that sort of gray area, as a way to defend what they’re doing here. Obviously, as you point out, I think those situations are very different. But that seemed like at least a gray enough area that they would try to exploit. That didn’t work, but it also didn’t seem central to what they were arguing in these cases that sort of went more favorably for them.

What did you make of the intermediate copying not really being viable here? And then I’ve got one more question on these good cases for AI, and then we’ll switch to the bad ones. Or the other way around. Bad for copyright holders, and we’ll get into the good ones.

KEITH KUPFERSCHMID: Okay. So, in terms of the intermediate copying, you’re right. It’s a little surprising that there wasn’t a big focus on the two cases that we’re talking about here. And maybe that’s because the AI companies realized they would lose on that. You hear a lot from AI companies arguing that the purpose of their copying is for AI training purposes.

That is not the correct analysis. What the courts will look at is the ultimate purpose you are using the works for. The courts don’t just look at the intermediate purpose. So, yes, you’re allowed to make an intermediate copy if it’s a fair use, okay? If the court agrees that it’s a fair use. But you have to look at the ultimate purpose.

So, let’s take a look at that for a second, and let’s go beyond these two cases too. If you’ve got an LLM, where you’re copying text to produce other text, that output could very well compete with that text, okay?

Let’s take an easier case and take music. If you’re copying music, you could be copying that music for the purposes of what? To make other music. That is going to be directly competitive, right? And exactly the same purpose, right?

Now granted, in the LLM situation, let’s say you have an AI model or an AI system, and all it does is create thank you notes. So, if you’re copying all of Stephen King’s books, let’s say, just to make thank you notes, okay, there’s a very good argument that it’s transformative. But if you’re copying all of Stephen King’s books to make a Stephen King-like book, that’s a very different situation here.

So, the courts need to look at the ultimate purpose here. And they did to some extent, but maybe perhaps not enough. But the intermediate copying issue really is not a primary focus of either of these two judges in these cases.

TEDDY DOWNEY: And we’ve discussed a bunch of things that these judges got wrong. How vulnerable do you think these cases are on appeal? Just given what you said about the Supreme Court, the Supreme Court historically has been very pro-copyright for the most part. So, how do you think these play out on appeal, if they get appealed? Or where do you think they’re weak on appeal?

KEITH KUPFERSCHMID: So, let’s talk about the 50 cases pending throughout the country. Although, most of them are in New York and California. In each of these cases, fair use is going to be the big issue. Whether it’s fair use to use a copyrighted work for AI training, that is going to be the big issue. There are other claims in these cases, but that’s going to be the big issue. Fair use is decided on a case-by-case basis. We’ve got three decisions so far. I didn’t really talk about the Thomson Reuters and Ross case, which we can talk about as well.

Now, you have two cases. Both, as I said before, came to very, very different decisions, very different opinions. They’re both going to go eventually to the Ninth Circuit, and the Ninth Circuit’s going to have to make some sense of this, since they conflict with one another. I think it’s highly unlikely that the court just disagrees with one of the courts entirely and agrees with the other court entirely. I think both cases probably get overturned.

And so, I’m not sure how much value we should put in these cases. As I’ve been telling people who ask me, this is a marathon, not a sprint. And so we’ve got two, three decisions so far. But let’s see where the other cases come down, especially when they start coming down in the Second Circuit, which I think will result in more favorable decisions to copyright owners, but we’ll have to see.

I mentioned the Thomson Reuters v. Ross case. That’s a case that came out of Delaware and is now pending in the Third Circuit. And in that case, unlike the two other cases, where they ultimately did say it was fair use, the Ross court held that it is not fair use. Now, that case is probably going to be the first case heard on appeal, because it’s already in front of the Third Circuit, where it’s going to be decided.

And going back to the idea that fair use is on a case-by-case basis, AI is going to impact music, images, text, all differently, because there are different AI models. I mentioned before, you’ve got the LLM for text. You’ve got a music model for music. You’ve got the diffusion model for images. You have different business models for each of these, right? And the law applies differently.

So, for the LLM, you need to copy a lot of text. But for music, you don’t need to copy anything. There are AI systems out there now that don’t copy anything. Because there are a limited number of notes, a limited number of progressions, a limited number of chords in music, they don’t need to. And they can create their own AI to create their own music. That’s got to work against the music AI companies that are copying copyrighted music without permission.

So, what I’m saying is that we are going to get a lot of different decisions here. It doesn’t mean the law is not working. As a matter of fact, it means that the law is working.

TEDDY DOWNEY: And let’s talk about the Thomson Reuters case, because I thought that one was really interesting. And it also gets to a question of, to me, the two things that I think have been sort of the most rigorous and exhaustive look at how the technology and the law works were the Copyright Office report and the Thomson Reuters decision, and also a lot of the documents in the briefs for summary judgment there.

I’m curious to get your sense, both, is there a gap in the rigor of all these opinions, the Thomson Reuters versus the two in California? And also, maybe you can just go through and just give us a sense of what happened in the Thomson Reuters case. I think, to me, that one I just learned a lot reading that and covering that. And so, maybe I’m a little bit biased there, but would love to get your take on the meat of it and how you thought of the sort of rigor of the decision.

KEITH KUPFERSCHMID: Regarding the Thomson Reuters v. Ross case, I have to begin by saying it is different than the other cases. It does involve AI, but it doesn’t involve generative AI. So, that was a big difference.

The other big difference is that Thomson Reuters and Ross (although Ross is out of business now) were competitors at one point. And so, as I said before, these cases are decided on a case-by-case basis. And so, they’re really, really fact dependent.

And so, here you’ve got two competitors, one competitor in essence stealing from another competitor, and doing it in a way that creates competition that could supplant Thomson Reuters’ market.

I haven’t talked much about the second or third fair use factors because they’re not as important, but under the first factor, the court said, this is not a transformative use. You’re basically taking something Thomson Reuters was already doing and turning it around and basically doing the same thing but doing it for yourself. And so, it was not a transformative use. That is different than the two California cases where they said it was a transformative use.

And then the fourth factor, in terms of the market, they said, look, Ross, you’re going after the same market that Thomson Reuters is going after because you’re both competitors. So, the case factually is very different than not only the California cases, but a lot of other cases that are going to be decided by the courts in the coming months and years.

And so, it’s helpful to see because it was the very first case to apply the fair use doctrine to AI training. And that was very helpful. It was very helpful in a few different areas. For instance, we used to hear AI companies argue that they’re not copying the expression. They no longer make this argument because they know that’s a loser. And in the Thomson Reuters v. Ross case, the court made it clear that Ross was absolutely copying copyrighted expression.

TEDDY DOWNEY: The thing that stood out to me about Thomson Reuters was that they do really treat the budding licensing market as very informative. What was your take on where the opinions landed on the licensing market? Because to me, I get what you’re saying, all the facts are different. But the core steps of the copying, and the different stages of what happens with the copyrighted work, were still there. So, I would be interested to get your take on the desire to protect a healthy licensing market.

KEITH KUPFERSCHMID: You make an excellent point. The facts are very different. So, you can’t necessarily rely on the outcome of the case, but you can look at how the court analyzed it.

And your point is a good one. In the Thomson Reuters v. Ross case, you have a court looking at the licensing market and saying, look, there is a licensing market here.

In the Kadrey case and the Bartz case, you really don’t have that. Both judges said there may be some licensing going on, but you are not entitled to that licensing market, which doesn’t make a lot of sense. They said if it’s going to be a transformative use, you’re not entitled to license that market. So, I think that’s also something that’s going to be looked at by the appellate courts, certainly, both in the Third Circuit and the Ninth Circuit.

But I think in terms of the Thomson Reuters v. Ross case, if you push to the side the factual differences, and just look at the legal analysis, I think it’s very instructive and very helpful in terms of how a court should look at these factors and how they should apply each of the four fair use factors to AI training. I think both the Kadrey case, not to 100 percent, but to a large extent, and also the Ross case, give a roadmap for how these fair use factors need to be analyzed.

And I should mention too, the best source for all of this is the U.S. Copyright Office, which came out with a report in the first week of May, talking about fair use and how it would apply to AI training. And granted, there was no case before them. So, there were no facts to apply. But they did talk about how it might apply in certain types of instances. That report is probably the best of any of these. If anyone is just starting out and wants to understand this area better, I would direct them to that report that the Copyright Office came out with in that first week of May. It’s very, very well done, very thoughtful.

I’m not saying I agree with everything that’s in the report. I know AI companies don’t agree with everything that’s in the report, but that’s what’s going to happen. And that might be an indication that they got it right and that it’s balanced because everyone is a little bit equally upset with different aspects of the report. But they did a very thoughtful analysis. They had something like 10,000 comments to go through. So, I would direct people to that if they really want to learn more about how this applies.

TEDDY DOWNEY: You may have some quibbles with it, but if courts end up looking to that for how they make their decisions on appeals from the lower courts, I mean, that would be profoundly bad for the Gen-AI companies, at least in terms of how they operate their business now. It would shape how they want to do it. They get no get-out-of-jail-free fair use exception. Am I misinterpreting? Did I read it wrong? Or I’m guessing your concerns were more on the margins, because the meat of that report, I think, would require profound change from the Gen-AI companies.

KEITH KUPFERSCHMID: So, I do. I’m not sure I would classify my concerns as being on the margins, because the Copyright Office does say that they think that, in most cases, it’s going to be considered to be a transformative use. And I’m not as agreeable on that particular point. I don’t think that’s an issue around the margins either.

But in terms of, look, what AI companies want the courts to say is that there is a categorical exception for AI training on copyrighted works. No court is going to say that because that’s not what the law is. There are no categorical exceptions in fair use.

So, the Copyright Office did not say that AI companies get a free pass and are categorically exempted, and anything short of that was going to upset them. What the Copyright Office said is, you have to look at the different facts. You have to look at the different types of works, how those works are being used, what is the nature of the use? Is it commercial? Is it not commercial? Are they using pirated works? Are they not using pirated works? How much of the works are being used? What area? What’s the different model?

And that’s what a court should be doing, looking at all those different facts. And the Office said in some instances, it will in fact be fair use. And in other instances, it won’t. And like I said, I think AI companies are not happy with that because they want it to be fair use in every single instance, and anything short of that is going to upset them. And that’s just not how fair use works.

TEDDY DOWNEY: I would say, when I read the copyright report, it was very specific. If you sort of look at how the AI companies have operated and have done their training and spitting out the content, there was a lot to be concerned about if you’re a Gen AI company with how they’ve behaved to date. Is that fair to say?

Because I didn’t read much into them saying, well, some of this might end up being fair use. When you read what they were actually talking about, a lot of what was going on in the industry today would violate what they were laying out despite maybe what they were saying might end up being seen as fair use. But I’m open to misreading it and sort of reading it with an eye towards, wow, this sounds like a lot of what’s going on is a problem when that wasn’t what they were saying.

KEITH KUPFERSCHMID: No. So, if you look at the roughly 50 cases that have been filed, that are pending, in most of those cases, and you are right, at least based on my reading of the facts, it’s probably not going to be and shouldn’t be a fair use. But there are other instances, ones that haven’t turned into lawsuits, maybe because people think, okay, this probably is a fair use. For instance, where you do research or analysis, where you’re not using something for which there’s already a licensing market, and where there’s no output that could be infringing.

Now, to be clear, there doesn’t need to be an output that is infringing for the input, for copying the input, to be an infringement. You look at the two separate infringements. There’s the copying of the copyrighted work to use for training purposes. That’s the input infringement. And then there’s the possibility that it might produce something where the output is infringing. And you look at the Disney and NBCUniversal case, that focuses on the output, the fact that they’re producing Minions and other characters, which are clearly infringing copyright. But those are two separate analyses and therefore two separate infringements. So, you don’t have to look at the output necessarily.

TEDDY DOWNEY: And we’ve got a listener question here I want to get to in a minute. But one thing I want to get your sense of is also just the judges with respect to Big Tech and AI. Overall, I’ve gotten the sense in the past few years that judges generally are much more skeptical of Big Tech business models. We’ve seen this in the persistent erosion of Section 230, as well as in courts moving away from, or narrowing the impact of, copyright decisions that were very favorable to Google initially, especially when it comes to companies saying, oh, this is just like how Google indexes or Google Books.

What do you think is the court’s attitude currently toward Big Tech? And do you agree or disagree that it’s gotten substantially less favorable and deferential in the past few years versus when Google and Facebook and Amazon first got off the ground?

KEITH KUPFERSCHMID: First of all, I’m not going to say anything about Section 230. We’re not involved in Section 230. I don’t really track that. So, just to put that out there.

In terms of the court decisions themselves, I mean, we always say copyright is sort of like a pendulum. You’ll get some good case, then they’ll swing back the other way in a case that’s not so good, and back and forth. I think in the early days of the Internet with new tech companies that were coming, there were a lot of cases that really were more favorable to Big Tech. And, as you point out, I think the pendulum has now swung a little bit and is kind of teetering back and forth. Some cases go Big Tech’s way, and some don’t.

But I want to be careful not to include all these Big Tech companies in what we’re talking about here, which is AI. Because what I’ll call big AI, if you will, or the tech bros, big AI tech bros, or whatever we want to call them, they’re making arguments that we never heard from tech companies before, and, frankly, they just don’t make a lot of sense.

Like they’re saying we need an exception here in order to beat China. That the only way we can beat China in the AI race is if we get an all-out exception for copyright, because China’s not licensing copyrighted work. So, we shouldn’t have to, right? But that would be a gargantuan mistake.

Because to let China and others take valuable American intellectual property for free would frankly be a gift to China because, as we all know, China leads the world in mass stealing of creative works. The only way to win the race for AI innovation is to stick to our core principles, which means enforcing strong IP against China. If the U.S. sets the global standard by requiring respect and adherence to copyright laws through free market licensing, then that will give U.S. AI companies an inherent advantage over China. That’s what we need to be doing. Because on a level playing field, no other country can match America’s second-to-none workers and artists and innovators. And America will dominate in AI just like we dominate in copyright now.

And then the other issue, just real quickly, is national security. You hear these AI companies saying, oh, it’s an important national security issue. But that is such a misguided argument, I have to tell you. Because global dominance of American culture and arts is a key tool for spreading American values and advancing democracy and freedom.

So, if we really want to dominate the world in terms of American values, what better way to do it than through our popular American books, movies, music, art. Those all provide this vehicle for protecting American values. If, all of a sudden, we say, no, those aren’t protected, then we leave ourselves open and vulnerable to Chinese propaganda, and we do not want to do that. And honestly, instead of AI companies saying, wait a minute, we need free copyrighted works, they should take a look inside, clean their own house first. Because there have been reports that say that China has been covertly using ChatGPT and other tools to spread propaganda, manipulate social media engagement, target journalists and politicians, in a coordinated AI-powered influence campaign.

And so, these are not things we hear from just general Big Tech companies. Instead, we’re hearing from AI companies because, frankly, I think they’re a little desperate, and they’re trying to find some loophole here to get policymakers to agree with them.

TEDDY DOWNEY: And I think this is a good time to get your sense of what happens if the courts end up going the wrong way here. If judges get copyright wrong when it comes to how AI harms artists, what’s going to be the impact for artists in the economy, right?

Things are already not really great in many respects. You’ve got YouTube undermining video and music markets. You’ve got Spotify. You’ve got Live Nation. I mean, we could just go through industry-by-industry. Creative workers are under attack in terms of having multiple ways to benefit from their work. Could that get even worse? That vibrant, creative economy that we still have, despite all that, isn’t that what the LLMs need to be good in the first place? Do we risk losing that? You’ve mentioned that in your previous answer, but we want to get more into what really could happen to the creative economy if we don’t protect the artists here?

KEITH KUPFERSCHMID: That’s a great question, a very important question. And yes, things could get a lot worse, in two ways. One, if all of a sudden AI companies get a free pass, and they do not have to license these works, that’s money being taken out of the pockets of the creative community. Okay, that’s number one.

Number two is they’re going to be producing works that are going to drown out human-created works, right? And so, you hear about the struggling artist all the time. It’s going to be beyond struggling. We’re going to have the starving artists. I mean, you hear about that and it’s going to be much worse. It’s going to be so, so difficult for artists and authors and filmmakers and all sorts of different creators to make a living and to have a career when all of a sudden they have machines that can churn out millions of different types of works that can compete with them in the market using their works to do it. Things are going to get very, very ugly.

And right now, we have an industry that is hugely successful. The core copyright industries add over $2 trillion of value to the GDP, and that accounts for about eight percent of the U.S. economy. Employment is about ten percent of the total workforce. That is going to drop dramatically if that happens. And that’s not good for the economy and certainly not good for copyright owners.

There is a win-win situation here. Policymakers, lawmakers don’t have to pick a winner and a loser. They don’t have to say, okay, AI companies, we want you to win so, therefore the copyright owners have to lose. And I think if they try to say that, they’re going to end up in a lose-lose situation. Because what’s going to happen is you’re going to have these AI companies that need these copyrighted works, they need to train on those copyrighted works, and they’re not going to be available. Because they’re going to be killing the goose that laid the golden egg.

The best solution, the only solution, is licensing in a free market. Licensing, licensing, licensing. So, copyright needs to continue to be respected, and hopefully the courts will come to the right decision. If they don’t, it’s going to be Armageddon, I think.

TEDDY DOWNEY: Yeah, that’s something that comes up a lot. I mean, we had a recent case where the judge was just emphatic against the defendant saying you don’t respect copyright law. I do think that it resonates pretty well with judges right now.

We’ve got a question from the audience. Could you clarify your interpretation of the Bartz v. Anthropic decision when it comes to use of unauthorized materials to train Gen AI models? Some passages suggested it was fair use to use any copyrighted material, whether legitimately acquired or pirated, for Gen AI training. The judge did, however, explain that it is not fair use to obtain pirated works for purposes of building a permanent library to be used for training. Can you comment on that distinction?

KEITH KUPFERSCHMID: Yes, sure. So, I didn’t at least initially want to get into this level of detail, but we’ve got a great question here. So, it makes sense to do that. So, in the Bartz case, there were three different types of scenarios.

Scenario number one is where they were using works that they had licensed or had legitimate access to. And so, in that case, the court said, that’s fine. That’s a fair use. I’m not saying I agree with that, but that was the conclusion the court reached.

The second instance, and this does not get talked about a lot, is that they bought a whole bunch of books, ripped off the bindings, and just copied the pages of those books. The court said that, since they had access and bought those books legitimately, that they had access to those books, that that was also considered to be a fair use.

Now I’ll stop here for a second to say if that case was decided in the Second Circuit rather than the Ninth Circuit, that case gets decided a lot differently. Because the Second Circuit has taken up cases very, very similar to that and said it’s not fair use to rip off the bindings and make copies of books.

And then you have the third instance, which is where they were copying pirated works from these so-called shadow libraries. I hate using that term. I managed to make it almost an hour without using it, but these are not really libraries. These were shadow libraries, and they copied pirated works. And that’s where the court said no, no, no. That is not a fair use.

So, there were three different factual scenarios, three different types of uses that the court had to consider, and it considered each differently. Like I said, I focused mostly on the pirated version and a little bit on the first one. I didn’t talk about the book bindings being ripped off. That adds a very interesting element here that doesn’t exist in most other AI cases.

TEDDY DOWNEY: Yeah, and I think that’s super interesting, just making the distinction between different circuits. And maybe that’s a good way to sort of have a last question here. You’ve got 50 cases. How do you see this all playing out in the end?

And you mentioned earlier that in some respects the music AI companies might have a little bit of a better case. Now, my understanding is they are still copying a lot of music. So, you’re saying maybe theoretically they’re going to have an easier time because they don’t need to copy. But some of them seem to be doing that. So, you could say, hey, I’m going to have this music sound just like Taylor Swift or what have you. What do you think the outcome is? You look ahead at 50 cases making their way through the courts. How do you think all this ends up?

KEITH KUPFERSCHMID: Yeah, I think we’ve got two cases or three cases right off the bat. I think those cases are fairly unusual and frankly, not the greatest cases necessarily for copyright owners. The Kadrey v. Meta case is interesting because it was such a significant level of piracy. And part of the piracy that Meta was involved in was not addressed by the court and will get addressed in the future. So, there’s still a chance that Meta could be liable in that case.

But if you look at the music cases, look at the newspaper cases. Those are really, really strong cases for copyright owners, I believe at least. I mentioned in the music area why that’s the case. In the newspaper area, there’s something called Retrieval Augmented Generation, or RAG. And that’s, in essence, I’ll simplify this if I can, where you’re doing the same thing as AI training, but before you put the answer out to the user, you’re looking at a very trusted source. And a lot of times that trusted source is a much better source, and that ends up getting copied. Maybe not entirely, but big chunks of it get copied.

So, when you’re talking about retrieval augmented generation — and the Copyright Office Report talks about this as well — I think that really makes for a very strong case for the newspaper publishers. And so, you’ve got those cases. But we haven’t heard from the Second Circuit, any courts in the Second Circuit of New York, yet. We’ve only heard from Third Circuit and also from California.

So, things are going to be all over the map. I wrote a blog about this recently, about chaos in California. I think it’s going to be chaos in America. I think there’s going to be cases all over the place. And I think AI companies are frankly not going to know what to do. When, frankly, there’s an easy thing to do, which is to license the works. And there are new licensing options being created by Collecting Rights Societies and other organizations every day. So, anyway, I think we are far from done here. We’re at the very first pages of the first chapter of this book.

TEDDY DOWNEY: In terms of licensing—I just want to say that before we go. I know we’re over time. But the safe thing for the AI companies is to license. That would be your safest path to not building up millions, potentially billions, of dollars in liability.

And the AI companies say, oh, we can’t do it with licensing. We can’t do it with licensing. That makes no sense to me. I mean, the almost limitless ability to raise capital seems like you could pretty easily license the content. Is that just the path of least resistance? If you want to be a successful gen AI company without a ton of legal overhang, you just license the content.

KEITH KUPFERSCHMID: Yeah. So, this might be a good way to answer that. I’m going to steal Judge Chhabria’s words here and just read a quote from him. Because Meta made that argument. They said that ruling against them, if they had to license everything, would “stop the technology in its tracks.”

And Judge Chhabria, in the Kadrey v. Meta case, called it “ridiculous.” He said that AI products are expected to generate $460 billion to $1.4 trillion over the next 10 years for Meta alone. And then he said, quote, “if using copyrighted works to train the model is as necessary as the company says, they will figure out a way to compensate copyright holders for it.” And I think that’s exactly the truth. So, I think I’ll just stop there.

TEDDY DOWNEY: Fabulous, fabulous. Well, Keith, pleasure to meet you. I had an absolutely great time getting into the weeds of all this with you. I learned a lot. I hope our listeners learned a lot. And I can’t thank you enough for doing this.

KEITH KUPFERSCHMID: Well, thank you for having me. I enjoyed it as well. Nice to meet you.

TEDDY DOWNEY: And thank you to everyone for joining the call. If you found the conversation helpful, be sure to follow Capitol Forum on LinkedIn, X, or Blue Sky for breaking news, policy updates, and info on upcoming events. We’d love to hear from you.

Drop us a line at editorial@thecapitolforum.com. Thank you so much. And thanks again, Keith.

KEITH KUPFERSCHMID: Great. Thank you.

TEDDY DOWNEY: Bye, everyone. This concludes the call.