Jun 25, 2025
On June 18, The Capitol Forum held a conference call on “Promoting AI Innovation Through Competition” with Jack Corrigan, Senior Research Analyst at the Center for Security and Emerging Technology. The full transcript, which has been modified slightly for accuracy, can be found below.
TEDDY DOWNEY: Good morning, everyone. And welcome to our Conference Call on “Promoting AI Innovation Through Competition”. I’m Teddy Downey, Executive Editor here at The Capitol Forum.
And today’s guest is Jack Corrigan, a Senior Research Analyst at the Center for Security and Emerging Technology. He’ll be talking with us about his recent paper, “Promoting AI Innovation Through Competition: A Guide to Managing Market Power”. It’s really an excellent paper. You rarely see, especially in these nascent markets, a really thorough, holistic analysis of market power. I can’t recommend the paper enough. Please download it, read it. It’s excellent. Hopefully you’ve already read it so that you can engage in the conversation. But Jack, thank you so much for doing this.
JACK CORRIGAN: It’s a pleasure to be here, Teddy. Thanks for having me.
TEDDY DOWNEY: And I’m going to start off with some questions. We’ll get to your questions. If you have any, please put them in the questions pane in the control panel or email them to editorial@thecapitolforum.com.
And Jack, normally I ask like, oh, well, why did you write this paper? I mean, I think it’s kind of obvious why you wrote the paper. But I would love to get into the methodology. Like, how did you do the research? How did you come up with a plan to really understand each layer of the market here?
JACK CORRIGAN: So I’ll zoom out a bit and talk a little bit about the origin of the paper because it’s kind of related.
I’d been interested in looking at competition and innovation in the AI space really since I started at CSET in 2021. At the time, this was pre-ChatGPT, pre-the whole generative AI boom. AI was still this niche technology. There were a lot of people who were pretty engaged in the space, but the community was relatively small. And the tech itself showed a lot of promise, but in these kinds of discrete application areas.
And at the same time — because the field was relatively small and the commercial industry hadn’t taken off in the way that it has recently — you saw a lot of players in this space in academia and industry. The Big Tech companies were involved, but you had all of these other small shops that were also working in the space.
And then in late 2022, ChatGPT came out and generative AI became part of the zeitgeist. You really saw it transform from this niche technology with these discrete applications to the conversation really focusing on AI being this all-powerful system that’s going to completely transform the way that we live our lives, the way we organize the economy, the way we run our politics. You might remember some of the apocalyptic language around that time. It just became this much, much bigger thing. And you saw all this money pouring into it.
For me, the first glance that I got at the way that the industry was going to be shaped was this partnership between OpenAI and Microsoft. So, here you have — this was months after ChatGPT had come out — the breakout star of the whole generative AI boom thought it was in their best interest to latch themselves to one of these incumbent tech companies, give them a huge stake, give them access to their tech, that kind of thing. And to me that really raised some red flags. If there was any company that was able to make it out on their own, it should be this company, this lab that built an incredible tool that tens of millions of people were using. But they apparently didn’t see a path forward.
Around that time, you had a lot of people focusing on the capabilities and the risks of AI systems, but not a lot of attention being directed towards market power. So I started poking around and doing my own research. As you saw more of these partnerships form, as you saw the market structure begin to crystallize, you started to see how a lot of the inputs to AI development were either directly or indirectly controlled by some of these Big Tech companies, and how the AI development process was structured in a way that gave these companies a big advantage.
And so I got interested in that. Within the last two years, I saw some great work coming out looking at the AI tech stack and where a lot of the power was concentrated. But discussion was largely happening within academia. At CSET, we’re a think tank that straddles the line between the national security community and the tech policy community. I didn’t see a ton of that discourse making it through to the audiences that we reach.
So, with this report, my goal was basically to take a lot of the great work that had been done and frame it in ways that would resonate with a national security audience, which is deeply interested in AI, has a lot of money to spend on AI, has a lot of interest in where the tech is headed, but doesn’t have a super deep expertise in the competition issues at stake here.
So methodology-wise, I was honestly standing on the shoulders of giants who did a lot of this work ahead of me. But what I was trying to add with this report was a reframing of some of these competition issues in a way that connects to the things that national security leaders are concerned with, which include innovation, resiliency, and democratic governance. And through the work that we had done, the top-line finding of the report is that, if left unchecked, this industry will continue to consolidate around these Big Tech companies. And that’s going to make it less innovative, less resilient in the face of disruptions and technological surprises, and less responsive to democratic governance efforts.
TEDDY DOWNEY: At the beginning of the piece, you sort of lay out like, look, here are the stakes. Here’s why competition is useful from a national security standpoint. I think it’s worth just touching on that briefly before we get into the meat. Because obviously, it was important enough that you led off the piece with that. But also, I think it’s a useful refresher. You did your homework, you did your history when you wrote that. And so, it’s useful to kind of go over that for the audience.
JACK CORRIGAN: For sure. So, the things that I focused on were competition impacts on innovation, competition impacts on resiliency, and then competition impacts on governance. And I think in the report, we focused pretty heavily on the innovation side of it. Because there’s a lot of great literature there.
Everyone in a market operates by different incentives. When you look at really concentrated markets where you have these big incumbent firms that we’re kind of seeing emerging in AI, they operate by very different incentives than startups. They are going to be investing in more incremental developments on their existing business lines, things that might not push the envelope as much as you might see from a startup that’s trying to break into the space and disrupt the industry.
When you look at AI, this is still this relatively new tech. And if you look at the way that even in the last ten years, the technology has progressed, you’ve seen this movement through different AI paradigms, where you have some breakthrough that enables a lot of innovation up to a point, and then you have another breakthrough and it keeps moving in this staggered direction. If you rely too heavily on incumbent firms, like you’re not going to see that kind of steady stream of breakthroughs in the way that we would want to see from a national security perspective to kind of keep the U.S. at the technological edge.
I’d been working on this paper before all the DeepSeek stuff happened, but for folks who followed that story, it really drove home this idea that there are certain things that incumbents might not want to do or might under-invest in because it’s not necessarily as beneficial for them, even though it might benefit society. With DeepSeek, what you had was this Chinese AI developer that came up with this much cheaper way of building high-performing models, and then they opened up their model in a way that a lot of these kind of U.S. tech companies had been reluctant to do. And if you had a more competitive ecosystem in the U.S. where that sort of disruption was being incentivized, you might have seen this sort of breakthrough happen here rather than across the Pacific.
There are lots of similar stories throughout history. The Bell Labs example comes up very often. Here was this great lab at AT&T that was doing all kinds of innovative stuff—lots of the technologies that we rely on very heavily today, including semiconductors, got their start at Bell Labs. But we did not see AT&T commercialize those tools. They conducted the basic research, but it wasn’t until the DOJ stepped in and forced AT&T to open up that IP that we saw a lot of this kind of downstream innovation happening. That story really resonated with me, and I think that it resonates with a lot of national security folks. I really wanted to drive that home and talk about the importance of having actors in a space that are trying to disrupt existing ways of doing things.
And then just to briefly touch on the resiliency and the democratic governance issues. I think that when you think about resiliency, you want to have a pretty diversified array of actors participating in the space, because if something goes wrong, the impacts won’t ripple out as much as they would if you were relying on just a couple really big players. If you think back to the CrowdStrike outage from last year, you saw how one flaw in a specific system can ripple out and take airports and all these other industries offline. I think you could see a similar thing if we rely too heavily on a particular AI tool—if something goes wrong there, the ripple effects could be huge.
And then if you look at governance, when you have a more diversified, contestable, competitive ecosystem, the companies are less able to block, influence, or shape governance efforts in their favor. There would be more alternatives: players who would be willing to comply with these new regulations or governance efforts that are being put forth. There are many different angles on AI and competition that are worth exploring beyond innovation, resiliency, and governance. But I think that within the national security community, those issues really resonate.
TEDDY DOWNEY: I think it’s just so interesting to hear you talk about your rigorous approach and doing all this research. It just cuts through so much of the inane propaganda that you hear a lot from the sort of establishment Washington community that, oh, nothing can happen to Big Tech — no rules, no enforcement — because that will weaken the U.S. internationally, and the national security risk is that we will somehow lose out to China. But if you do even a modicum of research — and obviously, you did a lot of research here — you can poke a lot of holes in that thesis.
Do you still hear — I mean, really, we don’t have to stay on this. Do you still hear that? I mean, I get it. I mean, I’m in this world. I hear it a lot. But is that still kind of the main talking point that you hear from Big Tech at this point? Or has the conversation just fundamentally evolved past that?
JACK CORRIGAN: Unfortunately, I wouldn’t say the conversation’s totally evolved past that. I think that among many of the folks that I’ve interacted with, a lot of credence is given to this idea that we need these massive companies to compete against China. But I think that the dynamic of this conversation has shifted recently amid this broader discussion of AGI.
There’s this idea that the way that AI progress is going to happen is by continuing to build bigger and bigger and bigger systems. This idea that scale is going to be the way towards artificial general intelligence has become very prevalent within the national security and broader policy community. And when you frame things through that lens, it takes what would seem to be a pretty self-serving argument from Big Tech—“oh, don’t touch us because we’re your national champions”—and reframes it around technical justifications. They’re saying, “actually, there’s a technological benefit to having this massive concentration of financial assets, technical assets, because that’s the way that progress is going to happen.”
Of course, I think we can pick that argument apart, but I think that this framing has become the focal point of this argument for leaving Big Tech alone.
TEDDY DOWNEY: Well, as you mentioned, it seems like DeepSeek pretty solidly put a hole in that point. I mean, we don’t really even need to do any history. You already basically proved that’s just flat out not true. There’s value in these smaller players making big improvements and being much more efficient.
But I think we can kind of pick this apart as we get through the paper. I want to get into the three layers. But actually, first, you mentioned something that I thought was interesting. You said ChatGPT, like the lifeline was Microsoft coming in and effectively buying them. I mean, I don’t really understand the difference between taking 75 percent of your profits, 50 percent of your profits, in perpetuity or whatever. What is the difference between buying someone and that? I intellectually cannot, I’m not smart enough to figure that out.
But I guess like a couple quick questions there. One, is Silicon Valley, is VC just completely useless at this point? Like, how could they not have any money? Is there something really broken about the VC community? I have my concerns or skepticism about the long term value of a ChatGPT. Or are they just stealing a lot of copyrighted material? Like really, what is this based on? And what is the real utility of it? And is it making people dumber?
I have a lot of my own personal concerns about it in terms of the value. But obviously, if you’re getting that many users, you should get money, right? Like, you should get some VC money, right? Obviously, it’s worth $10 billion or whatever, fine. You’re getting credits from Microsoft. It’s worth billions of dollars, right?
How is VC not showing up? And I guess the other question is, if the FTC had said, hey, Microsoft, no, you cannot effectively acquire this, do you think it would have gotten VC money? And do you think there is value in it being independent? Or was there an alternative? Like, have we already actually made a big mistake?
You know, have the FTC and DOJ, by allowing all of this vertically integrated Big Tech to come in and partner with, essentially take a huge amount of equity in, these companies, already put us behind the eight ball here?
JACK CORRIGAN: Right. It’s a very interesting question. And I don’t want to dodge it. But I think there is some credence to the argument that an OpenAI almost needed Microsoft to step in to stay in business—their foundation model was developed in a way that requires huge amounts of compute, and that compute is very expensive. I agree that these companies should be able to make money. But there’s still no real clear, tried and true path to monetizing these AI tools. Even today, you have a bunch of companies scrambling to find a business model there. But especially in late 2022, it was pretty unclear. So, I think at the time, it may have made sense both for Microsoft and OpenAI to engage in this partnership. The FTC has rightfully scrutinized that partnership and some of these others and is paying very, very close attention to them.
I don’t want to say that we’re completely behind the eight ball on it yet. But I think where we are now starting to see the pernicious effects of these partnerships is that instead of running into these very high costs, running into these technical barriers, and having that create an incentive to develop cheaper or more efficient ways of doing AI, these partnerships have enabled this relentless pursuit of scale and allowed companies to basically continue developing AI down this path that is becoming enormously costly, both in terms of money and environmental impacts.
And I think that is where, for me at least, some of the broader downsides of these partnerships reveal themselves. We probably would not have been able to continue developing the tech down that path as far as we have if those partnerships were prohibited or otherwise disincentivized.
TEDDY DOWNEY: But I mean, we’ve already got DeepSeek. And if your incentive is to use more compute, I mean, you already have a bad incentive here, which is that a lot of the money is actually in credits for compute.
JACK CORRIGAN: Right.
TEDDY DOWNEY: And so, there’s no actual interest in being more efficient. It’s just like use more compute, get a nuclear facility. It’s not, hey, we need the — if the VC puts in $10 billion — and by the way, they were very patient with Amazon. Amazon lost money for a long, long time. And so, losing money and throwing more money at a growing-user-base technology that — at least everyone else thinks — is transformative. Again, I have my personal sort of skepticism there. But plenty of people are excited about this technology.
And then obviously, the incentives are different. If you have $10 billion of VC with no credits, you want to (a) figure out a business model, and (b) be more efficient with the compute, right? Bring down your costs and bring up your revenues. And also, like you said, the externality here is like insane use of energy.
JACK CORRIGAN: Right.
TEDDY DOWNEY: So, there’s an environmental problem with that inherent incentive to just use more compute. So, I mean, look, anyway, we don’t have to stay on this. Because obviously, we’re not going to solve how broken the VC community is, such that ChatGPT can’t get money on its own.
But I think that’s kind of an interesting question that we can potentially come back to at some point, which is like we’ll see what the FTC ends up coming up with around the Microsoft acquisition. But when you said that, when I read the paper, obviously, you mentioned that. You mentioned the details of the Microsoft acquisition and the incentives of Big Tech. But I hadn’t quite put it together that there was a little shift in how the market would operate at that very moment, at least in the U.S. Is that fair to say? Or am I overstating this?
JACK CORRIGAN: I agree with everything that you’re saying. I think that the real question comes down to had Microsoft not stepped in … would we be okay letting OpenAI, which has achieved this breakthrough, potentially go out of business in order to prevent Microsoft from gaining outsized influence over the company? And I think that’s an open question. I’ll just also note that prior to the big partnership, Microsoft was already a very significant investor in OpenAI. There was already a pre-existing relationship between those two companies.
But I agree. I mean, had OpenAI gone under, you could have had all of its employees go out and start these new companies and find more efficient ways of doing AI where they wouldn’t need to be as beholden to a cloud company as they are now. I agree with that.
TEDDY DOWNEY: It’s just fundamentally broken capital markets in the U.S., if ChatGPT cannot get — I mean, someone needs to — I don’t understand how they can’t attract money, except from a vertically integrated company that can cross-subsidize it.
JACK CORRIGAN: OpenAI is still losing money … and I think that’s the key thing here. Part of the reason that the industry has become so concentrated — and maybe this is a problem with the VC industry — is that there is no clear path to profitability for any of these companies. So, the only ones who are able to take on these massive costs with no clear upside are some of these Big Tech firms.
TEDDY DOWNEY: You get into these layers. Let’s talk about these layers. Because I found this really interesting. Look, we’re already halfway through. We could spend all day talking about all these layers. But I want to get to the main choke points in each layer, and then sort of get your holistic assessment of the problems going forward.
But let’s start with the infrastructure layer. You broke things into three layers: infrastructure layer, model layer, application layer. So let’s talk about where you found the sort of key choke point in the infrastructure layer. And then maybe we can just progress to the model layer and application layer.
JACK CORRIGAN: Sure, that sounds good. Before we dive into each of these layers, I’ll just say the main idea is that the way that AI is being developed gives an enormous advantage to these incumbent firms because of their existing businesses. So, they are able to use their existing collections of assets in ways that give them both a pretty significant competitive advantage in AI development and a lot of control over the market.
So, when you look at the infrastructure layer, if you think about AI systems, one of the big inputs is compute. When you’re building an AI model you need a lot of compute power, particularly when you’re building these really large foundation models. Buying that compute directly is very expensive, so what a lot of developers will do is use cloud computing platforms. And the cloud computing market is highly concentrated. Amazon, Google, and Microsoft collectively control two-thirds of the global cloud market.
Through that control of the cloud market, they themselves are able to build AI systems using compute at marginal cost, and then are able to charge a premium to all of the users that use their platforms. And it really can be a choke point because they basically set the terms upon which all of these other developers can access this really critical input.
So, as you mentioned, with a lot of these partnerships, companies were effectively giving discounted compute access to developers in exchange for some stake in the company. So, it’s basically like, here’s a finite amount of free compute power that you can use to build your AI tool. Obviously, a company that has that arrangement is going to have a clear advantage over a company that needs to pay the market rate for compute.
And then when you think about just the structure of the cloud market, there’s a lot of competition if you’re a developer looking at the outset for a provider. But once you choose a cloud provider, you are locked into that cloud provider, just by virtue of different technical barriers to migration and other artificial barriers that the cloud providers will erect through egress fees and some of the other contractual terms.
TEDDY DOWNEY: Can we stay on that for a second? Because we’ve spent some time looking at those contractual terms. You mentioned the Microsoft OpenAI agreement is exclusive. They’re an exclusive cloud provider, right?
JACK CORRIGAN: At one point I believe it was.
TEDDY DOWNEY: But in terms of the exclusivity or those contractual obligations, and also there are other lock-ins, right? Like you mentioned in the paper — and I’m curious if there’s any contractual language around this also — but typically, you’re going to layer on the developer tools to the — you use the developer tools that you get from your cloud provider.
JACK CORRIGAN: Right.
TEDDY DOWNEY: That’s not like an open market either. And so, there are just all these lock-in effects. Did you think about that as a choke point itself? Like those agreements or the sort of tying and bundling together of all these things when you choose your cloud provider. I just want to talk about that for a second because that seems like there’s a lot going on there with exactly how did these — when you choose your cloud provider, how easy or hard is it to get out if you want to switch?
JACK CORRIGAN: I’ll note that a lot of these contracts are not public, so we’re not super familiar with the different provisions. But I think you’re right to point out how once you are in with a provider, there are all kinds of direct and indirect forces keeping you there. I think the developer tools are a big one. I did some separate research with another colleague of mine looking at the way that these different companies are investing in infrastructure expansions and the creation of these open-source software packages and developer tools. The idea there is that when you build these developer tools and a particular developer gets used to using those tools, and then they have to switch to another cloud provider, they’re going to have to learn a whole new set of skills and a whole new set of tools. And that itself is a barrier to entry.
If you look at a lot of the infrastructure expansions that these companies are making across the globe, they often include workforce development programs. There you have situations where it’s like, “hey, we’ll build you this data center here and then we’ll train up all these developers on how to use our stuff.” And the idea there is that once you train those developers, they become customers for life.
We didn’t really look at this issue in the most recent report as much, but I think the bundling there has a huge impact on the ability of a particular company to migrate from one cloud to another. I think there are also technical barriers to retooling your system for a new cloud because there’s not a lot of interoperability between these cloud platforms. And there have been proposals to mandate interoperability as a way to incentivize a little more competition between cloud providers.
TEDDY DOWNEY: And you talk about national security, resiliency. Isn’t it more of a risk if you’re all loaded up in one cloud provider as opposed to a multi-cloud system? Obviously, that becomes more complicated, and it’s harder if there’s no interoperability and things like that. But intellectually, it seems like you’re going to be more resilient if you’re not totally reliant on one cloud provider.
JACK CORRIGAN: One hundred percent. And actually, if you look at the government, a lot of agencies will require these multi-cloud environments for a particular program or a particular system in order to have that kind of resiliency. And right now, we just don’t have as much of that in the private sector, or at least in the AI industry.
TEDDY DOWNEY: And by the way, I want to give a quick shout out to your image. I basically have this image up on my desk at all times at this point. It is so well done. You know, you’ve got the choke point between the cloud computing platform and the model layer. And maybe you can tell us a little bit about how you think about that in particular being the choke point there.
JACK CORRIGAN: For sure. So, if you think about, again, developing an AI model, you need a lot of compute, and then you also need the algorithms and the data to all come together to build an AI system.
And, as we talked about earlier, because of the way that AI is being developed today under this “bigger is better” paradigm, building these really big foundation models is very expensive. For the most part, the top models in the U.S. at least have been developed by these incumbent firms.
So, when I think about the model layer, especially as of late, I’ve been thinking more about this: if I’m a developer and I’m trying to access a foundation model and fine-tune it to get to a particular application, there’s this choke point where, again, I need to rely not only on the compute power from one of these companies, but probably one of their foundation models too. And again, the incumbent firm gets to determine the terms upon which I access that model, the extent of tinkering that I could do with that model, the amount of auditing that I could do to understand how that model is working under the hood. And that, again, is just another point at which these firms can squeeze their customers.
Also, you need data to train these models. And these firms have access to massive troves of proprietary data that they can distribute as they see fit. So, again, that’s another way that they can potentially pick and choose which developers might have an advantage in the market.
And then if we go down to the application layer … I’m so happy to see it getting more attention now. But this final choke point is the distribution channels for AI systems. So, you could build a system and that’s all well and good. But if you’re a developer, you’re not going to be able to build a successful business unless you’re able to get your tool in front of a customer. And the channels through which developers access customers are also themselves pretty much controlled by these large tech platforms that have over the last 15-plus years come to dominate these different corners of the digital economy.
So, if you’re looking at consumer applications of AI, the way that you or I might access one of these tools is through an app store, through a smartphone, through a web browser, through a social media platform. Again, there’s one or two big players in each of those spaces. And they could, through self‑preferencing, through bundling, through all of these other mechanisms, put their preferred model in an advantageous position on those platforms.
Another area that has not gotten a ton of attention yet, but I think is going to be extremely relevant as AI tools become more broadly adopted, is these B2B distribution channels. If you’re a business that’s using AI, you probably already have a cloud provider. And the AI tools that you are going to access are going to be the ones that are provided through that cloud environment in which you’re used to operating. And if you look at the way that these cloud environments are structured, the interface for these software tools looks very, very similar to app stores, to online retail marketplaces, all of these different platforms that we know can be structured in a way that advantages certain products over others.
So, there are just many, many opportunities through these distribution channels for companies to preference whatever model or product they want to get in front of people. And there are a lot of ways that other developers can be squeezed.
TEDDY DOWNEY: Did you also think about how there’s a race to the bottom from a competition standpoint, where you’re sort of incentivized right now to violate IP, right? Obviously, there are costs associated with that, right? Eventually, you’re going to lose billions and potentially hundreds of billions of dollars in copyright lawsuits.
JACK CORRIGAN: Sure.
TEDDY DOWNEY: But right now you’re sort of incentivized in some ways, your model might perform better, if you’re using proprietary copyright material. And obviously, that could, if it undermines the news industry, could be a threat to democracy long term. So, when you think about national security, is the pirating and stealing of copyright, as an input in a model, a national security issue?
JACK CORRIGAN: I think so. I’ll admit we don’t really touch on this in the report. And I haven’t looked into this as much as I should. But I think, on an intellectual level, it makes a lot of sense. If you believe that a free press is essential to democracy, that open, free discourse and truth is important to the democratic process, then I think that any behavior that undermines that is a national security risk.
TEDDY DOWNEY: It just seems like having a way for copyright lawsuits to play out, as a way to punish the people that are stealing the IP, has a lot of value from that standpoint. Like, let’s say the FTC did a 6(b) and said, I need to see these models. And then you got some subpoenas. Or state AGs start having a concern. It seems like, at some point, there would be at least some pressure to compete on quality, where you can say: all right, I licensed this legally. This is a legal, higher-quality product where you’re not violating any IP. You don’t have to worry about violating IP because we account for that.
JACK CORRIGAN: Right.
TEDDY DOWNEY: But the disconnect between the timing of a copyright lawsuit, and when — I mean, eventually, it’ll come out. But you could already have just the big first mover advantage if you’ve been doing it, and everyone starts using your app, because it’s better because it has a lot of this copyright material in it.
Anyway, when you’re thinking about solutions, government investigations, to address all these choke points, is the first step getting access to these contracts so you can see exactly how the leverage is being used? Or how do you think about getting at the problem that the choke points are creating, or the potential future leverage that you talk about in the report: the leverage of vertically integrated Big Tech over the more specialized AI companies that aren’t comprehensively vertically integrated?
JACK CORRIGAN: Yeah, I mean, I think it’s a really interesting question. I think that transparency, in whatever way we can get it, is an absolutely crucial first step. Because a lot of this report is just speculating on ways that this power can be exerted. And then we have a few concrete examples, particularly related to the compute deals. But the more that we can look under the hood at the business models, at how the models themselves are being developed, at how copyrighted information is being used or not used, the more helpful it will be.
But I think that in addition to that, we really need to just understand … how these platforms and products can be manipulated in different ways to give preference to certain products over others.
In the recent Google search antitrust trial, this was really at the crux of the issue. There’ve been a bunch of investigations into what Amazon was doing on its marketplace to preference certain products over others. And I think that cracking down on that kind of behavior would really create space for these other types of tools that might be more respectful of copyrighted information or better in some other way … that would allow them to compete on the merits. Because right now, it’s like you can develop one of those systems … but given all of these other ways that the companies can exert control over the market, and the different advantages that these incumbents have, I don’t know that new developers would even be able to compete on the merits.
I hope this answers your question. I think transparency would help us target interventions, but I think that even now, particularly as it relates to the distribution channels, we kind of know what needs to be done. We have a rough idea of the way that market power would manifest itself, and I think that stepping in there would be a good thing that we could do in the short term.
TEDDY DOWNEY: So, obviously, a lot of ways that Google sort of locked in its market power was with these defaults, right?
JACK CORRIGAN: Yeah, yeah.
TEDDY DOWNEY: Was understanding the value of the defaults. And you can think of ChatGPT, AI chatbots, and things like that as maybe not a total replacement for search, but a slight alternative to it, or search-like in how people use it. So, you go there. You want to get information. You type it in. You’re going to stick with your default, probably.
Are you talking just like antitrust enforcers being super vigilant about default agreements and sort of trying to prohibit those? Are there other ways that we’re talking — I mean, obviously, we could go through the app store and all the different types of distribution channels. But just given that things are sort of free-ish now, the way that they’re operating, is that Google default analogy probably the best one in terms of what to watch out for? Or are there other things that you’re seeing that, hey, this is going on?
You mentioned app markets, online marketplaces. We’ve got the tried and true strategies. All of these companies are being sued for antitrust right now, except for Microsoft. But what else do you see besides defaults? Or if you had to come up with a top three or four to be on the lookout for or to do some enforcement on, what would you suggest?
JACK CORRIGAN: I think the defaults are a big one. Not to just repeat what I’ve already said, but I think that the self-preferencing on consumer app markets is a big problem, I think that the self‑preferencing on these B2B marketplaces is potentially a big problem. I think that scrutinizing the kind of pay-to-play stuff that we’ve seen on these platforms previously would be important. I think the challenge here is that, again, the AI market is so young.
TEDDY DOWNEY: You’re looking at the leverage and you’re saying, hey, there are going to be a lot of ways that they can use this leverage in the future. That’s really the crux of the paper.
JACK CORRIGAN: Right. And it’s like to some degree, all of these examples are fighting the last war because we’ve already seen the ways that market power can manifest. And I see a lot of similar ways that it can manifest in the AI space. But we don’t yet really know how AI tools are going to be used. We don’t know what industries they’re going to be most prevalent in quite yet. And I think that maintaining vigilance and maintaining the basic understanding of where these companies have power in the supply chain and different ways that they could exert it, I think is for the best.
TEDDY DOWNEY: Well, it’s an interesting market in that when you write this paper, and I’m reading it, there’s a lot of mentions of exclusive dealing.
JACK CORRIGAN: Yeah.
TEDDY DOWNEY: And you don’t necessarily always call it tying and bundling, but there’s tying and bundling. I mean, at every layer, there’s lots of this going on. And so, if you think about restraints of trade, there are just so many already. So, I feel like there’s a lot of material for DOJ and FTC to work with here —
JACK CORRIGAN: Very much so.
TEDDY DOWNEY: — already without necessarily having it be a mature market.
JACK CORRIGAN: Right.
TEDDY DOWNEY: I mean, that’s in some respects a result of what you said earlier. And we kind of get back to this initial point, which is that you don’t have all of these companies operating independently; they’re all connected. And if you look at the myriad ways that Big Tech is already intertwined with a lot of these innovative, smaller AI firms, it’s a crazy level of entanglement.
So, it’s probably a function of that in some respects. Whereas, if you had a lot of independent companies with independent money, you’d have less of that kind of material to work with potentially. I actually even wonder: how many multi-cloud AI software companies are there? They’re all kind of tied in with their specific cloud provider, right?
JACK CORRIGAN: Right. Yeah, there are a few that have struck these partnerships with multiple firms — Anthropic was working with Google and Amazon. But that’s just one. Your point is taken. The vast majority are very much integrated with a single provider. And that gives these providers a lot of opportunity to co-opt the work that’s being done externally and to do all the tying and bundling that we’ve been talking about, all of it.
TEDDY DOWNEY: And AI does come up a little bit in a lot of these cases that are already ongoing. It’s come up in Google in terms of what the remedies should be. Just having looked at this — and not, as you point out, kind of fighting the last war — do you think the remedies in this Big Tech litigation should try to address any kinds of conflicts of interest and things like that in the AI market as a way to deal with this potential problem down the road? Or do you see it more as like, hey, no, we’ve got to take another crack at this?
JACK CORRIGAN: No. I mean, I think that there are a lot of ways that we could structure remedies in current cases to prevent this extension of power into the AI market. And I think that hopefully this report, by tying it to some of the behavior that these companies have already engaged in, helps identify where the AI angle might come in on some of these remedies.
One thing that’s come up recently that’s somewhat related to this — and I guess this would be kind of a new thing — is that Meta announced this “partnership” that they were going to have with Scale AI. I think it was on the order of $14 billion or something. But Scale AI, their business is effectively data annotation and organization, this deeply unsexy but extremely important part of the AI production process. They’re the really big player there. And if that falls under the control of Meta, whether directly or indirectly — again, these partnerships are kind of a gray area — that, again, is just another way that these companies can exert market power.
And again, it looks very, very similar to what Facebook had done with WhatsApp and Instagram. And because of the way the deal was structured, it might not warrant the same kind of scrutiny that a more traditional M&A deal would get. But again, they are rerunning the playbook that they have been running for the last 15 years.
Take the DOJ’s proposed remedies in the Google case, where you have Chrome spinning off and requirements for this kind of data sharing with competitors. I think that’s a great way to maintain this openness that we want to see in the AI market and prevent these companies from using their existing assets to co-opt this space. Because with all of these choke points, the reason that the companies can exert control over them is because of the businesses that they’ve already established and the assets they’ve already accumulated. So, it’s just making sure that they can’t deploy those in a way that allows them to keep rerunning the playbook.
TEDDY DOWNEY: You mentioned in the piece specifically that Google is a player in absolutely every layer. They don’t have to rely on anyone else to run their AI playbook. They’re sort of a completely independent supply chain, theoretically, I guess.
So, are they the biggest concern here? Obviously, Amazon is bigger in terms of compute and cloud. Microsoft has the agreement with OpenAI. Are those the big three that you think deserve the most scrutiny here, the most worth watching? And are there any differences in how they could leverage their power? Or are they all just pretty super vertically integrated at this point?
JACK CORRIGAN: Yeah, I mean, I think I would add Meta to that list. They’re releasing these open models, the Llama models, so they can exert a lot of control in the model layer. And then just because of their dominance in social media, they can exert a lot of power in the distribution channel layer. I think that each of the companies is kind of unique. The goal of that whole chart was to highlight all of the areas that could be influenced. But each of these companies exerts different power over different areas.
So, I think Google is interesting because they are simultaneously a cloud provider, a model developer, and they control all of these really important distribution channels. I think Amazon is much more of a player at the infrastructure layer and the distribution channel layer. Again, if you consider these cloud environments to themselves be distribution channels, the fact that Amazon is the biggest cloud computing company on the planet gives them enormous power there.
The same with Microsoft. Even if they weren’t partnering with OpenAI, they’re the second biggest cloud provider in the world. So, they can control the chokepoint there. They also developed the most widely used enterprise software suite in the world. So, that’s another big distribution channel where we might see a lot of bundling.
So, I think the specifics vary company‑to‑company. But for the most part, they all touch on at least two of these choke points. I think Google is unique in that it really can develop, deploy, and distribute an AI system without relying on anybody else.
TEDDY DOWNEY: We haven’t talked too much about data. It comes up a lot in terms of, I think, people talking about the value of the AI models. You sort of need real-time data to keep them competitive or at a high level.
Ostensibly, some companies are doing licensing agreements with media companies. But obviously, you can get data from being a search provider or being a social media company. You can get data all different types of ways. How did that come up when you were looking at this, in terms of data being either an incumbent advantage or just weaponized generally in the market?
JACK CORRIGAN: Yeah, I mean, if you think about it in terms of incumbent advantage, just the sheer amount of proprietary data that each of these companies controls gives them a pretty significant advantage in developing these more tailored tools that require really high-quality data.
And then, if you consider third-party data — the stuff that’s out there that you need to pay to access — having all of this money just gives incumbents a pretty big advantage. And they can sweeten those deals and create exclusive arrangements. It’s kind of a boring way of looking at it, but it’s just the —
TEDDY DOWNEY: No, I think financial subsidization, cross-subsidization, all that stuff comes up a lot in the report. I mean, it’s kind of there throughout. It’s just like, well, they can just lose money for longer than everyone else.
JACK CORRIGAN: Yes, right. And they can leverage their capital to negotiate whatever terms they want with anyone that has something that they want. You know what I mean? The money goes a long way.
TEDDY DOWNEY: Yeah, that sounds like extortion. But let’s move on. Could you share your perspective on the prospects for antitrust action by DOJ and FTC in relation to market concentration in the AI sector? Last question here. So, that’s a good way to close things out. I mean, we could do this all day, I think.
JACK CORRIGAN: Yeah.
TEDDY DOWNEY: Let’s end on this. I mean, we’ve already got lawsuits against all of these companies, except Microsoft. And you’ve got an active investigation into Microsoft’s relationship with OpenAI. So, what about more action? Your thoughts? I guess, the Trump administration: how do you think about the Trump administration’s antitrust enforcers in terms of focusing on AI?
I mean, it is kind of interesting because you’ve got Congress kind of saying, hey, let’s take a hands-off approach. Maybe we’ll do a ten-year moratorium here, or a ban on AI regulation. You had all those DOGE Musk people in the government saying hands off AI, let’s use AI. But then you have this DOJ and FTC that’s deeply skeptical of Big Tech’s power, and all these ongoing lawsuits. So, it is a little bit of a confusing dynamic to think about. What’s your take? Is there interest here? Have you gotten interest in the paper so far? What do you think about the prospects for action here?
JACK CORRIGAN: That’s a good question. I think that we need to think about it in terms of two buckets. So, if you look at the pro-innovation, pro-AI, “hands-off” argument that you see a lot of Republicans making now, my reading is that a lot of that rhetoric is focused on AI governance, the safety regulations, and not necessarily the market power issues. I think that if you look at FTC and DOJ, antitrust enforcement is not incompatible with that 100-percent pro-innovation approach to AI. So, I’ll be very interested to see how that continues to play out. I mean, I think the fact that we have seen the FTC and the Justice Department continue some of these cases and investigations that were started under the Biden administration, that’s promising.
I think that the one point of tension that I will be interested in following is not so much the anti‑AI-governance rhetoric, but the focus on subsidizing scale. So, if you remember, I think it was back in January when Sam Altman announced this big Stargate project with the White House … that is connected to this idea within the national security community that we need scale, we need all of this money, we need all of this compute to continue developing more powerful AI. They think the government’s job should be removing any barriers to building the data centers and infrastructure that we need to continue pursuing scale. I think that that is where you’re going to see the real tension with the competition community. Because the people that are going to succeed if scale is just pursued relentlessly are going to be the Big Tech companies.
So, more so than the governance stuff, I think this tension between efficiency, small models and scale … that is going to be what really rubs up against the competition policy agenda. And I really don’t know how that’s going to play out.
TEDDY DOWNEY: I think reading your report and talking to you, I find it just ironic that the incentives right now for Big Tech are towards inefficiency, right?
JACK CORRIGAN: Yeah.
TEDDY DOWNEY: Like in terms of energy. Given that the whole argument historically for deregulating and weakening antitrust law and enforcement is that these companies are more efficient. And actually, you have a business incentive to be inefficient here, which I find just hilarious at some level. But I do think antitrust investigations have not deterred any real behavior by Big Tech.
However, I do think that this market is a little bit different, for the main reason that there are these hidden billions of dollars, potentially more, of copyright risk. And that has got to be a deeply concerning thing for any AI company if the FTC opens the hood, right?
You know, look, I mean, maybe nothing would come of it, but you’ve got to be a little bit nervous. You’re getting a subpoena for someone to look at your model. You’re getting a 6(b) to look at your model. All of a sudden — there are probably a lot of bodies buried there. I mean, I would imagine. Look, obviously, I don’t know. Maybe some of them aren’t violating any IP, but certainly the lawsuits to this point suggest otherwise. But maybe I’m being too cynical.
But look, this has been, as my conversations with you always are, super interesting. Everyone should read this report. I imagine it will be a landmark paper here for anyone looking at regulation, looking at policymaking, looking at strategic decision-making in this market going forward. And I can’t thank you enough for doing this. This was super, super interesting.
JACK CORRIGAN: Oh, this has been a blast. I really appreciate you guys having me on and taking an interest in the work.
TEDDY DOWNEY: All right. I look forward to staying in touch. And thanks to Jack. Thanks to everyone for joining the call today. This concludes the call. Bye-bye.