Jun 17, 2025
On May 29, The Capitol Forum held a conference call with Ketan Ahuja to discuss antitrust risk in OpenAI’s acquisition of Windsurf and what it means for competition and innovation in the AI ecosystem. The full transcript, which has been modified slightly for accuracy, can be found below.
TEDDY DOWNEY: Good morning, everyone. And welcome to our Conference Call on Antitrust Risks in the OpenAI-Windsurf Deal. I’m Teddy Downey, Executive Editor here at The Capitol Forum.
And today’s guest is Ketan Ahuja, a fellow at Harvard Kennedy School’s Growth Lab, where he leads a research team on green growth and works on antitrust and competition policy. His work has appeared with Cambridge University Press and the Harvard Kennedy School, as well as in the Financial Times, MSNBC, and ProMarket.
He’s going to be talking about how the OpenAI-Windsurf Deal could reshape the competitive landscape for AI coding tools. We’re going to be talking about Big Tech, antitrust cases, precedent, and just what we need for a healthy AI innovation ecosystem. And Ketan, thank you so much for doing this today.
KETAN AHUJA: Thank you, Teddy.
TEDDY DOWNEY: And before we get started, I just want to quickly mention, if you have questions, please put them in the questions pane in the control panel or email us at editorial@thecapitolforum.com, and we’ll get to that later in the call. Ketan, I found this ProMarket piece very compelling. Why don’t you start off with why you wrote this piece, how you came to writing that piece, and then we’ll get into the details from there?
KETAN AHUJA: Yeah, absolutely. So I think it all started 15 years ago with this wave of consolidation we saw in Big Tech. So, the giant tech platforms that we have today, they’re terrific companies, incredibly innovative. But many of them were also built through acquisition. And this is something that’s well known. Google, Facebook, Amazon, Microsoft, they rolled up more than 400 companies in the last 15 years or so. And they’re continuing to buy companies.
But some of the critical pieces of their market power that have been challenged in recent antitrust cases have been bolt-on acquisitions. So, in Facebook’s case, it obviously acquired Instagram and WhatsApp. And in Google’s case, much of its ad tech ecosystem was actually Google snapping up lots of really, really dynamic startups that were starting to create that ecosystem.
Now, tech ‘ecosystems’, it’s kind of an ‘in’ word. You can’t have an ecosystem of one company. Normally, it takes many different companies in the early stages of the technology to build out a system that’s interacting. One company typically can’t do it alone. And that’s actually what we’re seeing in AI nowadays.
So, probably two years ago, when ChatGPT first launched, everyone was really worried that AI models would be a really concentrated market because these models are so capital intensive. They cost billions of dollars to train. It only makes sense to have a few of them. And so people were worried AI models would be a natural monopoly within the tech stack. There would be various different natural monopolies. There’s obviously NVIDIA which has a really dominant position in chips. Some of the model providers have really strong positions in the general LLM models.
But as it turns out, it’s seeming like LLMs won’t be such a worrying source of market power. A lot of the value in AI is actually being created by companies that customize these models for particular use cases. They tune the models or they introduce the models into particular workflows. And so, it’s actually proving to be something that could be quite dynamic and create lots of opportunities for lots of different players.
But that said, there are a couple of really big market opportunities that are emerging. And one of them is coding. These models are by far most intensively used among software developers because they just code really well. There’s a lot of code on the internet. And all of the models just are really good at the language of coding and the syntax. And it saves developers enormous amounts of time. The models have actually got so good that they’re able to code autonomously. They can be really like junior programmers.
So, I think that has led to a host of companies that produce these coding agents that are downstream of the big model providers that we all know like OpenAI and Anthropic. That’s meant that some of these coding agent companies are really taking off. And they’re actually some of the fastest growing companies in the history of mankind.
The category leader is called Cursor. Windsurf is in second place. They create these integrated development environments that software developers then use; developers can plug in different models and let the models autonomously do coding tasks that they then direct, sometimes just using spoken language without even having to touch a keyboard all that much. So, it’s really an incredible technology. It’s creating a huge amount of value. And it’s kind of like the next opportunity, the biggest market that these models have unlocked at the moment.
So, connecting these two things, we can see this burgeoning ecosystem developing. But we can also look back to the history of our current Big Tech platforms that are mired in antitrust risks and how many of those risks were created by allowing the ecosystem to consolidate in its early years. And I think that dynamic very particularly plagues OpenAI’s acquisition of Windsurf, which is really where this piece came from.
TEDDY DOWNEY: Yeah, you write about the lessons from the Google Search monopolization and ad tech cases, and you briefly touched on that. But what is your takeaway from how the FTC and DOJ should think about these markets? What are the lessons from those cases that you take away that you think are applicable here?
KETAN AHUJA: In the last eight months or a year or so, we’ve had this big decision against Google in search and another big decision recently against Google’s ad tech monopoly. These decisions are very different, but quite complementary. Together they tell us why this OpenAI acquisition of Windsurf is so problematic.
Google Search’s case is all about how Google controlled distribution channels for its search function, and then used that control over distribution to foreclose competitors and monopolize the search market. At its heart is this model of human behavior where we don’t tend to change search providers. We tend to just go with the default.
And once we get used to doing a certain set of tasks, once we learn a user interface on a technology platform, on a computer or a phone or whatever, we tend to just do the same things by rote. They become very habit forming patterns.
And so, in Google Search, Google managed to monopolize all the distribution channels and used its money and resources to become the default provider on lots of different web browsers and on Android, and it obviously owns Chrome. And that meant that everyone just used Google, got accustomed to it. And it became very habit forming.
Now that’s relevant in this OpenAI deal because user acquisition is incredibly expensive in this online world. And in this new category of coding agents, people are getting used to these integrated development environments, getting used to particular workflows. This is how you become productive, and it just becomes how you fire up your computer in the morning: if you’re a software developer, you just get used to a certain workflow or set of technologies or stack that you use when you work.
And early in a technology’s life, in a new product category like this coding agent category, new companies can come in and claim some share of eyeballs or users or whatever, and they establish a base. That’s a huge asset for a company in the tech space if it doesn’t have to then go out and do user acquisition in an established space, which can be incredibly expensive.
Google Search tells us that we are dealing with real people here. They don’t behave like neoclassical consumers. We can use behavioral economics to understand that they don’t actually tend to switch technologies. And having a habituated user base that you capture, one that’s growing fast in a key space, can be an incredible competitive asset.
Google’s ad tech case is slightly different. It involved Google’s control over the ad tech ecosystem, which is very complex and has lots of different interlocking technologies. But it involves both the supply of ads, meaning the advertisers who use software to place ads on websites, and also the demand for ads, meaning the network of websites that have spaces for content and ads on the internet.
Google managed to buy up many of the key players in that ecosystem when it was developing. And that meant that Google could use its control over distribution or demand, and its control over supply, to favor its own products, basically self-preferencing and leveraging its control of different parts of the ecosystem to really own the whole stack. This created the monopoly that Google has today. There are lots of great comparisons in this case to the OpenAI-Windsurf acquisition as well.
So, OpenAI is a model developer. It also has a consumer product, ChatGPT, which is the biggest consumer product in that ecosystem. But it doesn’t have a huge presence in this coding agent space, which is new and is not being led by OpenAI. OpenAI is a fast follower in that space.
TEDDY DOWNEY: But they have their own company, correct?
KETAN AHUJA: Yes, they do.
TEDDY DOWNEY: Or they have their own division that does the coding and competes with Windsurf.
KETAN AHUJA: Yeah, they do. So, they have a new—they call it an experiment. But they have a new—it’s called Codex and it’s their own coding agent. It’s new. It’s not the category leader.
Anthropic’s Claude model is actually considered the best model to power coding software. And Anthropic actually has its own coding agent called Claude Code, which is getting a lot of traction. And then companies like Cursor and Windsurf that don’t produce their own models allow users to pick which model, which kind of large language model, they want to use.
So, you can use Windsurf, or previously used to be able to use Windsurf, with both Anthropic and also with OpenAI models, the GPT class of models. And you could also use Windsurf with DeepSeek or Mistral. So, it created an open interface where lots of model developers could plug in.
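To make that more concrete, here is a minimal sketch of what such a pluggable model interface could look like, written in TypeScript purely for illustration. The names below (ModelProvider, CodingAgent, complete) are hypothetical assumptions, not Windsurf’s actual API.

    // Illustrative sketch only; these names are hypothetical, not Windsurf's real interfaces.
    interface ModelProvider {
      id: string; // e.g. "openai-gpt", "anthropic-claude", "deepseek", "mistral"
      complete(prompt: string, options?: { maxTokens?: number }): Promise<string>;
    }

    // The coding agent is written against the interface, not any single vendor,
    // so any model developer can plug in behind it.
    class CodingAgent {
      constructor(private provider: ModelProvider) {}

      async suggestEdit(fileContents: string, instruction: string): Promise<string> {
        const prompt = `You are a coding assistant.\n${instruction}\n---\n${fileContents}`;
        return this.provider.complete(prompt, { maxTokens: 2048 });
      }
    }

Under that kind of design, swapping providers is a small change for the user, which is what keeps the layer open and interoperable in the way being described here.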
This has the same dynamics as in the Google AdTech case, where there’s this rich ecosystem of different players that use each other’s services interchangeably. And letting OpenAI muscle in downstream in the value chain really lets OpenAI control one of the distribution channels for its models.
TEDDY DOWNEY: And one question I had is, is the bigger issue that OpenAI has this sort of dominant LLM—you seem less concerned about competition in LLM space. But here is it that Windsurf is one of the top two and they’re going to leverage their power in the LLM to favor that one coding application? Or is it that the coding application is already pretty well-established and they’re going to use that dominance to favor the LLM? How do you think about where the market power lies in the space?
You know, your paper looks at LLM layer competition, application layer competition. But I’m curious how you think of that power dynamic evolving. Or is it just that, look, this is a nascent industry? You need to nip the problems in the bud and preserve this sort of interoperability. Just curious to get your thoughts on how you think about the power dynamic here working.
KETAN AHUJA: Yeah, absolutely. As you say, it’s a complex space. So, there’s this LLM layer. There’s the coding agent layer. And this is really about the interplay between the two layers. And you can think about it according to quite specific harms that will arise rather than this general—there are lots of emerging ecosystems. Sometimes mergers are good in them, sometimes they’re bad in them. In this case, there are quite specific harms that we should be worried about.
One of those harms is this classic antitrust analysis, which is OpenAI competes as an LLM model provider, but it also competes in the coding agent layer where it has its own Codex agent and it’s kind of a runner up in that layer. It’s not the front runner. It’s kind of a newcomer in that space.
According to a very traditional antitrust analysis, that would be concerning. Because OpenAI is like a direct horizontal competitor of Windsurf. And then it’s also one of Windsurf’s model providers. So, it has this vertical and horizontal relationship.
And then you bring in other model providers like Anthropic, the maker of Claude. Anthropic currently earns a lot of its revenue as a coding model used in these integrated development environments or through an API in other systems.
So, once OpenAI has control over that coding agent, then it can stop using Anthropic’s model. It can tie its coding agent to its own model and slowly shift more customers, more users, from Anthropic’s Claude models to its own GPT-class models, because it has control over distribution. So it’s a classic tying or vertical leveraging theory of harm that commonly occurs in antitrust cases.
But there’s a whole other set of reasons why we should be worried about this transaction that are not related—that traditional antitrust analysis doesn’t really capture all that well. And that’s all about how new innovation and new technologies require rich and supportive ecosystems.
Neoclassical economics looks at innovation as a question of incentives. So, to promote innovation you try and optimize the incentives of people to innovate. Antitrust has picked up that perspective and tries to get a structure of the market that optimizes incentives to invest and innovate.
But there’s a whole other question about ability which antitrust doesn’t capture well and neoclassical economics doesn’t have a very good language for. So, you can have all the incentives to innovate that you want, but you may not have the ability to innovate. I have just as much incentive as the developers of Windsurf to build a giant company that makes coding agents, but I don’t have the ability to because I’m not versed in that field.
So, this language of what enables innovation, what gives people the capabilities to innovate, is not one that’s well captured in antitrust. And what lots of research has found is that ecosystems with lots of different kinds of interlocking technologies are normally better at enabling lots of different innovators to take things in new directions and recombine technologies in new ways to make new products.
TEDDY DOWNEY: And by that you mean the ecosystem is open and interoperable? And the fear here is you have a good likelihood of losing that interoperability and then the whole ecosystem would be less innovative? Is that kind of what you’re saying? I don’t want to put words in your mouth, but I’m just trying to fully understand what you’re saying.
KETAN AHUJA: Yeah, that’s exactly it. So, there are these two theories of harm in my piece about this transaction. And one is this kind of classical antitrust, one about vertical leveraging and tying and whatever. And then the other one is about how if we want to build a future innovation ecosystem, having lots of players that are interoperable, many of which have the capabilities—and capabilities could include talent, understanding of these technology stacks, but it could also include a user base and a brand that lets you kind of have an established set of customers in this space. So, if we have many different companies that all have capabilities to innovate in AI, 10 years down the road, we’re going to have a much more dynamic innovation ecosystem that produces a lot more novel insights and technologies.
TEDDY DOWNEY: Another thing that comes up in these nascent markets is kind of the killer acquisition, where you’ve got to act early. You also seem to favor build versus buy, which is something that I think the antitrust enforcers have favored of late. I don’t want to put words in your mouth, but can you talk a little bit about that? You mention some of that preference in the paper: the cost of not doing something, of letting it go through, is that you’re letting them just buy their way in as opposed to having them compete. What’s the issue with that? Why do you have a problem with that?
KETAN AHUJA: I think one thing that we should acknowledge is that OpenAI is actually not the preferred model provider here. Its GPT-class models are not the preferred models for coding. Anthropic’s Claude models are the preferred models for coding at the moment for many developer communities. So, this is a use case in which OpenAI is not leading. And these two companies, the two front runners in the U.S. AI ecosystem, have had different strategies.
OpenAI’s strategy has been to create this consumer app, which is ChatGPT that we all know and love. Anthropic’s strategy has been less focused on creating its own app and more focused on becoming the preferred set of models for coding for developers, given that coding is such a huge use case for this class of technologies.
So, competition on the merits would really look like OpenAI improving its models for coding, trying to really outrun Claude, trying to get developers in interoperable systems to choose OpenAI of their own volition. And that’s what OpenAI is doing with its own coding agent, its Codex system. They have the resources and capabilities to build their own systems and compete on the merits.
But OpenAI is much better resourced than any other company in this AI ecosystem. They’ve just raised $40 billion. And so, if you’re OpenAI, why not just muscle your way into the coding space? They tried to buy Cursor, which is the category leader and currently sends a lot of revenue to Claude and Anthropic because developers choose the Anthropic LLMs through Cursor. But Cursor said, no, they didn’t want to sell. So, then they bought Windsurf, which is in second place, and also sent a lot of revenue towards Claude and Anthropic.
So, I think, we say antitrust is about promoting competition on the merits. And I think we should let OpenAI compete on the merits rather than try and use its extra resources to muscle into this downstream market and try and sideline its competitor in the LLM layer of the tech stack, which is Anthropic.
TEDDY DOWNEY: If you’re the agency and you do a second request or you want more information here, what other areas would you want to poke around on to see if there are potential bottlenecks in the AI space? I sent it to you earlier, but there’s this paper, “Promoting AI Innovation Through Competition: A Guide to Managing Market Power,” by Jack Corrigan. He’s at the Center for Security and Emerging Technology, a Georgetown-affiliated center.
And he mentions access to data as a potential choke point. Is there a data play here? You mentioned it more as customer habit and how people develop their workflows. But is there also value in access to all this data, cutting off data, controlling data? How do you think about data in the AI supply chain, I guess you’d call it, as a potential choke point? Is that worth further scrutiny potentially?
KETAN AHUJA: For sure. I think there’s a huge amount of concern about data. And these models, they’re trained on the whole internet. And a lot of data on how to solve coding problems is public and on the internet. So, people use Stack Overflow. They solve problems in this way and you can use that to train the models.
But you can also use the problems that the models encounter in regular day-to-day use by software developers. You can use that to refine your training as well. Customers using your product is a key source of data.
And then these model companies will also hire software developers to solve specific problems that the models can’t solve well. So, they just hire people to specifically train the models as well. And that’s a proprietary source of data.
It’s a good question what the agencies would look for in a second request. I think I’d definitely investigate that data question.
One thing that’s been incredibly valuable in some of these recent cases against Big Tech companies has been disclosure of emails around commercial strategy. You have these smoking gun emails from Mark Zuckerberg around the acquisition of Instagram, about why Facebook needed to buy this competitive threat. Whether or not that is the case in this OpenAI acquisition, I’d do some discovery around that.
TEDDY DOWNEY: Can we stay on that for a second? I mean, in terms of why they would be buying it, ostensibly it’s because they’re already dominant in the chat bot space, but they’re a laggard in the coding space and they want to ensure that they are at the top, preserve their position at the top, the sort of dominant LLM or most favored LLM, what have you. From a strategic standpoint, what do you think is the reason they’re buying them? We’ve sort of touched on this, but just from a corporate strategy standpoint, why do you think it makes sense for them to do this?
KETAN AHUJA: Absolutely, I think it’s a great question. One thing that’s come out since this Windsurf announcement has been OpenAI’s offer to buy this other startup, IO, which is from Jony Ive, and they’re making consumer hardware for AI systems.
We don’t know exactly what they’re making. It’s not public yet. But it’s, I guess, some electronic piece that will fit in your pocket, doesn’t have a screen, and you can interact with it through other media, maybe voice or whatever. And then it can be your assistant that you carry around.
A new large consumer market in tech doesn’t come along very often. It came along in the late ’90s and 2000s with things like Google Search, Facebook and the iPhone. But there really hasn’t been one in a long time that is something that everyone uses all the time.
But AI is creating lots of new large consumer markets in tech. And what we can see with OpenAI is that these two acquisitions, Windsurf and IO, are a strategy to really own the consumer-facing spaces, to try and be the front runner in these consumer applications, whether your user group is software developers or everyone who wants an assistant in their pocket or a chat interface.
So, what’s great about the chat interface is that it was a category they really created and they’ve made the best product there. And it’s just a terrific thing that’s creating a lot of productivity for a lot of people.
But innovation happens because lots of people have ideas in a distributed way. And that’s what’s creating these new categories as well around things like coding agents and AI assistants that you put in your pocket. And OpenAI is trying to buy its way into these other categories that other people are creating and really claim a lot of the distribution for their models, but also claim a lot of the user facing platforms that people get habituated to and have a hard time switching from.
So, I’d say that’s their commercial strategy. Or at least that’s what I assume it is. And that’s what it would be if I were in that position. And I think it’s also a very dangerous thing to allow to happen very early in the stages of a market. We’ve seen that with Google and Amazon and Facebook. So, I think we can just learn from our history.
TEDDY DOWNEY: Obviously, there’s been concern about Big Tech’s affiliation with all of these models. You’ve got Microsoft, Google, Facebook, Amazon, all involved, intertwined with these AI companies in one way or another. We mentioned access to data. That obviously is a big advantage. Computing power, that’s a big advantage for Big Tech. Are those two areas that you would want—let’s say there’s a more sprawling investigation into these AI platforms—where would you think DOJ and FTC should think about Big Tech’s influence or their ability to distort a competitive market, if at all?
I know you mentioned you’re optimistic about the LLMs being more competitive because you’ve got open source. You’ve got a lot of different varieties. It feels like less of a bottleneck, but you do have these Big Tech companies sort of looming in the background. And I’m particularly interested in this, you know, right now a lot of these companies are just committing potentially rampant copyright violations. But if you get in a world where you cannot steal content, you have to license it, the data then becomes a little bit more either expensive or your own data becomes more valuable.
I’m interested in your thoughts on that as well. If we move into a world that’s more respectful of copyright, if that creates any advantages, disadvantages. You mentioned resources. You mentioned billions of dollars, market capitalization, things like that, as helping people. But I could easily think of that as a potential issue as well. So, would love to get your thoughts on all that. I know that’s a lot to throw at you, but I’m sort of trying to take a step back and look at things a little bit more holistically as far as Big Tech’s concerned, get your thoughts on.
KETAN AHUJA: Absolutely, yeah. I think one thing that’s interesting is we’ve been speaking for more than half an hour and this is the first time we’ve really brought in these massive tech companies and their work in AI. And I think that’s significant because what it shows is these tech companies are incredible in the amounts of capital they can invest, but they’re actually not leading in any of these new spaces.
And innovation tends to come from small, scrappy organizations where the incentives – I mean, not necessarily small, but it comes from scrappy organizations where the incentives are aligned and they can see new niches. If you’re a career software developer or researcher in a Big Tech company, you can do great research and the tech company has all the resources it can throw at you. But your incentives aren’t necessarily aligned and the corporate politics get in the way of really creating new products.
With all their resources, you might wonder why is there even space for these other companies like OpenAI and Anthropic and Windsurf and Cursor? Why are they even in the show at all when Google has like hundreds of billions of dollars that it has put behind these technologies? Google had these chatbot things long before OpenAI launched ChatGPT. And Google developed a lot of the foundational technology for LLMs.
So, I think, I’m actually less concerned about Big Tech really muscling its way in and monopolizing the space through its own effort. If we’d been talking today about Big Tech’s acquisitions of Windsurf and Cursor and whatever, then I would have been worried. I don’t think we should allow them to buy up these companies.
But I think, let them be competitors in the space. Let them use their resources to develop new technologies and improve large language models, which they do incredibly well. But applying them in different situations, it takes something else.
We’ve actually seen this time and time again. Large industrial monopolies have incredible resources to put behind innovation, but tend not to be that good at commercializing all the time. AT&T’s Bell Labs created the transistor and a whole host of technologies that are critical to the modern world. But it didn’t really commercialize any of them. It had the best researchers, the best research labs, all the funding to try things out. And it did incredible good for America and for the world to be allowed to try its best in those spaces.
I think I’m very happy for Big Tech to be a competitor. I don’t think that policy should do that much to restrain them from funding their own initiatives in this space. I think policy should restrain them from muscling in by controlling distribution, nudging people to use their products instead of those of competitors, and buying up these new startups, which is something antitrust has always had a role in policing.
TEDDY DOWNEY: One thing we’re also seeing a lot of is the LLMs effectively being allowed to use search, right? Like search in particular, people are thinking that, oh, well, search will be replaced by ChatGPT and things like that. But one of the things you’re hearing is that these LLMs, they use search to like go out—and they use the index to go out and get more information and make it more relevant in a more immediate way.
Do you see that as a potential choke point at some point? In order to make these models really better, access to that search index seems critical. And you’re seeing this come up in the remedy discussions in Google Search. You’re seeing DOJ have some concerns about Google’s market power in AI. I know you just mentioned that, look, let them compete. But there are just a lot of different ways that Big Tech, again, like I said, the data, the search, plays a role certainly in the supply chain. I’m curious if you’ve thought about that at all.
KETAN AHUJA: Yeah, absolutely. No, I mean, let them compete in terms of creating technologies, trying to commercialize them. Be very careful about how Big Tech might muscle their way in using their existing platforms and using their existing user base or distribution channels. I mean, that’s definitely a lesson from Google Search and you’re very right to bring it up.
I think people just go to—we go to our browser. We go to Google. We go to Perplexity or wherever. We go to these places to find stuff and we do it by habit. We don’t even think about it. It’s just the place we go to, to find stuff. And it’s very hard for a new company to bring users over in these spaces.
So, I think what we’re going to find is that the places where people go to find stuff or the apps that people get used to using become choke points or gateways to the rest of the world of human knowledge through the internet or they really become kind of the platforms. And it’s easy to think of Amazon as a platform. You know, Amazon is like a bunch of different tiles where people click on different products and it just looks like a platform. And Google’s Search window historically, as it used to be, was also a platform. Because it was like a bunch of different hyperlinks and you just—it’s clearly a kind of aggregator that you use to access lots of other different things.
What’s a little trickier is these LLMs are going to become—the text that they generate is going to become the platform. We’re going to have to think more creatively about what it means to actually control distribution or control eyeballs or be a platform in this space.
TEDDY DOWNEY: I want to get back to specifically these applications. How does it work? Is there like an analogous app store for these developers when they go in? Or is there a layer on top of the Windsurf where you can buy lots of apps? Or how does it really work? Do you just go to Windsurf and that’s what you work with, you buy the software? I’m just curious. We’re so used to these app stores and these like layers on top of the apps that extract rent. Does that occur now? Is it an open platform now?
How do developers purchase or use this stuff? And do you see any concern that buying up—we talked about it before, but sort of more high-level. But I’m curious if you could get into the weeds of exactly how the payments and uses of the software occur, or anything worth talking about there?
KETAN AHUJA: Yeah, absolutely. So, I’ve used Cursor. I haven’t actually used the Windsurf interface. But it’s exactly as you say. So, you’ve got a software development window. It has a place where you write code. It has a place where you run code. And you can see outputs.
And then in the corner, it has this little place where you can select which large language model you want to use to power your agent. So, it’s just literally a dropdown menu and you just kind of select which thing you’re going to use. And that becomes the kind of gateway to controlling or shaping user demand. And it does a lot to shift the default for that from one model to another.
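As a rough illustration of how much weight that dropdown carries, here is a minimal TypeScript sketch of the default-selection pattern being described. The names (EditorSettings, resolveModel) are hypothetical, not taken from any real product.

    // Hypothetical sketch of the dropdown-plus-default pattern; names are illustrative.
    type ModelId = "anthropic-claude" | "openai-gpt" | "deepseek" | "mistral";

    interface EditorSettings {
      availableModels: ModelId[]; // what the dropdown lists
      defaultModel: ModelId;      // what is pre-selected when the editor starts
    }

    // Most users never change the pre-selected value, so whoever controls
    // defaultModel effectively steers which LLM receives the bulk of the usage.
    function resolveModel(settings: EditorSettings, userChoice?: ModelId): ModelId {
      return userChoice ?? settings.defaultModel;
    }

In a setup like this, shipping with the default pointed at one provider is all it takes to shift usage toward that provider, which is the dynamic discussed next.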
TEDDY DOWNEY: So, I see. So, it’s almost control of the default. It’s almost more analogous to the browser in some respects, because it’s how you are doing your coding. You’ve got this assistant pulled up and you can just really easily choose a different one. But if all of a sudden they hide that and your default is OpenAI, you don’t think about it, that would be a more closed, less interoperable ecosystem, for example.
KETAN AHUJA: Yeah, definitely. And what we found in Google Search, and what behavioral economics tells us, is that people are satisficers. We tend to stick with something if it does well enough. And we don’t necessarily always optimize for the exact best option so long as what we have is working okay, particularly in spaces where quality is very hard to assess.
So, if you’ve got this AI agent that’s writing a thousand lines of code and you kind of run it for an hour and then you go read it through and it looks like it’s okay and you make minor edits, you’re not going to then go do it again and test if maybe another AI agent is a better coding agent.
Who’s to say what this space looks like three or five years from now? But it has developed in a way that was actually quite good for competition because it was very interoperable: it allowed people to just choose their favorite model. Allowing money and ties to particular model companies warps the incentives in the space in ways that are not necessarily good for users or good for competition.
TEDDY DOWNEY: And we’ve seen the myriad ways where you could favor your default as well, your ecosystem. So, Apple has all these ways of sort of delaying other technology from being interoperable with iPhones. And you could see that OpenAI may be making it run a little slower when you choose another LLM.
You know, you just make it a little bit, just slightly more annoying. You’ve got to log in every time you want to use a different LLM. I could imagine a lot of small nuisances that would steer people to preferring the sort of OpenAI owned, or the OpenAI LLM, as opposed to one of the other ones. And in many cases, this goes back to Microsoft, right? Like you’re talking about sort of the browser effect here and how that sort of is a gateway to a new ecosystem. Well, that’s very interesting.
Anything else that you think we should chat about? I’m sort of out of questions. I don’t see any questions in the queue at all. Haven’t gotten any questions online.
KETAN AHUJA: Yeah.
TEDDY DOWNEY: Everyone’s, I don’t know. They’re ready to head to the beach or something already. It’s Thursday. We don’t have a lot of questions. But anything you want to touch on before we let you go?
KETAN AHUJA: Yeah, absolutely. One thing that came up is this industrial policy versus competition question. You talked about it a bit with these Big Tech companies. So, you’ve got these giant technology companies that can really invest in the space, but also we’re worried about them owning the space as well.
And often, one of the things that has really kind of stymied antitrust action at critical times has been this idea that we need to compete with China or we need to compete with whatever our competitor is at the time and build the best in class technologies in that space. And we need these giant national champions to do it.
And what we’ve found is that greater competition in an open ecosystem that enables lots of players leads to better innovation; whoever has a good idea can come and engage in the system. You don’t necessarily need to have Google’s capital, and Google doesn’t have a monopoly on good ideas. That tends to actually be better.
So, we associate industrial strength with the might of a large company. But actually creating a competitive ecosystem with lots of different players is a much bigger driver of industrial progress. And I think that’s critical for when we think about our strategic competition with China and with other strategic competitors in AI, which is the technology of the future. Keeping an ecosystem open and accessible and preventing it from being monopolized is really critical to helping us win that race.
You know, you can see that small companies can do wonders. We’ve had this DeepSeek announcement in China, where a little company developed a much cheaper way to build these models. And it really shook a lot of people’s assumptions about how to go about developing an AI industry.
You see it as well in lots of different sectors. Reliance on one big company that claims it can do the best at anything and claims that it’s the best positioned company to lead the U.S.’s AI industry is a very—it’s just not a winning argument. You know, China has been so successful in producing electric vehicles because it’s funded literally hundreds of EV companies that all compete with each other.
It becomes like the Hunger Games, with these companies trying to outdo each other. And then in the process, they build an industry that can then go and dominate the world. Big companies become big, and great companies become great, by being good at what they do. You don’t make them big so that they can then become great. And if OpenAI wants to be the best at coding agents, it should do so by producing the best products and becoming great in the coding agent space. And then it will become big in that space. You don’t need to knock together a lot of these different companies to create synergies so that you end up with a big company that is not necessarily great.
TEDDY DOWNEY: Yeah, yeah. The sort of open competition leads you to get the best innovation, the best company that will ultimately perform the best over time internationally. We’ve got one question. I think this is a good question to end on. We’ve sort of discussed this a lot, but good to kind of have this question I think. Can you explain why you think the merger reduces the ability rather than the incentive to innovate? Arguably post-merger, Windsurf has access to different capabilities and technologies from OpenAI.
KETAN AHUJA: That’s a terrific question. And what you see is actually this trade-off. It’s a very real trade-off that needs to be evaluated on the facts of any case.
So, of course, Windsurf has more capabilities when it can access OpenAI’s technology. And OpenAI has more capabilities when it can access Windsurf’s distribution channels. So, together, they’re both stronger.
But when you look at capabilities in the ecosystem as a whole, there’s a trade-off because the ecosystem is weaker. Other models, other LLM developers, and other companies that might use or interact with or engage with Windsurf in some way no longer have access to Windsurf’s distribution channels. So, you’re really trading off the synergies from combining these two sets of capabilities against the capabilities embedded in the broader ecosystem, which come from having more independent players that are themselves capable and can interface with each other.
The questions to ask are really, do you need to co-develop a technology? Is it necessary for an innovation to occur to combine these two companies? Or are you better off keeping them independent and ensuring lots of people can access this open innovation ecosystem?
And here’s actually where I distinguish Windsurf from OpenAI’s acquisition of IO. IO doesn’t have a product yet. They haven’t commercialized anything. Maybe you do need this really tight integration between the hardware developer and the software developer to create a new product category. We certainly saw that with the iPhone where Apple is kind of like the only vertically integrated smartphone developer because they produce both the iPhone and a lot of the software that goes into it. And maybe it was necessary to have that tight integration to create the new product category in the first place. And then that enabled Android and these other smartphone developers to come in with open source software.
But in Windsurf’s case, that’s clearly not the case. Windsurf has created this new product category. It’s leading it. OpenAI is the one that’s trying to catch up in coding agents with its own applications. And so, it doesn’t seem necessary to combine these two companies to create a new kind of branch of innovation or new product category. Instead, it seems like the risks are greater in depriving everyone else in the ecosystem of the ability to innovate through Windsurf’s platform.
TEDDY DOWNEY: And we just got through talking about a whole higher level perspective, which is that innovation comes from a competitive ecosystem, an open competitive environment. That’s where you get the innovation from. Innovation just doesn’t come from like one company necessarily just saying I innovate now because I am big and have resources.
I mean, I don’t want to put words in your mouth, but that was my takeaway from what you just said and from the conversation before. The question of perspective seems to be: is it companies that innovate, or is it these competitive open ecosystems that ultimately result in innovation? And you’re losing obviously a competitor, right? You’re losing one competitor. You’ve got two right now at the very minimum. You’re losing one of those, essentially, effectively taking one out of that ecosystem. So, at a minimum, you’re getting one less place to get innovation.
Well, listen, this was amazing. I know you do green technology work as well. I look forward to finding a way to have a conversation with you on that down the road. I think this AI stuff is super fascinating. I know the agencies are continuing to grapple with how to ensure competition in this space going forward. I think your piece is a very important contribution to that dialogue.
And thank you so much for doing this today. It was a real pleasure to meet you and chat with you. And I look forward to continuing this conversation if you are amenable to that.
KETAN AHUJA: Yeah, absolutely. Likewise, Teddy, really appreciated the chat. Thanks so much.
TEDDY DOWNEY: And thanks to everyone for joining us today. This concludes the call. Bye-bye.