Apr 12, 2026
On April 6, The Capitol Forum held a conference call with Asad Ramzanali, Director of AI and Technology Policy at the Vanderbilt Policy Accelerator, to discuss his recent paper, “After the AI Crash”, including the financial and policy risks surrounding the current AI investment cycle and what could follow a potential correction. The full transcript, which has been modified slightly for accuracy, can be found below.
TEDDY DOWNEY: And welcome. I’m Teddy Downey, Executive Editor here at The Capitol Forum. Today I am very pleased to be joined by Asad Ramzanali, Director of AI and Technology Policy at the Vanderbilt Policy Accelerator. Asad previously served as Chief of Staff and Deputy Director for Strategy at the White House Office of Science and Technology Policy, where he worked on national technology and innovation policy. He is the author of the recent paper, “After the AI Crash”, which examines the financial and policy risks surrounding the current AI investment cycle. Asad, thank you so much for doing this today.
ASAD RAMZANALI: Thanks for having me.
TEDDY DOWNEY: So, you know this: I thought this was an absolutely incredible paper. It covers so much ground, and it must have required so much research. Can you walk us through how you came up with the idea and how you went about writing it? It's so impressive. I would love to get a little bit of the backstory.
ASAD RAMZANALI: Yeah, many of your listeners will remember that towards the end of last year, there was a whole set of "are we in a bubble?" conversations happening. And the tenor of those conversations was a bunch of financial analysis: Is this a bubble? Are bubbles bad? Should we be worried? Should we not be worried?
And the conversation that I and a couple of others internally started having was: if it is a bubble, what kind of harms are we talking about? And are we prepared as a policy community for the kind of bubble that could actually cause real harms? One of my colleagues, Ganesh Sitaraman, who leads our center, was around in 2008 working on a lot of the response then. And we started talking about it: if there is a crash, we need to be prepared with a policy response commensurate in style and size with what might be happening. So, that's how we got into it, and at the end of last year, we started writing.
TEDDY DOWNEY: And in the paper you walk through a series of risks. Which ones really stand out to you as the big ones that our audience should be mindful of? Because of the way that you lay them out, I'll be honest, when I finished reading the paper, I was materially more concerned that this bubble popping was not just kind of an inevitability, but a little more imminent than I had previously thought. I would love to get your perspective: is that where you ended up when you finished the research? Did it feel to you that it was more likely to happen? And then, what risks really stand out to you as jumping off the page?
ASAD RAMZANALI: Yeah, that’s the cycle I went through in my research process, right? Of going from, oh, there’s something here. Like, I need to read more about this. And the reading started with the New York Times articles, and then the Atlantic long form articles, and the New Yorker wrote something. And you kind of go deeper and deeper. And Teddy, I mentioned this to you earlier, but I started my career more in a finance-y world. So, I started going to, like, let me understand what the investors are saying. Let me understand what folks who have skin in the game, how they’re thinking about this. And that’s when a bunch of pieces started coming together.
To answer your question directly, the risks I worry most about are (1) the things we don't fully understand. Private credit is a space that we don't fully understand. There's a lot of opacity. And this is where, and I try not to go too jargony here, you have a special purpose vehicle backed by private credit, and that's what we call a data center owned by Meta, but it's not really owned by Meta. Meta doesn't have the loans. It's not an asset on their balance sheet. And that's a $27 billion facility in Louisiana.
That’s when you start to say, wait, how do these things connect? And for a lot of people, when they hear private credit, they think, yeah, but that’s somebody else’s money, right? Like, that’s a private investor’s problem. But private credit takes a lot of investment from your 401k, from your IRA, from your parents’ pension, from their life insurance. And so, that’s when we all start becoming on the hook. These are the things that we know about because really good researchers and really good journalists have done the hard work to uncover them.
I worry about the things we don’t fully understand yet, right? These are the things that I could pull together in a couple months of writing, but there’s probably threads that I’m not able to pull together because the reporting’s not there. So, that’s one big worry.
And the other big risk in my mind, and the reason for this paper, is that we are caught flat-footed. That if something like this happens, there's going to be a lot of harm, but the policy community doesn't know what to do or how to think about it. So, the purpose of this is to say, let's have that discussion now: if something like a financial crisis caused by overinvestment in AI happens, what do we do?
TEDDY DOWNEY: Before we get to the what do we do, I want to stay on the risks a little longer. You mentioned in the paper Meta having this $27 billion of off-balance-sheet debt for a data center. Blue Owl is tied in with that. Obviously, Blue Owl has, since you wrote the paper, gotten into a considerable amount of trouble financially. And in the paper you mention there are some estimates out there of how many other off-balance-sheet investments exist in the space. What's your sense of how big that problem is? Were there any dollar estimates you found credible, putting a number on what we don't know is out there? And what about Blue Owl and the fallout from the squeeze it's enduring right now?
ASAD RAMZANALI: Yeah. So, the numbers I’ve seen put total private credit going into data centers north of $100 billion, but I don’t go more granular than that because it is hard to measure this stuff. And even the reporting that does exist happens sometimes quarterly or a couple times a year.
For those who aren’t following it all closely, Blue Owl is a creditor in this space. They’re the ones who are backing that Meta data center in Louisiana that’s really a special purpose vehicle. And the thing that’s happened there is private credit often takes money from investors and offers liquidity where you can pull money out quickly. And they, and actually a couple others too, have had to change those terms to where their investors aren’t able to pull out money at the rate they might want to because they’re seeing too many people ask for money to go out.
And so, that’s where you start to worry about, well, what is that? And why are those investors worried? And if I was one of those investors, if I was an institutional investor, the thing that might not have been obvious four months ago, but has started to emerge because of a lot of the reporting, is you yourself as a limited partner, as an institutional investor, you’re trying to diversify. So, you’ve got money in different asset classes in different categories.
And then one of the things that is starting to emerge is that if you're in private credit and you're in banks and you're in corporate bonds and you're in equity, all of that is pointing towards data centers, because of how much of the stock market data centers now make up and because corporate bond issuance is hitting record levels on the back of the tech companies. And so, that to me is one of those places where a wonky, technical financial concept, correlated risk, shows up. You start to worry that your diversification strategy may not actually be diversifying. So, that's one of the risks that starts to appear there and where I'm hearing people informally express some concern.
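[Editor's note: To make the correlated-risk point concrete, here is a minimal, hypothetical sketch. The exposures and volatilities are illustrative numbers, not figures from the paper. If private credit, bank equity, corporate bonds, and tech equity all load on a single data-center factor, an equal-weight portfolio across them barely reduces volatility.]

```python
import numpy as np

rng = np.random.default_rng(0)

# One common "data center" factor drives returns in nominally
# distinct asset classes. All numbers are hypothetical.
n_periods = 10_000
factor = rng.normal(0.0, 0.04, n_periods)

asset_classes = {
    # name: (exposure to the common factor, idiosyncratic volatility)
    "private_credit":  (0.9, 0.01),
    "bank_equity":     (0.7, 0.02),
    "corporate_bonds": (0.6, 0.01),
    "tech_equity":     (1.0, 0.03),
}

returns = {
    name: beta * factor + rng.normal(0.0, idio, n_periods)
    for name, (beta, idio) in asset_classes.items()
}

# Equal-weight "diversified" portfolio across the four classes.
portfolio = np.mean(list(returns.values()), axis=0)

# If the four classes were independent, portfolio volatility would fall
# by roughly half (1/sqrt(4)); with a shared factor, it barely falls.
avg_single = np.mean([r.std() for r in returns.values()])
print(f"average single-class volatility:   {avg_single:.4f}")
print(f"equal-weight portfolio volatility: {portfolio.std():.4f}")
```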
TEDDY DOWNEY: And when you mention feeling diversified when you’re not, I mean, that definitely brings back 2008 for sure.
ASAD RAMZANALI: Totally.
TEDDY DOWNEY: Your paper mentions that at the outset. We have a question here from the audience. We’ll get to questions later. If you have a question, put it in the questions pane or email us at editorial@thecapitolforum.com. We’ll try to get to it.
But I think the question we got from the audience is: are AI companies parallel to banks in 2008? How should investors and policymakers think about systemic risk in this context? Your point right there about diversification really does echo 2008 and the audience member's question.
ASAD RAMZANALI: That’s right. I think the scenario looks a lot like 2008. And I should say at the outset, look, I’m trying to do this as kind of a sober analysis of what I see. I hope I’m wrong, right? The scenario of 2008, the number of people who lost jobs, lost their savings. I lived through it. It was painful, right? And I truly hope I’m wrong.
But while the scenario looks like 2008, I don't think the parallel of the tech companies as the banks quite holds, because you don't have something like a run on the banks. And you still have a financial layer involved here: the private creditors. And a lot of the private creditors' own biggest investors are the banks, who themselves try to diversify. So, it goes to places where my confidence level about what's going on two levels down gets lower and lower, because we don't have all the data we'd need to do that analysis.
But the parallel that I tend to think of is that in 2008, one of the core problems was that a crisis in housing, which is a massive part of the economy, turned into a financial crash that impacted every person and every industry. That's the thing I'm worried about: a data center or AI crash that cascades, because everything is intertangled in ways that are really hard to understand until you're looking back at it. That's how we understood the housing problem: everyone looked back at it and said, oh, this is how everything was connected. That's what I'm worried about. We're seeing parallels that look like that.
TEDDY DOWNEY: In the piece, you also mention some accounting that reminds you of Enron-style accounting. You mention equity investments in customers, as opposed to customer financing, which is a little more typical. You also mention neoclouds and how neoclouds are using credits to pay for credits, which sounds really dubious. Can you walk us through some of the accounting issues? How bad are these? How worrisome? And are any specific companies particularly exposed to these kinds of accounting shenanigans? You mention Microsoft being tied to the neoclouds in the paper.
ASAD RAMZANALI: Yeah. So, there’s a few different things going on. I’ll start with the circular investments that are happening. We’ve seen vendor-based finance at scale for centuries, right? So, when farmers want to buy equipment, that is a lumpy cost that may not align with when their money is coming in. So, often they get a loan from their vendor to pay it off. And that has all types of problems, but that’s not a systemic, everybody is paying kind of risk.
As far as I can tell, we haven't seen vendors investing in unprofitable customers at anything like this scale. And that's what gets worrisome. Because NVIDIA is invested in OpenAI. OpenAI is invested in AMD. OpenAI is invested in many of its customers. It has a separate startup fund that is actually controlled by Sam Altman, that isn't OpenAI but is called OpenAI. So, the arrows are circular. And the problem I see is when a vendor gives money to their customer and that money is used to buy from the vendor, that pumps up revenues in ways the market might not do naturally. So, that creates a question in my mind.
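[Editor's note: A stylized, hypothetical illustration of the revenue question. The dollar figures are invented for the example and are not drawn from any company's filings.]

```python
# Stylized, hypothetical illustration of circular vendor financing:
# a vendor invests cash in an unprofitable customer, and the customer
# spends much of that cash buying the vendor's product, which the
# vendor then books as revenue.

vendor_investment = 10.0   # $B of equity the vendor puts into the customer
share_recycled = 0.8       # fraction of that cash spent back on the vendor
organic_purchases = 2.0    # $B the customer would have bought anyway

recycled_revenue = vendor_investment * share_recycled
reported_revenue = organic_purchases + recycled_revenue

print(f"reported vendor revenue:     ${reported_revenue:.1f}B")
print(f"organically demanded:        ${organic_purchases:.1f}B")
print(f"funded by vendor's own cash: ${recycled_revenue:.1f}B "
      f"({recycled_revenue / reported_revenue:.0%} of reported)")
```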
If I’m an investor, the thing I worry about is if you’re investing in a chip company, you’re trying to get exposure to the chip market. You’re not trying to get exposure to a million other things. And if those million other things end up on the balance sheet of the chip company, that muddies your investment idea of what you’re trying to do there.
So, that's one. Neoclouds are a group of firms, a couple public, most of them private, that basically take on excess capacity needs for the big hyperscalers. The big hyperscalers are Google, Amazon, Microsoft, Oracle, and Meta, with the caveat that Meta and Google tend to build their own internally.
But those companies, when they run out of capacity, shift their capacity needs to companies like CoreWeave. Many of those companies used to be crypto miners. So, they have data centers and computing assets sitting around that they then sell as a service, effectively back to the hyperscalers.
Now, the biggest investor in these neoclouds is, of course, NVIDIA, the chip provider they're buying from. And so, that's one thing that gets a little tricky. As for the off-book data centers we were talking about earlier, the special purpose vehicles: Meta's auditor flagged its vehicle as a critical audit matter. Now, that does mean the auditor is saying, we're letting this go, we're saying this is okay, but we're flagging it. We're saying there might be a problem here. Any one of these things on its own might be a problem, but when they all start to come together, it raises a red flag for me.
TEDDY DOWNEY: Before we get to solutions, those were some of the biggest risks, specific risks, that I wanted to touch on. You have others in there. Are there any other specific risks that you’re seeing, that you’re monitoring, that you want to flag for listeners here in terms of like how this could unfold, how things could go awry?
ASAD RAMZANALI: The only other big theme that I’ll point out is in the dot-com bubble of the late 1990s, early 2000s, the percentage of the economy that the tech world made up was pretty small. The place we are now is AI makes up a massive percentage of the economy. It’s the kind of thing where you don’t have to look at any specific measure, though there are plenty. But the thing that sticks out to me is the units that we use. We think about AI investment as a percentage of U.S. or global GDP, and we measure it in the trillions.
So, the magnitude of the thing we’re talking about is not some small little thing off in the corner on the West Coast. It’s actually eating up—JPMorgan has this fantastic financial report at the end of last year, looking at what kind of capital is going to be needed. And their assessment is basically every possible capital and financial market that exists is going to be needed for this.
And they do a little waterfall chart where they look at how much cash, how much equity, how much corporate bonds, how much private debt, how much junk bonds, everything. They go through every market. And then they're left with, it was either 1.2 or 1.4 trillion dollars, that they can't quite figure out which market it's going to come from. And that's what they effectively label as TBD, or maybe the government will do it.
TEDDY DOWNEY: Let’s figure out where the money will come from.
ASAD RAMZANALI: That’s right.
TEDDY DOWNEY: The capital markets just don’t have enough for that.
ASAD RAMZANALI: That’s right. And that’s in a five year period. That’s 2026 to 2030. That’s not like over the course of our lives. It’s soon.
TEDDY DOWNEY: Maybe it’s worth mentioning, you have in there some great data points about how much debt these companies have taken on. Was that surprising to you to go through and see—these were businesses that were asset light, right?
ASAD RAMZANALI: Totally.
TEDDY DOWNEY: They were not doing crazy amounts of CapEx investing. They’re not taking out debt. And then all of a sudden, they’re just issuing tremendous amounts of debt, including a Google 100 year bond.
ASAD RAMZANALI: A hundred year bond.
TEDDY DOWNEY: Tell the audience about just how quickly and how much debt these companies are, the Big Tech companies are taking on.
ASAD RAMZANALI: Yeah, I included a couple of charts of debt, both for Meta and for Alphabet, just because they tell a visual story as charts. Where Alphabet, for the size of company it is, it had $14 or $15 billion of long-term debt year over year for many years, right? From like 2021 to 2024, it's about that much. It's a relatively flat chart.
And then in 2025, last year, it goes from 14 to 49, right? So, you're talking about almost a 3.5x increase literally in one year. That doesn't include all the off-book stuff we were talking about, because that's, by its nature, off the books. And since then, they've been doing these 50- and 100-year bonds in Europe. Which, when you stop to think about it, is a crazy investment.
The Facebook story is even more interesting. To your point that, financially, these companies were interesting because they were capital light, because they were software, which had always been super easy to pay for: up until 2021, Facebook had no long-term debt at all. And then it just starts gradually increasing, from 10 to 18 to 29 to $59 billion of long-term debt last year. And that doesn't include its $27 billion facility that's a special purpose vehicle backed by private credit.
So, this is where, when you just back up: we're casually talking about basically $100 billion raised in debt last year by two companies. That excludes Oracle, whose credit rating is one notch above junk status. It excludes Microsoft. It excludes Amazon.
So, all of these folks have their own stories that are at one level hard to put together, right? Because each story is a little bit different. But when you look at each of them, you’re like, wait. Each of them has their own problems that are actually ones that we should all be paying attention to.
TEDDY DOWNEY: Yeah. I was surprised by those numbers. I didn’t realize how much the debt had increased in such a short period of time.
Now let's get into, let's meander towards, the solutions part of the paper. You imagine a couple of different ways this could go based on historical precedent. You point to the dot-com bubble bursting, which was traumatic in many ways but not a financial crisis of catastrophic proportions. And then you envision a worse version.
You don’t get into solutions and problems that would be created by a financial collapse in terms of like what to do about the banking system, but walk us through how things can go bad, how things can go sour, how you thought about that and then how that sets us up to talk about solutions.
ASAD RAMZANALI: Yeah. The worry I have is that some of these special purpose vehicles, or the neoclouds, become, in my mind, the Lehman Brothers. One of them becomes the thing that no one had heard of, that isn't massive on its own, but it has financial struggles and you quickly realize how entangled everything is. That's the concerning scenario. And that's what I tried to think through: what happens? I don't think we're going to see something like NVIDIA or Google having massive insolvency issues.
I think it’s all these other edge cases that it turns out are linked to all of the other companies. And so, when that kind of thing happens is when I get worried about, yeah, actually a lot of these companies have either bad financial credit or they have a bunch of debt on books and off books. Those off book ones are really easy to shed, right?
Because right now, for Meta, that's just a contractual agreement to buy from an entity that will itself face financial pressures if there is a big crash. And so, that's where I tried to lay out: what are the problems we're going to see, and what's the big version of the policy solutions?
Because the failure that, again, I worry about is that we do small little things like after 2008, when we did Dodd-Frank, which felt big to people but was really, really technocratic. One of its biggest things was setting up the Financial Stability Oversight Council, the FSOC. That's an interagency body: the Congress of the United States directed heads of agencies to have a meeting a couple of times a year and write a report. That's not big. Designating new institutions as systemically important, that idea is technocratic, and it didn't actually get at things like stopping the failure of Silicon Valley Bank, right? We didn't even foresee that. It was never on anybody's list of too big to fail. All of those banks have only gotten bigger since, and there's more to deal with there.
So, that's where I start: with the proximate causes of the crash. The circular financing, that's the thing I don't think we've seen anything like before. The opaque debt, we have to get sunlight into all of that. And there are a lot of government subsidies, less at the federal level, but a lot at the state level. This is sometimes a billion or two per state. Those things are distorting the market.
So, I tried to lay out a here’s how you deal with the proximate causes. Then here’s how you deal with things like data centers are going to become stranded assets. Well, we’ve had a need for public digital infrastructure for quite some time. We have an apparatus through our national lab system that the Department of Energy runs. We have an apparatus for thinking about how to use that stuff at the National Science Foundation. So, we should be thoughtful about taking advantage of the assets that might exist that can be useful for the public. So, I try to lay those out and then I go to workers and then I go to kind of more regulatory schemes.
TEDDY DOWNEY: Well, let’s talk through a couple of things because you make so many good points here. First, Dodd-Frank was a failure. I think that’s a conclusion that you make and you’re like we’re not going to make this mistake again. It doesn’t make sense to make this mistake again. And you mentioned both kind of strategically it was a failure, but it was kind of set up to fail because people hadn’t thought about it in advance. They weren’t prepared.
Now, I’m curious, let’s talk about the not being prepared part. Do you really think that it’s because people hadn’t thought about it? Or because people weren’t ready for what I call the Wall Street people coming in and telling Washington how to fix things? Like there’s sort of like a corruption angle there or an outsourcing of your thinking problem and then a just not having any institutional knowledge. I guess they’re part of the same.
ASAD RAMZANALI: They're related. There's no way to know what would have happened if we had been prepared and our policy imagination had been solid beforehand, right? But something I think about a lot is that crisis moments are when we do things, policy-wise, that were previously impossible.
I think about Cambridge Analytica. That’s a crisis moment. The whole country reacts. You can have your own views on if that was actually a failure of whatever you want, but everybody freaked out, right? We all had a collective, oh, this is a problem.
We had no good ideas on the shelf, right? We did a bunch of hearings, but there weren’t bills ready to go on privacy and antitrust. Those things came after. And so, to me, this is another potential crisis moment where we have to be able to connect the dots. And that’s what I’m trying to do here of here’s how those kinds of ideas actually connect.
We have an accompanying op-ed, and my co-author and I had a piece in Time, about how some of the big ideas, like postal banking, how we think about expanding banking access, came a couple of years after 2008. So, the crisis moment had passed. There's a good paper on what we should have done instead of the technocratic solutions we did after 2008, but that's an after-the-fact paper. So, this is an attempt to say: let's at least put the ideas on the table and have a discussion. I may be wrong in exactly how this unfolds, so some of these ideas may be more relevant or less.
TEDDY DOWNEY: Some of the ideas, I mean, you pull different ideas from different communities, which is, I think, rare to see in a Washington paper. You’re looking at a lot of different kinds of regulations and laws to solve these problems that might come from a collapse. I want to talk about the systemic stuff before we get into the worker stuff.
You mentioned a Glass-Steagall for AI. We don’t even have a Glass-Steagall again for banks. I know you leave that out. But tell me about a Glass-Steagall for AI. Why is that necessary? Why does that make sense given how things could play out from a bubble bursting?
ASAD RAMZANALI: Yeah. So, the history of this is interesting. In the '60s, the DOJ is putting antitrust pressure on IBM, which sells its stack, hardware, operating system, and software, all in one. And as part of that back and forth, IBM unbundles, decouples, its hardware and software. That's why we get the rebirth of Silicon Valley from a chips region to a software region. That's why we get the evolution from software to the internet to AI.
And now we’re seeing the rebundle. We’re seeing Google fully vertically integrated from designing their own chips to integrating AI into everything. Meta is trying to do a similar story here. And then you see something different with Microsoft that’s investing across the layers of the stack.
My argument here is that part of what we're seeing is happening at the physical layer, which is data centers, chips, and the cloud computing that controls the data centers. That's the market where all of the money we're worried about, all of the financial engineering we talked about, is happening, right? It's financing to pay for data centers and chips.
That's being driven by exuberance in a market that's connected through investment and co-ownership, which is the AI market that sits on top. So, AI needs data centers and cloud, but because it's often the same company or co-invested companies, we don't have the market discipline of data center or cloud companies saying, yes, this is a risk I want to take; I want this customer that's going to drive X hundreds of billions of dollars of my own capital investment this year. You may not take that risk if you're completely separate companies, but we have no way of knowing what that risk profile looks like. And so, to me, there are a lot of reasons to do structural separation in tech, but the one that becomes clear in this scenario is that we don't have the market discipline you'd want at both layers.
TEDDY DOWNEY: One thing you don't mention, but I think is implied, is that China has these models that are much more efficient. And the way it's set up in the U.S., the hyperscalers have an incentive for one more unit of cloud compute to be used, right? They have an incentive to be inefficient, an incentive for the models to use a ton of compute, because that means you've got to buy more cloud, and they're providing the cloud. And so, they have this weird incentive not to be efficient. That strikes me as problematic as well. It's just another way of looking at how the incentives are not aligned to get the best quality product, particularly when you have these other costs to society: higher energy prices, more pollution, et cetera.
ASAD RAMZANALI: Yeah, there is something instructive about what we saw with DeepSeek and a lot of the Chinese models that are way more efficient. I do think those can only exist as a follower, because of the way they're trained through distillation and other means. That being said, we have yet to see the efficiency gains in things like chips designed for inference, right?
Like we saw a huge push towards GPUs for training. We haven’t completely seen all that play out for what inference looks like. Inference is when AI is used, the model is used, rather than just trained. And so, all of those efficiencies are yet to kick in. They’re still kind of—we’re seeing them begin to happen. And that’s where like a lot of the existing cloud infrastructure might actually be more useful. We just haven’t figured out exactly how to do that efficiently.
So, there are efficiency gains; there's all of that. But at the end of the day, the question is how we, as a society, make decisions about huge capital allocations. For the Manhattan Project, the representatives of the American people made the decision to do it. For the Apollo Project, the representatives of the American people wrote appropriations laws to do it. And voters could vote those people out. That's how we do big, huge capital pushes.
Now, there's a version of the story where we decide as a country that this is what we want, and we make a huge, trillion-dollar industrial move towards AI. We haven't done that. That's not what happened. So, this is where, on the one hand, if it was all private capital, I'd be less worried. But when it's all intertangled with public risk, that's when it's like, wait, what's the legitimacy of that decision? Not at the individual level, but at the aggregate level.
TEDDY DOWNEY: Yeah, one thing that we’ve started to do a little bit here is look at Big Tech procurement as just as important as government procurement, because often they’re spending more money on doing bigger projects. And this is a good example of that as they’re dictating how the market is going to look and what the priorities for society are when they’re spending such a huge chunk of GDP.
Now, you also recommend rules, not just structural separation so they're not vertically integrated and don't have incentive problems, which would entail massive divestitures. You don't explicitly say, hey, we're going to break these companies up because they're too much concentrated economic and political power. You say, actually, this is just not a good way to allocate money; there are too many misaligned incentives, too many problems here.
Do you also think of this in that anti-monopoly tradition as well? I mean, you’re calling for very extensive breakups. And maybe you can walk through a couple of the different ways that you think of breaking them up to not be vertically integrated, how you would break up Google, Apple, Amazon, et cetera, according to this no vertical integration plan.
ASAD RAMZANALI: Yeah, the main idea here is a split between the hardware, the physical stuff—data centers, chips, the energy required to do those, the cloud computing—and then the stuff that happens because you’re using that technology. So, that’s all the software. That’s the AI. That’s the tools that AI is integrated into.
There are a lot of different ways of structurally separating. We've seen the bills that the House Antitrust Subcommittee proposed in the 2019-2020 timeframe. You've seen some of the bills from Senator Warren, who has proposed a digital commission that would have structural separation authorities. Those actually go more granular. And I myself have proposed other ways of cleaving the companies. In this paper, I think of this as the cleanest way to split in some ways, in part because you don't run into the business issues you run into with, say, a browser that's not profitable on its own, or a kind of advertising or a market that isn't fully separable.
The data centers and chips piece is actually pretty easily separable. Now, I'm not saying it's an easy thing or a small thing. It is a big thing to do. It would require real divestitures. But Amazon is the easiest story, right? You have amazon.com, which has the retail presence but also has a lot of investments in AI and is doing its own: it makes its own model available, called Nova, which is more of an agentic shopping kind of tool. And then you have AWS, which owns physical data centers and designs chips, and has an energy subsidiary that trades billions of dollars of wholesale energy.
Separating those is something financial analysts have actually been calling for for years, because they think the premiums of the very different businesses bring each other down, right? So, there are actually financial incentives to do this for some of those companies. People have called for it for Amazon and for Google. In the Microsoft case, you can see how that's a really mature cloud business, the second biggest after Amazon, so you can see how those things work out. It gets a little more complicated when you get into NVIDIA, which has a bunch of investments across the software layer but isn't itself operating there. So, that actually might be easier. But this is the kind of stuff where it's not that you can't do it, and I try to give examples of how it would look; it's that you have to realize it's part of the mix of policy solutions.
TEDDY DOWNEY: Another kind of anti-monopoly policy idea that you draw upon is utility-style regulations, non-discrimination, things like that. Walk us through why you think that is necessary in addition to the breakup side of things.
ASAD RAMZANALI: Yeah. So, a lot of my colleagues at Vanderbilt and elsewhere have rebuilt this tradition of utility-style regulation or networks, platforms, and utilities, as a body of law. And I’m thankful for their foundational work here.
One of the things that's most striking to me involves the Congressional Research Service; every Hill staffer has read their reports, and I always thought they were really high quality, really well done. The first sentence of the CRS report about cloud computing gives an analogy to utilities: you have an activity, like generating electricity or getting water, that moves through a network, in this case the internet, or transmission wires for electricity, and you have a per-usage billing kind of cycle.
These are digital utilities. They are means to another end. The only reason I care about the health of the chips or data center market is because it impacts the innovation on top. That's the thing we really care about. We really want the drug discovery that AI might enable. We really want productivity. The cloud market enables that; on its own, it is not something we care about except as a means to that end. So, that's a feature of utilities, alongside things like high costs of entry and barriers to entry beyond cost.
So, that’s the way I try to think about the AI stack as three utility layers that enable applications that use AI. And that’s where we have a history of how to regulate utilities, some successfully, some of mixed success. And so, that’s where I don’t just say apply everything we know and every possible tool to every layer. It’s a little bit different depending on which market you’re looking at and what we need to do.
TEDDY DOWNEY: Can you give us an example of one of the markets and what kind of rules you would put on it? Just because I think it’s really interesting how you go through that.
ASAD RAMZANALI: Yeah, so take the foundation model market. You have Anthropic, OpenAI, and Google, which make up about 90 percent of the market. You might think of their AI tools as consumer facing: you might use ChatGPT, you might use Claude. But where they make their money, especially Anthropic and Google, is on selling access to the model to other developers through something called an API, an application programming interface. So, third-party developers are building tools that sit atop those models.
In that world, one of the things we're starting to see is: yes, but those model providers are also making their own applications. So, Claude is the Anthropic model, and Anthropic also makes Claude Code, an application that helps people vibe code, or do AI-assisted coding. Last year, they ran into an issue where they kicked the second largest independent provider of AI-assisted coding off their API.
Now, they had their own rationale for doing so. But that's one that looks anti-competitive, because the incentive is to not enable your own competitor when you're also a key supplier. That's the kind of thing where you can apply non-discrimination or neutrality rules. People may have heard of net neutrality. I know that had a lot of political salience, and I'm not trying to get into the partisan back and forth. But even the Republican view that opposed how net neutrality was done held that the law, the Communications Act, didn't allow it. Most people at the time said the principles themselves were good ideas.
So, this is where a non-discrimination principle, which we have in many utility-style markets, makes sense. But I wouldn't do 20 other things. I would do non-discrimination, maybe interoperability. That's what makes sense at that layer.
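[Editor's note: For readers less familiar with what building atop a model through an API looks like, here is a minimal, hypothetical sketch of a third-party AI-assisted coding tool. It uses the Anthropic Python SDK for illustration; the model name is a placeholder and the function is invented for the example. The point is the dependence: if the provider cuts off API access, the product stops working.]

```python
# Hypothetical third-party coding tool built on a foundation model's API.
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def suggest_fix(source_code: str, error_message: str) -> str:
    """Ask the upstream model for a patch. The tool's entire product
    depends on the provider continuing to serve these API calls."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; substitute a current model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Suggest a fix for this code:\n"
                f"{source_code}\n\nError:\n{error_message}"
            ),
        }],
    )
    return response.content[0].text
```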
TEDDY DOWNEY: Yeah, and I imagine that, depending on how much these things get broken up, that also dictates how many kinds of utility rules you'll need.
ASAD RAMZANALI: That’s right.
TEDDY DOWNEY: One more thing that's sort of systemic, and then I want to move into some of the more behavioral ideas. You mention banning surveillance-based business models. How does that connect? Because we were talking before about systemic risk. But you make an interesting connection in the paper: look, when push comes to shove, if these companies need to start making money, this is the tried and true way they make it. And so, that's what you really need to be worried about, that they go down that path. Tell us why banning surveillance business models is so key to this proposal.
ASAD RAMZANALI: Yeah, and this is where the place we should start from is what’s desirable. What’s the world we want? And to me, when we look at the surveillance capitalism business model that has taken off in the tech world, and now the AI world, that really started to get going during the dot-com bubble when there were profit pressures.
There's a good book on the history of surveillance advertising by a scholar named Matthew Crain that connects those dots. It says: here were the financial pressures in the late '90s and early 2000s that led startups to invest in specific types of advertising techniques, which then turned into Google and Meta, Facebook at the time, in particular, doing what we now call surveillance advertising.
We're starting to see parallels to that, with OpenAI saying they're going to enable ads and integrate shopping; sometimes something doesn't work, but they return to the idea. Google and Meta continue to be some of the biggest players here. So, those profits from surveillance advertising are how they're able to reinvest in AI today. And you're going to see them rely on their AI tools as engagement mechanisms to further their advertising revenues.
But the thing that’s starting to take off in a new way is surveillance pricing. And by surveillance pricing, I include wages, wages as a price for labor, where we’re using AI in ways to start to personalize dynamic pricing. The airlines ran into a bit of trouble when they started to say they’re playing with this, and there was a lot of political pressure to do otherwise.
And the reason for that is it's just unfair. It doesn't feel fair for a company to be able to treat its consumers that way. And that, to me, is reason enough for policymakers to want to act, because their voters, the consumers, feel that it's unfair. I don't think we have to get into a secondary argument about efficient markets or the economics, though I think there are reasonable economic arguments against surveillance pricing too. To me, you just start with the world we want to live in, and I don't want to live in a world where I'm constantly wondering whether I'm getting bilked by some kind of fee system I don't understand.
TEDDY DOWNEY: I couldn’t agree more. Very frustrating. I could go spend the next 15 minutes talking about my rental car experience recently.
ASAD RAMZANALI: Oh, yeah.
TEDDY DOWNEY: But I will not force people to endure that. One of your proposals is to just stop the financial engineering in its tracks. Tell us how you do that.
ASAD RAMZANALI: So, I call out three categories of things. First, circular equity financing. Again, this is something I'm not familiar with happening at industry scale anywhere else, so I don't think stopping it would have consequences outside of this specific, narrow place where we're already seeing it have distorting effects. We could just stop it. We could introduce a bill that says you can't make equity investments in your customers. Now, that runs into the thing we talked about earlier, the Glass-Steagall idea, where you want to make sure the layers are separate. It overlaps, but it's not quite the same, and I'm happy to tease those apart if useful.
On the debt side of the house, my thing is more we have to get rid of the opacity. You can’t have off-books debt that nobody knows about. So, I don’t think it’s wise to say you can’t borrow money in any of the markets that exist. But I do think it’s fair to say if you’re getting debt for a data center, that has to be disclosed. All your investors have to know that. All of the investors up and down the debt stack have to know that so that they can make their own decisions.
And then on subsidies, look, there’s a lot of arguments on whether or not we want to invest in AI companies as an industrial policy for national security reasons or for whatever reasons. That is a fair argument and a debate to have in the Congress. It is a fair debate for representatives of the people whose money it is to have that out and say, how much money and what’s it look like?
Where it feels unfair is when it's happening at the state, local, county, every other layer, where it's actually hard to tell how much tax subsidy one data center gets because it's happening at so many different levels. And at the state level, you're already seeing rollbacks. Governor Pritzker in Illinois is calling for rolling back their own data center construction subsidy. The State of Virginia is realizing how much money it's leaving on the table, and its budget is starting to assume it gets rid of the subsidy. So, you're seeing states realize this, but it's a thousand different cuts.
TEDDY DOWNEY: You also mention that policing fraud should be a priority. Now, obviously, I wouldn't expect too much out of the current federal administration, but you could see maybe the states look into this. What kind of fraud would you like to see DOJ, or if not DOJ, maybe some state AGs, go after?
ASAD RAMZANALI: Yeah, one caveat I'd add here too: look, I'm not alleging that there's criminal fraud happening in the market today. That's not what I'm saying. But what I am saying is that in previous financial crises, there has often been some kind of financial fraud accelerating the crisis. We saw that in the dot-com bubble with WorldCom and Enron. We saw hundreds of people go to prison during the savings and loan crisis of the '80s. During COVID, hundreds of people have gone to prison for financial fraud involving the payments they received.
So, this is where, because that happens in so many other contexts, and because almost no one went to prison after 2008 even though people felt like a public fraud was happening, you got a huge political backlash. You saw a real political realignment: the Tea Party on the right, Occupy Wall Street, and so many other movements that came up used that as a rallying cry. So, my thing is we need to keep an eye on what's happening in those markets. And when we see fraud, we should prosecute it.
TEDDY DOWNEY: You also have some practical solutions for the government to step in, in case the bubble bursts and these assets lose a lot of value. You mention FDR and the New Deal-type investments that were made. Walk us through how you thought about the government stepping in to fill a big R&D gap, should that occur.
ASAD RAMZANALI: Yeah, the excitement I have about AI is the promise you often hear about: drug discovery, or weather prediction that's super accurate down to the granularity of a city block and available a month out. That would save lives. Natural disasters kill people, and having that kind of information even a little earlier would save lives. Drug discovery would save lives. Each of us is going to face, or have a relative who faces, a cancer or a rare disease where we wish the research had gone further.
This is the kind of stuff that to me is exciting, and it sometimes gets a hand wave from the AI companies: this is why we need AI. Many of those companies invest in research projects as kind of a side thing, sometimes partnering with academics, sometimes not. That's the kind of stuff that shouldn't stop. We should want that. And the government can help: we have research funding, and we have an apparatus to do it through many different agencies. That's the kind of stuff I want to see continue.
The data centers, if they just sit there as stranded assets that are being unused and the chips are just going bad over time, why not use that for the public needs we would have in national security, in the Department of Energy, and in other places? That’s where my mind goes to, why not put those assets to use if they go into various stages of financial insolvency?
TEDDY DOWNEY: You also spend a good amount of time talking about what can be done for workers if there’s a collapse. Walk us through that. I thought that was super interesting.
ASAD RAMZANALI: Yeah, so for workers, I see three things. One, when we've had financial crises, we expand unemployment insurance and we reduce work requirements for other social safety net programs. We should do that. We did it in 2008. We did it during COVID. We do it often; it's well understood and well researched. We should do those things.
Two, on the job loss itself: in my mind, the job loss could be a cause of the financial crisis, it could be an effect, or it could be a scapegoat. You're already hearing companies talk about firing people because AI might automate their jobs. But no matter how you get to job loss, the worker still experiences the harm. They don't really care about identifying the perfect cause.
So, to me, a lot of the people who are going to lose jobs will be knowledge workers who, like you and I, sit in front of a computer and can do their jobs from many different locations. And we have a long-held need for those kinds of workers in local and state government. Why not match the two? Why not hire the people losing those kinds of jobs? This would be a digital Works Progress Administration, like the one the New Deal included when FDR took office, a program that ended up putting eight million people to work. That's number two.
The third is for the people who keep their jobs. In the examples we've seen where AI gets embedded into jobs, those jobs get tougher and tougher because of surveillance practices enabled by AI. For roughly ten years, we've been talking about truck drivers losing their jobs to self-driving vehicles, which are a form of AI. They haven't really lost their jobs. The technology continues, and maybe that will happen, but in those ten years, they didn't lose their jobs.
What did happen is that they're now required, almost all of their employers and customers require them, to have a surveillance device that tracks how fast they're going and what they're doing in the cab. There are safety benefits to that, but there's also a bunch of surveillance overhang that makes those jobs lower and lower quality. You see truckers say those jobs are less and less desirable.
You see the same happening with customer support, where the amount of surveillance literally lets management say, here's how many minutes you should be using the bathroom per month. That is a really, really low-quality job. That's not a good way to treat people. That's not a good way to manage people. And it's enabled by the kind of surveillance that only makes sense when you have AI. That's what I worry about with these jobs degrading. So, we should watch out for workers not just losing their jobs, but the quality of their jobs getting way worse.
TEDDY DOWNEY: I remember there was a story about how—or maybe it was a Reddit post—about a gig worker getting paid based on a desperation index. The more desperate you were, the worse you would get paid.
ASAD RAMZANALI: Oh, yeah. I remember that too.
TEDDY DOWNEY: Now, we've got a couple of minutes left, and I want to ask you one question related to Iran. We could spend a lot of time on how a financial collapse could precipitate an AI collapse. But we've got a situation in Iran where we're probably going to get more inflation. We're going to get a supply shock. Energy costs are going up. All of this is central to the whole AI play, which is data centers, electricity, heavy spending.
We’re already seeing supply crunches because of a lack of memory chips. We’re already seeing supply issues before the Iran crisis. I can’t imagine how constrained the supply chains will get when we really start to see the real world effects, energy rationing, et cetera. How do you see the Iran risk? Or are there any other risks that you really see that are precipitating a potential AI bubble bursting?
ASAD RAMZANALI: Yeah. So, I lay out one narrative of how this might happen, how we might reach a financial crash because of AI overinvestment. There’s a lot of assumptions baked in, like interest rate fluctuation, like supply chain, like the cost of oil. Any one of those things could shift the markets because they’re so precarious right now in ways that are really problematic and could accelerate the crash that we’re talking about here.
What I don't do in the piece is give a timeline of when this might happen, because I don't know. I don't know what that looks like. Many of these loans that are coming due, that I'm worried about, could easily be recapitalized for another two years. But the fundamental premise, that trillions of dollars of commitments are being backed by tens of billions of dollars of revenue, requires massive changes for those numbers to match up. And that's what I don't foresee happening.
So, all of the things happening in Iran have many different downstream effects. Of course, it's a war, so the biggest and most dire impacts are on the people who are dying and their family members. But we've also already seen attacks on data centers, where data centers have become an artifact of war in a way we haven't seen before.
And all of the supply chain issues you're talking about will slow things down. Not just energy, with the Strait of Hormuz, of course, squeezing oil and natural gas, but all of the things that need that energy for their own supply chains. That cascading effect is what I worry about. And there's a huge lag, because the oil tankers that actually move across the ocean move at the speed of a bicycle.
These are things that when the Strait of Hormuz closes, you feel the effects on the West Coast of America like a week and a half, two weeks later. That lag is where I think we are right now. I worry that should the geopolitics of everything not get to a better place, we’re going to start to see massive impacts on the economy worse than we’re seeing now.
TEDDY DOWNEY: Well, Asad, this was an incredible call. I learned so much. We're going to have this conversation up on Second Request later this week, where everyone can listen again. It comes out every Thursday. And I'm super excited to keep following your work and to meet up in person at some point.
This was a really incredible paper, a really important work. And I can envision it being a big part of the conversation going forward, in Congress for sure, if not at the next Fed bailout, whenever that ends up happening. Asad, thank you so much for doing this.
ASAD RAMZANALI: Thank you for having me, and thanks for the engaging conversation. I appreciate you having read it in detail, I can tell. I really appreciate you taking the time.
TEDDY DOWNEY: I really enjoyed it. It was incredible. It’s an incredible paper. Thank you to everyone for joining the call. This concludes the call today. Bye-bye, everyone.