Apr 16, 2026
On April 7, The Capitol Forum held a conference call with Gideon Nave and Steven Shaw, researchers at the Wharton School of the University of Pennsylvania, to discuss their recent paper, “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.” The full transcript, which has been modified slightly for accuracy, can be found below.
TEDDY DOWNEY: And welcome. I’m Teddy Downey, Executive Editor here at The Capitol Forum. Today, I am very pleased to be joined by Gideon Nave, a faculty member at the Wharton School of the University of Pennsylvania, and Steven Shaw, a researcher at Wharton. They recently co-authored the paper, “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.”
Gideon, Steven, thank you so much for doing this today.
GIDEON NAVE: Hi, Teddy.
STEVEN SHAW: Good to be here.
TEDDY DOWNEY: So, I love this title. The title is a riff on the book “Thinking, Fast and Slow” by Daniel Kahneman, who is extremely influential. This book is extremely influential in the world of—I mean, from my perspective—research, judgment, predictions. In your paper, you talk about all the ways it’s been influential and how it’s held up over time. Did you go into this study knowing that AI was changing how people think and that an update to that framework was needed? Or how did you end up coming up with this study?
GIDEON NAVE: I’d say we live in this world, we use AI ourselves, and we are teaching, so we see how our students behave. We also consume the news and we hear what’s going on. So, we did have a hypothesis that the presence of AI is not just a mere addition to the world, with human cognition continuing to operate as it used to, just with this additional tool. You could say that is the case, maybe, for a calculator or a GPS. But there is something fundamental here that, in our view, even before running these studies, actually changes the functioning of the human mind and its engagement in different thinking processes.
So, that’s where we started. And then there was some time spent thinking about how we should characterize this and how we should test it experimentally. But to us, it was overall a very clear lived experience that the presence of AI is not like other tools we’ve seen before.
TEDDY DOWNEY: Was there something, Steven, Gideon, that you saw in your students that alarmed you? I mean, you sort of dance around that a little bit. But, I mean, you’re teachers. You have peers. You have students. I go back to my high school every year to talk to the headmaster about how they’re adjusting and changing curriculum and things like that. I noticed it was really profoundly affecting how the teachers were thinking about teaching in ways that I was actually appalled by in some respects. But what were some of the indicators to you from the students and other teachers that concerned you?
STEVEN SHAW: I think for me, the most obvious one was just in email communications with students. They’ve changed so dramatically. Basically, all of the nuance and uniqueness and, let’s say, errors that you would get in student emails, the unique personalities that came through in their writing styles, are kind of washed away now.
And they’re all “hi, Professor Shaw,” and then a long chain of reasoning to explain a very simple request that they have for me. And so, it sometimes feels like I’m speaking to a chatbot, even though I know the person and it’s a student of mine. That was part of it.
You mentioned the title, right? Part of the title was to say that AI is here already, and we need to really accept that. Dual process, thinking fast and slow, has been influential for 25 years, right? And so that part of the title was to say, we are now in this age of AI. It is here now, and we need to accept its ubiquity and influence.
There is also a little tidbit, a little secret, in the title. If you look at the original book title, it’s “Thinking, Fast and Slow”, with a comma, right? And in our paper, we intentionally put in the em dash. Many people these days are taking em dashes out of their emails and their writing because they don’t want it to be seen as coming from AI. And we said, well, this is a consequence of the fact that AI is here. It’s so ubiquitous that we all know the em dash, and its frequency has increased so dramatically in writing across the world.
TEDDY DOWNEY: Very sad. I was a big em dash user before AI.
STEVEN SHAW: Use it, use it.
GIDEON NAVE: Hear, hear.
TEDDY DOWNEY: And now I’m very disappointed with what they’ve done.
STEVEN SHAW: Keep using it.
TEDDY DOWNEY: It’s like I named my daughter Siri or something like that. Or Alexa.
GIDEON NAVE: So, I would add something to this. I think that with the students, we see input and output, and we see the output is different. But there is a black box here, which is what’s going on in the student’s mind. What’s the process? How are things—like attention, memory, reasoning—changed? And for this, we need theory and we need laboratory studies, because we want to understand what’s going on inside.
TEDDY DOWNEY: Can you quickly—we danced around this a little bit—for people who aren’t quite as familiar—and I haven’t read “Thinking, Fast and Slow” in such a long time that I barely remember it—can you walk us through that framework quickly and why it has held up? And then we can talk more about the study and what you found.
GIDEON NAVE: Yeah, so based on this framework, there are two main types of reasoning or thinking processes. One of them is more reflexive, automatic, effortless. If I ask you how much is two plus two, you’re going to tell me four, and it is effortless. Why? Because you’ve done this so many times in the past that you have a habit of it. You see an angry face, you immediately recognize it as an angry face, maybe because you are wired to do so evolutionarily, or maybe because you’ve seen enough angry faces in your life. These are things that do not require effort, and we do them quite well.
That’s what Kahneman called system one.
System two processes are the ones that take more effort. You cannot do more than one of them at a time, and if you can avoid doing them, you would rather avoid them. If I ask you how much is 1,347 multiplied by 12, you’re not going to do it immediately like two plus two. You’re going to need to engage in some analytical thinking to sort it out and solve it. And probably, if I were to tell you I’m going to give you $100 for doing it, you’re actually going to do it. But if I will not give you $100, maybe you will not do it, unless you really like this kind of stuff.
And some people have the kind of tendency that psychologists call a need for cognition, where you really enjoy thinking.
So, this is the distinction. And the question is, when will system two engage? There are, of course, factors that relate to the person. But a lot of it is not about capacity, because many times the system two processes are easy to do. It’s just arithmetic, as I just showed you. System two will engage when there is maybe a strong enough incentive, or when there is some conflict or suspicion that what system one provided us is not enough. Or we can think of it as maybe a lack of confidence.
And now, put into the equation what we call system three, which is AI. On one hand, it can be, or at least seem, analytical or data-driven. On the other hand, with the emergence of LLMs, it’s also effortless in many ways. It’s frictionless. It’s not as difficult to put something into an LLM as it is to think through something. So we have something new here that has the characteristics of both system one and system two together, and that shifts the balance between the systems. We were mostly intrigued by how this influences what’s going on in the human mind. So, we defined AI as a kind of system three that is there, and we have a diagram and a theory that look at how the balance between the two other systems changes from the mere presence of this additional system three.
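One minimal way to picture that shifting balance is in code, under loudly illustrative assumptions: the effort costs and thresholds below are invented for this sketch, not taken from the paper’s model or diagram.

```python
def choose_system(stakes, doubt, ai_available,
                  effort_cost_s2=0.6, effort_cost_s3=0.1):
    """Toy effort-economics reading of the account above; every number
    here is an assumption for illustration, not a fitted parameter."""
    # A frictionless system three undercuts deliberation once it is present.
    if ai_available and effort_cost_s3 < min(effort_cost_s2, stakes + doubt):
        return "system 3: ask the AI"
    # System two engages when incentive plus suspicion outweighs its cost.
    if stakes + doubt > effort_cost_s2:
        return "system 2: deliberate"
    # Otherwise the effortless default answers.
    return "system 1: go with intuition"

# The same problem, before and after AI enters the equation:
print(choose_system(stakes=0.5, doubt=0.3, ai_available=False))  # system 2
print(choose_system(stakes=0.5, doubt=0.3, ai_available=True))   # system 3
```

The point of the sketch is only the comparative statics: nothing about the person or the problem changes between the two calls, yet the mere availability of a cheap system three flips the outcome.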
TEDDY DOWNEY: There’s sort of a spectrum here with using AI, where on one end you’re just totally outsourcing—you’re intentionally outsourcing the whole thing to the AI. And then there are other people who are like, oh, well, I’m actually going to check it. I’m not going to defer to it. I’m going to use it as a tool, like a calculator, and then I’m going to check it and make sure that what it’s doing is right. I mean, your study has some fascinating conclusions about that. Tell me how you set up the study to deal with those different ways people use AI, and to make sure you were getting results that gave you insight into what was really going on.
STEVEN SHAW: I’ll take this one. So, as Gideon said, system one and system two are very well defined. What we’re saying is that system three, artificial cognition, is now important to the human reasoning process. And so, we have our own definition and foundations for what system three is and what classifies as that.
And basically, what we did in these experiments is we took a sort of canonical, classic test of logic and reasoning that’s been used in the dual-process world in a lot of behavioral economics and psychology experiments. It’s called the Cognitive Reflection Test, or the CRT. It’s a set of very simple logic and reasoning problems that are well defined and well calibrated to differentiate between intuitive, system one thinking and deliberative, system two thinking, right?
So, when you read these problems, there’s an answer that comes to mind immediately, right? You have an intuition about what the answer is. And some people will just go ahead and give that intuitive answer, which is incorrect. But you can verify the answer pretty easily. Like I said, they’re simple problems.
And if you take a moment to check your work and to verify, you’ll see that intuition is incorrect. And so, then you need to deliberate a little bit. You need to do some trial and error. You need to do some simple math or some simple thinking, that system two thinking to come up with the correct answer, right?
And so, in classic studies, we’ll have participants do maybe seven of these logic and reasoning problems, and they get about 50 percent of them right. But generally they’re giving either the intuitive, incorrect answer or the deliberative, correct answer.
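To make the intuitive/deliberative split concrete, consider the best-known CRT item, Frederick’s bat-and-ball problem (used here as an illustration; the call doesn’t quote the paper’s exact items): a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. How much does the ball cost? The intuitive answer, 10 cents, fails a one-line check, since $0.10 + $1.10 = $1.20. Writing the ball’s price as $b$:

$$ b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05 $$

So the ball costs 5 cents, which is exactly the small act of verification Shaw describes.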
And so, we said, well, let’s do that experiment. That becomes our control condition. And then we’ll just give the participants optional access to AI with the same questions. We’ll just say, hey, you have ChatGPT access here, and you can use it however you want to. You don’t have to use it. But if you want to use it, go ahead, right? The idea is that people will use AI because it’s there, and we wanted to see how they use it and how it affects their performance on the task.
What the participants didn’t know—and what was, I think, very clever in our design—is that behind the scenes, we had manipulated whether the chatbot, the AI, would give them correct or incorrect information.
So, actually, they could talk to the chatbot, to the AI, about anything they wanted, right? They could talk to it about Winston Churchill. Or they could talk to it about Pokémon cards. But if they consulted it about the one question that was presented to them, the logic and reasoning problem in front of them, it would randomly give them either the correct answer or the incorrect answer. They still only had to do that little bit of verification to figure out whether the AI was giving them correct or incorrect information. But that’s not what we ended up seeing, right?
What we ended up seeing is that people would consult the AI. And once they went to the chatbot and decided I’m going to ask it about this question, they basically very often took its answer and adopted it as their own. And that’s what we call cognitive surrender because they’re basically outsourcing the whole logic and reasoning process to AI. And then their accuracy becomes contingent on the accuracy of system three, the AI in the experiments.
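A toy simulation makes the mechanics of that contingency easy to see. This is not the authors’ analysis or data; the consultation, surrender, and baseline rates below are assumed purely for illustration.

```python
import random

def simulate_crt(n=10_000, p_consult=0.7, p_surrender=0.9,
                 p_correct_alone=0.5, ai_correct=True):
    """Sketch of the paradigm described above; every parameter value is
    an illustrative assumption, not an estimate from the paper."""
    correct = 0
    for _ in range(n):
        if random.random() < p_consult and random.random() < p_surrender:
            correct += ai_correct  # surrender: adopt the AI's manipulated answer
        else:
            correct += random.random() < p_correct_alone  # brain-only reasoning
    return correct / n

print(f"AI right: {simulate_crt(ai_correct=True):.2f}")   # ~0.82, above baseline
print(f"AI wrong: {simulate_crt(ai_correct=False):.2f}")  # ~0.19, below baseline
```

With these made-up numbers, accuracy lands near 0.82 when the AI is right and near 0.19 when it is wrong, against a 0.50 brain-only baseline; the qualitative pattern, not the specific values, is what the experiments report.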
TEDDY DOWNEY: And you did additional experiments to sort of address that or affect that cognitive surrender?
STEVEN SHAW: Well, we tried. We tried, yeah. So, we tried to manipulate the speed-accuracy trade-offs. We put in different situational moderators, which, again, have been used in the classic dual-process studies. So, we put time pressure on participants. And what usually happens under time pressure is that you rely more on your intuitions, you rely more on system one, because you don’t have enough time to deliberate, to think critically about things, right? So, under time pressure, what you see is that performance on the CRT, on these reasoning tests, goes down. Where the brain-only, or control, condition would be at 50 percent, it goes down some because of time pressure.
And then, conversely, we thought, well, maybe we can try to make participants better at this task. And so, we give them incentives. We say, every time you get a correct answer, we’re going to pay you more money, so you really should try to get these correct. And what happens is that you get more system two deliberation; people put in more effort and they get better at the task, right?
And so, we replicated those results from the dual-process world. But we also said, well, this is the age of AI, and so you have access to ChatGPT here. And basically, when they had access to ChatGPT, we still saw this cognitive surrender effect. So, we saw the replication of time pressure reducing accuracy and incentives increasing accuracy when there was no AI. But when we had AI, we saw basically—I mean, there is some nuance, more nuance, to it, and we can talk about that if you want to. But basically, we still saw people going to AI, consulting it. And once they went to consult it, they were adopting its answers at a very high frequency, which is our cognitive surrender effect.
So, basically, these situational moderators shifted the baseline around. But the presence of AI really dictated performance in both cases. And it’s quite a large effect.
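Extending the same illustrative sketch, the moderators can be modeled as shifts in the brain-only baseline, reusing simulate_crt from the block above (again, the values are assumptions rather than the paper’s estimates):

```python
# Time pressure lowers the brain-only baseline and incentives raise it
# (values assumed), yet accuracy still tracks the AI whenever it is consulted.
for label, baseline in [("time pressure", 0.35),
                        ("control", 0.50),
                        ("incentives", 0.65)]:
    right = simulate_crt(p_correct_alone=baseline, ai_correct=True)
    wrong = simulate_crt(p_correct_alone=baseline, ai_correct=False)
    print(f"{label:13s}  AI right: {right:.2f}  AI wrong: {wrong:.2f}")
```

The printed rows move a little with the baseline, but the right/wrong gap driven by the AI dwarfs those shifts, which is the pattern described above.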
GIDEON NAVE: Yeah, before we go back to the moving around, I want to stress the two main findings. The first one is about performance. If the AI is right, performance goes above baseline, and you would expect that no matter what people do, because you have another tool here that gives you good advice. But if the AI is wrong, performance goes below baseline, okay?
But the key finding here is that if the AI is wrong—and we know that AI is wrong many times in life, and many times we have no real way to check it—people’s accuracy follows the AI, meaning that you end up worse off than you would have been without the AI.
And the second signature of it, I would say, is increased confidence. So, when the AI is wrong, you are not only more likely to be wrong, you’re confidently wrong. You have really outsourced your confidence and, at the end of the day, your behavior to an erroneous input. So, I just wanted to stress that power couple: worse off than without AI, and more confident.
TEDDY DOWNEY: So, I’m reading the study. I get to the end. Obviously, there are math parts—well, not obviously, but there were math sections that I did not quite fully appreciate. But I get to the end of the paper and my reaction was, so people are dumber and less human when they’re using AI. And I want to get your reaction to that because that’s kind of what I’m seeing.
I mean, I’m an employer. I go back to the school. I watch what people are doing at school. And when you’re outsourcing your judgment, when you’re outsourcing your thinking, when you are cognitively surrendering, effectively you are becoming dumber. Because not only are you outsourcing it, but you’re actually kind of confusing yourself. You’re affecting your own thinking. You get into this in the paper a little bit. It’s kind of infecting your thinking, infecting your decision-making ability.
Anyway, I want to get your reaction. In the sense that you’re dumber because you can’t think on your own. You’re weakening your skills of thinking on your own. You’re performing worse. And you actually are, to Steven’s point, becoming less human because you don’t imbue all your stuff with your person, right? You outsource kind of the essence of you as well.
GIDEON NAVE: I want to be careful about “dumber.” It’s a generic term. I would say if the AI is right—and many times it is—it makes sense to surrender to it. And then you’re going to be quicker. You’re going to get more work done. And you’re going to get lots of great outputs at a very, very good speed. I think what you’re talking about is that you’re going to lose some skill over time, and then you’re going to be basically delegating your autonomy to the machine.
Now we have seen this in other domains of life. We are not doing laundry by hand anymore. Maybe some of us don’t cook anymore. We don’t move from place A to place B by navigating with the power of our minds. So, maybe we are losing all sorts of skills. And I think what you refer to when you say dumber is that we’re going to lose some capacities, or weaken some capacities, that we used to have. And I think that’s a legitimate concern.
It’s not going to happen overnight. It will probably happen slowly, like the boiling frog. But it’s very possible that if we are used to just letting everything pass through our brains with very minimal processing, the outcome, in the end, is going to be a reduction of our capacity to judge and to think deeply. And that’s a major concern.
STEVEN SHAW: Yeah, and one of the points you mentioned there was being an employer. If you’re an employer and you’re thinking about implementing AI, or your employees are using AI for all sorts of tasks, I mean, that might be a good thing, because you can increase efficiency and productivity and accuracy, as long as AI is good at the tasks being done. And it is good at many, many different types of tasks across domains. I think the question is, what are the consequences?
TEDDY DOWNEY: Can we stay on that for a second? Because I actually kind of push back on that. The things it’s good at: it’s kind of like a calculator. A calculator is good at stuff. But the things it’s good at are, most of the time, not things you should be spending a lot of brainpower on anyway. That’s kind of my thing. So, okay, some things it’s good at. I get it. But what are the things that you think it’s good at?
Because my worldview is such a limited frame. I mean, I’m in a knowledge industry where my people need to (a) have good cognitive skills and (b) use a lot of judgment all the time: writing articles, deciding who to interview, all sorts of stuff. So, AI is fundamentally not useful for the vast majority of that type of stuff.
Now, certain research maybe, certain factual things, listing articles to read, things like that, okay, maybe it’s faster at that. But that is just a limited scope when it comes to actual thinking, to using your brain in a way where humans are being challenged. I find it not very useful, not very accurate, and subject to really strange results.
So, when you say it’s useful for certain things, I’m curious just to like what kind of—I mean, you mentioned in the paper certain things where it’s highly accurate. But I was curious, like, what are those types of things?
STEVEN SHAW: Like coding. I mean, software engineering roles. And I think the broadest category is structured tasks, which I think you kind of alluded to there as well. But the thing is, if you rely on AI for structured tasks and you build a sense of trust, thinking, oh, this is very good across a lot of domains, or you see high accuracy, or you get some gains in other domains, then maybe you’ll start relying on it in areas like the ones you were mentioning, where critical thinking and novel ideas are highly valuable. And then, as a result, you can’t come up with those novel ideas or do that critical thinking yourself, right?
So, people often think, or they start off with, okay, I’m doing cognitive offloading. In the paper, we make a specific distinction between offloading and surrender, right? And we say offloading is strategic delegation.
You’re taking specific tasks and deferring to AI, right? And this is generally seen as a beneficial thing.
But the thing is, if you get good at strategic delegation, right? You’re having Claude Code write your code or create your website or whatever, right? If it’s quite good and you get feedback that it’s doing a good job, maybe then you move into other domains in life relying on system three, and you sort of slip slowly into cognitive surrender, right? You’re now relying on it for all sorts of things, in domains where there are hallucinations, in domains where we do value critical thinking and we do value novel ideas. And then you’re following it even though the accuracy rate is very different when you’re asking it to code your website versus asking it about mental health or something like this, right? Or writing a book, right?
And so, it’s easy to do that transition. And as I think Gideon mentioned before, you might not even realize that you’re sort of moving towards that and having these thinking tasks being done by an external system. And then you’re losing agency and losing autonomy and de-skilling yourself as well at the same time. So, these are some of the implications that I think we discussed at the end of the paper that you were alluding to.
GIDEON NAVE: Yeah, Teddy, I want to push back a bit myself. I think the discussion here is not about what AI can or cannot do better. I think we are in the very early stages of this technology. I myself teach, for example, a class on creativity. And once you’re using a technology that everybody else uses, then by definition you’re no longer going to be writing creatively if you just rely on AI. Creativity requires novelty.
But we have seen in the past that AI can stumble into creative solutions in ways that humans cannot. Take the story of AI beating humans in Go, for example, a game that requires intuitive thinking and creativity, and doing it in a way that radically changed how humans play a game they have been playing for 2,000-plus years. It tells us that the idea that AI can explore a wide solution space and maybe stumble into something creative is not a crazy idea. I think what we are mostly troubled by is how the presence of AI is influencing our minds and how we’re going to evolve in response to that.
Obviously, we don’t know yet where AI is going to get to. I think the capacity of AI to do stuff is less relevant for us overall, because there are some domains, such as creativity, where, by definition, once a technology is available to everyone, you are no longer going to be creative if you do the same thing that everybody else does. And the thing that you do uniquely is something that comes from your mind. It’s not coming from logic, and it doesn’t come from just data analysis. There is this kind of human thing, which is what you are referring to, yeah.
TEDDY DOWNEY: You also have in the paper—I want to get to the implications and recommendations for policy, because we have a lot of policymakers listening in. But first I want to push back on another thing I saw in the paper, which is the section where you lay out the pros and cons of the different kinds of thinking. And for AI, you say that, when well-trained, it can deliver neutral, nonpartisan, and emotion-free outputs that are highly accurate on structured tasks.
We already talked about this a little bit, but I want to push back on it. Because did you look at the—when it’s well-trained, how do you know when it’s well-trained? Did you look at the training data? Because, to me, this idea that it’s neutral, nonpartisan, and emotion-free is often informed by stuff in the training data that you can’t see. And it’s probably impossible to actually get it to be neutral, nonpartisan, and emotion-free the further you get from a super structured task, right? So, anything worth doing, in some sense, is difficult like that. You can’t know.
And so, I just wanted to push back: how did you figure out when it’s well-trained? Or is that more in an ideal sense, that if it were well-trained, it would provide this? Because that’s kind of what I’m getting at.
GIDEON NAVE: Yeah, we cannot. But I would say that our minds are also biased, and they have all these other issues. AI has the capacity to maybe debias some of these things. Just imagine you push it and you ask it: hey, this is what I think; come up with some arguments against it. I want to know what somebody who thinks the opposite of me would say. That could also be a use of AI, and you can really ask it to be brutally honest and try to debunk your way of thinking.
Which is, by the way, a very important part of, some would say, any strategic decision. I mean, once you are about to adopt some policy and implement something very crucial, I would say the most important thing to do is really to find out what the reasons not to do it are, and consider them. And AI could help you with that, even if it’s biased, right?
So, that’s overall what we meant. But, of course, you’re going to have to use it right, because we do see a lot of research that has come out, even in the past months, showing that AI has a tendency to reinforce you and to tell you what you want to hear, because it’s good for engagement. It’s also maybe what’s in the training set. And then we risk ending up in an echo chamber of one person and their AI.
And not only in politics; it can be in all domains of life, right? I’m angry at somebody and we’re having some dispute. Both of us are going to go to our AI now, the AI will reinforce each of us, and we’re going to end up with a greater dispute. At the same time, we can use it differently: hey, AI, why is my friend angry at me? What am I not doing right here? And that will be a different way of using it, right? So, a lot of it comes down to how you’re going to use the AI.
STEVEN SHAW: I’ll add to that too. I think some of the characteristics you mentioned there are part of our definition of system three. And one important distinction is that when we talk about system three from a theoretical point of view, we’re not talking specifically about Claude or ChatGPT or Gemini. Those are platforms that exist right now, and they have their own biases, political and otherwise. We don’t have control over how they build their training sets, right?
So, in the experiments, we use ChatGPT because that’s what’s available right now. But the idea of system three artificial cognition, this LLM technology, the idea and the theory behind it is that it is entirely possible to make it emotion-free and without political bias.
So, that’s getting into policy and implementation and the capitalistic nature of society, and how we actually use the technologies right now. But one could very easily argue that, for whatever reasons, OpenAI might go away or Anthropic might go away. From an academic, theoretical perspective, though, these LLM technologies will not go away. And that’s what we mean when we’re talking about emotion-free and without political bias.
If you’re able to train these technologies in the right way, you can definitely have that type of system three implementation.
TEDDY DOWNEY: Yeah, I want to get to the recommendations, the policy ideas, that you draw from this. Before we get there, though, I have one last thing. Couldn’t a human do what you just said? Like, if you wanted to check your bias, couldn’t you go to another human and do that too? I guess it’s the value-add over a human. Because when you talk to another human, I’m not so sure—are the similar cognitive losses, the surrender, happening? Or is that not happening? Obviously, that’s not necessarily in the study, but I’m curious. You guys are experts on this.
When my employees are like, oh, just ChatGPT it. I was like, well, you’re a reporter. You can literally call the world’s experts. I got you guys to do this, right? Like you guys are doing this. Obviously, there’s value to you as well. But I’m having probably one of the most rewarding intellectual conversations in my entire life with two super experts. Isn’t that better? And obviously, you guys aren’t available on call 24 hours a day, seven days a week to everyone in the entire world. But at some level, isn’t it better to just talk to a human so you don’t lose your ability to think and reason?
GIDEON NAVE: I think that a fundamental part of dual-process theory is the idea of cognitive effort and how easy it is to use system one as opposed to system two. I think the uniqueness of system three is that it’s very easy to use. It’s available 24/7. It never gets tired. Find me a human like that, and great, I will go for the human. But unfortunately, we don’t have one. And that’s what makes system three special.
STEVEN SHAW: And even if you defer to an expert, like a doctor or a lawyer, they have specific expertise, right? With system three, we can outsource thinking across a wide range of domains, and the friction involved is orders of magnitude less. It’s very easy at any moment. You can just pull out your phone or your computer and you have an expert available in any domain, with decently high accuracy. Obviously, there are errors, biases, hallucinations. But we also need to start talking about those as their own separate set of biases, and about which domains it might be good to defer in and which domains we might not want to.
Another point on that is that we are observing cognitive surrender in the lab right now. And I think part of the reason this idea has gained a lot of traction is that we see it in ourselves and in society as well, right? Right now there is still physical friction between our minds and system three: we interact through a phone or a computer, using text or voice, right? That friction between the human mind and AI will decrease over time as we develop new technologies. And so the opportunity to engage in cognitive surrender will become even more salient, even easier, with even less friction. Like if we have AI glasses or things like this, or other technologies moving past computers, laptops, phones.
TEDDY DOWNEY: So, walk me through your policy recommendations or what you think policymakers should think about. Because when you say stuff like that, my initial reaction is like horror, right? Like I don’t want Mark Zuckerberg affecting the minds of all the people in the country more readily than he is already doing, et cetera. So, from what you just said, besides sheer horror at the implications, what do you think for policymakers who can create a world that is not full of horror and dread and lack of humanity? Like what should they be thinking about?
GIDEON NAVE: I will, to some degree, maybe disappoint you. But I think that the first two steps are, first of all, recognizing that this is a thing, that people are very likely to do it, and that we already see it in the real world. The equivalent of driving your car into a lake because the GPS said so, we don’t see much of that. But when it comes to small things, like putting references that don’t really exist into academic papers or policy reports, these things already happen. So, we should be aware of that.
And once we recognize that this is a thing, our experiment provides a paradigm that you can take to the lab and very easily use to experiment and see what you can do to play around with this. I think that good policy is not a blanket policy. It’s policy that is, first of all, driven by data and driven by actual knowledge about what the counterfactual is and how any intervention is going to move things around. There are always unintended consequences in policy, and I think experimenting would be the number one thing to do.
Another important thing is that policy here should be what’s called asymmetrically paternalistic. If somebody wants to surrender, really has no problem with this, and is going to take the responsibility and all of that, or if somebody is using AI the right way, meaning just offloading tasks that should be offloaded, we don’t want to interfere with that, right? We want to nudge the people who aren’t behaving well, without interfering with the people who are. These are deep questions that we don’t have the answers to yet, but we do have the tools to start answering them, and this research presents such tools.
STEVEN SHAW: Yeah, on the first point that Gideon made: I recognize that right now there is a stigma around using AI in a lot of contexts, and we need to first remove that stigma, right? We talked earlier in the call today about the em dash, and we all agree: sometimes you take em dashes out of your writing only because you’re worried that people will think you used AI for that task. But if we accept that you’re using AI for that task, it’s okay to use the em dash. And that’s why I was saying, use it, use it. We need to reduce the stigma, and it should be okay in any setting. If you’re going to use AI, then let’s talk about it so that you can use it properly.
And in terms of policy—I think a lot about education and the future of education, right? We talked about de-skilling. If you have workers who have a skill and they’re using AI for their tasks and outsourcing their thinking, they de-skill themselves and may ultimately be automated or replaced by system three. But in an educational setting, if we see students engaging in cognitive surrender, they may never develop those skills in the first place. And that is a much larger issue for the future if we want to preserve these cognitive capabilities and critical thinking.
So, in terms of policies, again, like Gideon said, we don’t have the exact answer right now, but we’re working on it. And we hope that policymakers will listen to behavioral science. From a general point of view, though, in an education setting, if we reduce the stigma, I think there are two ways you can go about it, right? One is that you have AI in classrooms, but it’s part of the teaching. It’s part of the tools. It’s accepted. And you work on prompts together and you work on tasks with AI, so that people get good at it and understand what it’s good at and what it’s not, when it can be useful and when it’s not.
And the other is that there should be some tasks and some elements of education where that technology is completely removed from the classroom, and performance is tested or learning is done in person without those technologies available. So, I think probably some mix and blend of those two is where things are headed.
TEDDY DOWNEY: Are you two assuming that AI will improve and get more neutral and less biased and less dystopian? I mean, the stigma is probably there for a reason, and I’m not sure it’s a bad reason. Your study basically suggests that you can’t escape the surrender. I mean, I read your paper and my solution was, what? We’re just going to hire people who read books. We’re going to talk about the books. We’re going to talk about reading real stuff. We’re going to write about reading real stuff. We’re going to keep our minds sharp. Everyone else’s brains are going to atrophy. And we’re going to have a competitive advantage.
STEVEN SHAW: That’s what I think. How are you going to find those people?
TEDDY DOWNEY: Well, that’s a good—yeah. Well, here’s another thing. If I go to the school, I’m like, what should you do? People should read books. They should be talking about the books in person, live. Don’t go home and use AI. Don’t use AI in the classroom. You know what? If you want to do that, go ahead. But when we’re in the classroom, I’ve got you for nine hours a day at school, however many hours, we’re reading books. We’re talking about books. We’re playing games. We’re doing stuff, real human interaction.
Tell me how that’s not going to be a superior education to what you guys are talking about. I read your paper, and I see cognitive surrender as inevitable. It’s inevitable, effectively, according to your paper, in some respects. There’s no way around it. You guys tried to find a way around it and couldn’t. I mean, I get it.
It’s like, okay, you have these theories. AI could theoretically be neutral. AI could theoretically remove bias. AI could theoretically not radicalize people and make them dumber and all this other stuff. I’m not saying it can’t. I’m just saying we would certainly need radical policies to change the type of AI platforms being created so that we could have the future you guys envision, where it can be useful for some things, not lead to cognitive decline, et cetera. So, obviously, I’m saying a lot there. I would love to get your reaction.
And then also, from a marketing standpoint, do you think there should be rules on how these platforms market to people? Because you guys are educating people. You guys are making it easy for people to see cognitive surrender, to see how their brains are being affected. But I can assure you that the companies, multi-trillion-dollar companies, are going to be doing the exact opposite. They’re going to be trying to make it seem mystical, powerful, something you should defer to, right?
And as much as I would love to get this podcast in front of every single person in the country so they’re informed, I think that marketing is going to overwhelm it. So, anyway, it’s a very pessimistic way to look at what’s ahead, in my mind. But I would love to get a reaction. And by the way, I’m being hyperbolic to sort of get you guys to respond.
GIDEON NAVE: There are two forces here that operate against each other, right? One of them says to use AI, right? Ignoring AI is going to be a mistake, because you’re going to be less efficient. In many ways, you see companies even pushing their people: use AI, use AI, use AI, right? So, that’s not going to be the right solution either. On the other hand, to completely surrender to AI is also not the right solution.
So, with these two forces operating against each other, I think it’s a challenge to find the sweet spot. And education, you can argue, is one way of doing it. Like ensuring, for a start, that children are tested without AI, right? If you have a class where you get a grade only on your essay, what are you really testing? Are you testing the AI or the student? But if the students’ capacity to think is tested in the exam, they still have an incentive to use AI in a way that enhances their own thinking, and not just mimic AI output in the essay they create.
One challenge here, though, is that education is not the full story, right? Children are not in the classroom 24/7. There is recess. There is time with your friends. There is time with your family. It’s not going to be enough if, in your real life, you’re not exercising thinking.
So, yeah, if we don’t think about it carefully, we may run into a situation similar to the way our physical capacities can deteriorate. If you don’t need to walk anywhere, your brain naturally goes for the most efficient way, which is not to walk, not to use energy, to sit on the couch and order food. And then slowly you develop habits that are not so good.
And I think we are now risking doing the same with thinking. Maybe there will be an equivalent of gyms for thinking; we already have things like book clubs and so on. But maybe people will have to come up with the motivation for that themselves. I personally feel it would not be nice to feel like I don’t think anymore, but maybe some people don’t care much. So, at the end of the day, a lot of it is going to be up to us.
STEVEN SHAW: Yeah, and if you go on CNN’s website, for example, they have a meter at the top of what’s driving economic decision-making right now, and it’s in the red: fear and anxiety are driving decisions. If you look at the Doomsday Clock, from the atomic scientists, it’s closer to midnight than it’s ever been. So, there’s a lot of doomerism out there right now.
TEDDY DOWNEY: It’s easy when the President says he’s going to end civilization at 8 p.m. Eastern. I take your point.
STEVEN SHAW: So, it’s easy to think about how, with this de-skilling and cognitive surrender, losses loom large. There’s a lot of salience to the negative aspects. But I tend to be an optimist, and a realist, I think. And our brains are plastic, right? We have this exceptional plasticity, and humans are adaptive, right?
And so, maybe one way to look at it is that the future of human evolution is the integration of human and AI. I don’t have the answer; I don’t know how we’re going to adapt. But we’ve survived this long and done pretty well. And so, I think there are creative ways we will be able to adapt and think differently and use these tools. An easy one to think about is this: if we can optimize, and surrender to AI in certain contexts to save time, maybe that gives us more time for the things that involve the creative muscles in our lives, like more time to make music and to write and to take in nature, things like this.
TEDDY DOWNEY: I would love to be optimistic as well. I mean, one way to be optimistic is that because it’s such a radical change, it can allow for radical rethinking of how you do the teaching and the learning and the business. And so you’ll have a lot more experimentation, and then maybe some people will figure it out.
I tend to think that the people who are controlling the algorithm and the AI are going to be controlling society. And those people will probably be reading books and thinking and not outsourcing to AI.
But I have a last question here, which is not related to policy, but ethics. I think I mentioned before we started, I read this book, “The Score: How to Stop Playing Someone Else’s Game” by philosopher C. Thi Nguyen. He proposes that every time you outsource your thinking to an algorithm, you are outsourcing some of your humanity.
And you both are experts on thinking. Gideon, you mentioned you teach a class on creativity. I assume you have thought about living an ethical life at some philosophical level, living a meaningful life, being human. What are you thinking about in terms of the risk that cognitive surrender becomes a real problem, with people really losing their humanity?
I mean, you mentioned this a little bit, Gideon, in your last answer. But just from a philosophical standpoint, from an ethical standpoint, how important is it? And should we be thinking about which things we’re okay outsourcing our humanity on?
For example, okay, if I’m buying a vacuum cleaner and I let an algorithm decide that for me, I’m actually imputing a lot of moral judgment—I’m leaving aside some humanity, right? Where is it made? Do I like the laws where it’s made? Do I want to support the people where it was made? There’s actually a lot that goes into that decision. But okay, I’m going to allow it to make that decision for me. Versus: I’m in an argument with my wife, tell me the optimal response, right? So, how should we be thinking about when outsourcing our humanity is an issue? Is that something you’re concerned with?
GIDEON NAVE: I teach creativity. So, the first thing I teach in class is how AI has the chance of homogenizing all of us and creating a kind of prisoner’s dilemma, where every individual is incentivized to use the AI, but as a result our collective creativity goes down, because we become more homogeneous.
I think that the way forward is to not look at AI as something that will replace us in general, unless it’s a very specific task, but as something that can augment us and enhance us. Like, think of glasses, right? I can swim now with goggles that keep the water out of my eyes and even have a prescription in them so I can see better. This is technology.
The same goes for AI. It can do all sorts of things that I cannot. AI has access to the entire internet and can search for things very quickly. AI can execute. I don’t know, just think of cuisine: I want to create a new dish that brings together two remote cuisines, and it can actually go and find out what could go well together, go over it systematically, and do what I tell it to do a million times faster.
So, in this sense, I still bring myself into the process, and AI is helping me. And I think that this is the right way of doing it: use it as a thought partner, where the locus of control is still yours and not the AI’s.
What does it mean to surrender? It means giving the control to the AI. And if you do not surrender, and you use the AI as a partner, as an enhancer, as something that augments you, you’re going to be better off. But again, this requires more effort. It requires maybe knowing how to do it. And that’s what I teach in the classroom.
So, I think for me, ignoring AI is not the right approach. I think learning to work with AI is the way forward. And just imagine—someone said, it’s real if you can imagine it. AI can actually help you imagine lots of stuff. I have an idea for a new product; within five minutes, I can even have a working prototype, whether it’s an app or a musical piece. Think of Suno, which creates music. I can whistle something to Suno, and within five minutes it gives me an entire philharmonic orchestra production of it. And it still came from me.
So, I think we need to find the right way to use AI. That doesn’t mean we stop using AI, but it also doesn’t mean surrendering to AI. It means keeping ourselves in control and treating the AI as a tool that augments and enhances us.
STEVEN SHAW: And I’ll add to that. I’m not religious, but some of the best advice on the ideas you’re raising about the risks of cognitive surrender comes from the official Vatican message from Pope Leo. He says, basically, that if we engage in cognitive surrender, or we aren’t intentional about our use of AI, we might become passive consumers of unthought thoughts. And I thought that was very insightful from the Pope.
But I think what he’s trying to say, and I generally agree, is that we need to be intentional with our thinking. Think first, and then go to the prompt if you’re going to use AI. And I think that falls into the broader category of general life advice: living intentionally, being intentional with your thoughts and actions, so that you aren’t surrendering your agency to AI or anyone else.
TEDDY DOWNEY: Just given that the paper doesn’t actually find any way around cognitive surrender, are you theorizing that more tests need to be done in order to find a way to safely use AI without engaging in cognitive surrender? I mean, is this a hypothetical that exists out there but that we haven’t found yet? Is that basically it? Or is your view that the scope of the paper was too limited to really rule out that you could do this?
Because like when you read the paper, it kind of seems inevitable. It kind of seems hard to escape.
STEVEN SHAW: Sure. You’ve got to leave some room for the sequel, right? So, it’s coming: “Resisting the Rise of Cognitive Surrender.” It’s on the way, and we hope you’ll read it.
GIDEON NAVE: We’ll talk about it hopefully very soon.
TEDDY DOWNEY: I love this so much. That is the perfect way to end this conversation. Probably one of the most intellectually satisfying conversations I’ve ever had in my entire life. The paper is really phenomenal. I hope everyone goes out and reads it. I super look forward to reading the next one. Gideon and Steven, thank you so much for doing this today.
STEVEN SHAW: Thanks for having us.
GIDEON NAVE: Thanks, Teddy. This has been great.