Transcripts

Transcript of Conference Call on “Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity” with Chiara Longoni

Sep 23, 2025

On September 23, The Capitol Forum held a conference call with Chiara Longoni, Associate Professor of Marketing at Bocconi University in Italy and co-author of “Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity,” for a conversation on the paper’s findings and their implications for consumer behavior and the adoption of artificial intelligence. The full transcript, which has been modified slightly for accuracy, can be found below.

TEDDY DOWNEY: Welcome to our conference call on Consumer Attitudes Towards Artificial Intelligence. I am Teddy Downey, Executive Editor here at The Capitol Forum. And today’s guest is Chiara Longoni, Associate Professor of Marketing at Bocconi University in Italy and co-author of “Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity.” Her research focuses on how consumer psychology shapes the adoption of emerging technologies, particularly AI. And we’ll be talking about how consumers’ understanding of AI influences their attitudes and adoption of the technology.

Chiara, thank you so much for being here today.

CHIARA LONGONI: Thank you. Thank you for the lovely introduction. A pleasure to be here.

TEDDY DOWNEY: And before we get started, if you have questions, please type them into the questions pane of the control panel. We’ll collect questions throughout the call and address them later in the call. If you have any issues, email events@tcfpress.com.

And Chiara, I think the first step is just, how did you think about doing this paper? I think it’s a really fascinating topic. I would never have thought of it. So, I’m curious to get inside your head: how did you even begin to think about doing this? And then maybe we can get into the structure of how you went about the study.

CHIARA LONGONI: Sounds great. So, this is going to be at the intersection of being in the trenches of the review process and evidence of how research is me-search. This paper came about because, for another paper, the reviewers basically asked us to test whether the effect that we were studying in that other paper was modulated by AI literacy. So, by how much people objectively know about AI.

And so, we started the research thinking that surely there must be existing measures of AI literacy, given that everybody and their cousin is talking about AI. And it was only when we couldn’t find an existing measure that we decided we needed to basically create and validate our own.

So, it is not entirely true that there are no alternative measures of AI literacy. But what exists is either specific to a platform, like scales measuring how much a person knows about Facebook, or how much people located in Germany, in a very specific occupation, know about computers. Whereas what we needed was something that worked across domains of application and tested conceptual knowledge of AI, how AI works, and the ethical implications of that.

The other limitation of existing scales, and this is something that has come up in how this paper is being received, is that there tends to be a conflation, a muddling, between objective knowledge of AI and subjective knowledge of AI.

So, what we wanted to get at, and what we test, is how much a person objectively knows about AI and how it works, not how much a person believes they know about AI. And so, because we couldn’t find what we needed, we had to roll up our sleeves and create it ourselves. I think in the paper you can find the two versions of that.

So, we started assembling a set of questions, testing, again, conceptual and technical knowledge of AI. And then we narrowed it down. And we have two versions that we used in the paper. One is 25 items. And the other one is a little bit shorter. It’s 17 items. And so, because we needed this measure, we wanted to validate it. And if you want, we can get into the details on how we did that.

And when we had it put together, we thought, as a sanity check, we would see whether it would predict something that a good, valid measure of AI knowledge should predict. So, we thought about AI adoption. We ran studies in which essentially AI literacy was the independent variable, the predictor, and we used measures of AI receptivity: proclivity to use AI, openness to using AI, tendency to offload a task to AI versus doing the task oneself.

So, we ran this study, got the results. We were like, oh, okay. The distribution in the pools of participants that we usually use is decent. So, on the scale, not everybody is failing it, and not everybody is acing it. So, we were like, fantastic.

And then when we looked at the correlation between literacy and receptivity, we found that the correlation was negative. So, we found, what we described in the paper, that the less people know about AI, the more likely they are to use it.
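
[Editor’s note: for readers who want to see the shape of the analysis being described, here is a minimal sketch, assuming hypothetical data, of correlating an objective AI-literacy quiz score with an AI-receptivity rating. It is not the authors’ code; the variable names, sample size, and effect size are invented purely for illustration.]

```python
# Minimal, hypothetical sketch of the analysis described above: score an
# objective AI-literacy quiz and correlate it with a self-reported receptivity
# measure. Data, sample size, and effect size are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# Hypothetical quiz score: number of correct answers out of 25 items.
literacy = rng.integers(0, 26, size=n).astype(float)

# Hypothetical receptivity rating (e.g., 1-7 willingness to use AI), constructed
# here with a slight negative dependence on literacy purely to mirror the
# direction of the reported finding.
receptivity = 5.0 - 0.08 * literacy + rng.normal(0.0, 1.0, size=n)

r, p = stats.pearsonr(literacy, receptivity)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")  # negative r: lower literacy, higher receptivity
```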

And the three of us looked at each other and we thought, okay, surely we are wrong. Something must have happened, because, like many of you I’m guessing, and like the respondents in a number of polling studies we report in the paper, we had the opposite intuition. And if you look at marketing as well, we were talking about this before, general models of the adoption of innovation and technology see knowledge of the technology as sort of a precursor to adoption of that innovation.

Things are more nuanced than that. But generally, the more a person knows about technology, the less uncertain the person feels about that technology, which leads to greater propensity to adopt it.

So, these results did not make sense to us. So, what did we do? We replicated the study. We ran the same study over and over again. And we kept being confronted by the same results. That’s how this paper was born. Because that’s when we realized that what had started as a measurement to be used in a different research project had become a puzzle, something really interesting, because it ran counter to what our hunch was telling us.

TEDDY DOWNEY: Yeah, I couldn’t agree more that it was a counterintuitive and surprising result. How did you define AI literacy? I think that’s a really interesting challenge in and of itself. You mentioned you could get into the weeds a little bit more on that. I’m certainly interested in learning more about it.

CHIARA LONGONI: Yeah. So, in that way, we were helped by existing conceptualizations of AI literacy. So, if you look at the existing scales, there are some sub-dimensions to them. You can roughly split them into two subgroups.

One is conceptual knowledge about AI, what AI is at a conceptual level. And the other is technical knowledge of AI, which entails some programming knowledge, how these algorithms work, but also what ethical safeguards are embedded in them. There are also some questions that are more applied, in terms of privacy, some specific to certain domains, some general across domains, and so forth. But generally speaking, we relied on existing work that has conceptualized what it means to be literate about AI.

And then we used that as a basis to come up with basically a quiz. So, it’s a set of multiple-choice questions. As I mentioned, there’s a version that is longer, and then we narrowed it down to a version that is shorter.

And the first one was developed by us. And actually, the second one was developed with the help of Claude and GPT-4. So, we are very much collaborating with AI on that. And I wanted to mention something about LLMs, because oftentimes now we talk about digital twins and using LLMs to predict what human respondents would say.

So, just like we polled people on X, business managers coming to our classes, and everybody else, we also asked LLMs to make this prediction of the relationship between literacy and receptivity. And they too got the question wrong, which was very much a point of pride for us. Because we were like, okay, not even AI can predict who’s going to really be open to adopting AI.

TEDDY DOWNEY: And so, when you think about the results, you mentioned the reaction to your paper. I know I had a very specific intellectual reaction. I’m curious what has been the reaction to the paper? Like who is coming to you and saying I’m thinking about how to apply this? What’s been the reaction?

CHIARA LONGONI: Okay. First of all, there has been a lot of reaction. Most of the time, sadly, the work that academics do stays confined within the silos of academia. This paper was actually published online at the beginning of this year. And it’s already heavily cited. It went viral on certain platforms.

So, I think that is the product of the fact that, again, it resonates with people in the sense that it’s a question that is interesting, and it’s an answer that is counterintuitive to what people would naturally come to think about.

So, within this interest, there has been a heterogeneity of reactions. Some people took it in a way that is more prescriptive than we actually mean. Meaning, what we do is purely descriptive. We are documenting an effect and testing hypotheses for why this effect occurs. We are documenting that it is those who have lower AI literacy who are more prone to use AI.

And then we came up with some explanations for why that is the case, some alternative explanations. We tested both. And we presented our results. And then, based on these findings, we made certain recommendations. But we are very, very careful not to make prescriptions. And, in fact, you probably saw them. We wrote a couple of companion pieces. Because reading an academic paper is pretty boring and it gets pretty dense. So, we usually like to write a version that is a lot more interesting, a lot more digestible.

But across all these versions, which vary in length, we are very, very careful to make clear that we are not calling on anyone to prey on those who know less about AI, with snake oil, with things that are maybe not even AI but are marketed as such, to exploit the feeling of magicalness that they can feel towards AI. So, part of that reaction is coming, I think, from a misconception that we are preaching that companies should prey on those who know less about AI.

The other reaction that comes from, I think, the paper having gone viral across audiences concerns what we test, which is very much AI literacy. It’s not general knowledge. We do look at, and test, the correlation with general knowledge, so how much a person knows about history or philosophy or geography, and so on and so forth. But our predictor of AI receptivity is knowledge of AI. It’s not how literate or illiterate a person is in general. And we certainly do not test IQ.

So, I’ve also seen this paper being discussed in forums along the lines of, are you saying that people who are not very intelligent are more prone to using AI? And this is completely orthogonal to what we do. We do measure general knowledge, but we don’t even have a measurement of IQ. I will steer very, very clear of making such claims.

So, those are the reactions in general. Oh, and there’s been another reaction, and I can see why people think of it: people liken this effect to, perhaps you’re familiar with it, the famous or infamous Dunning-Kruger effect. If you haven’t read about it, go read it, because it’s really interesting. And I’m sure you will find examples around you that display this cognitive bias.

So, the paper’s title says it all. It’s basically “Unskilled and Unaware.” It shows how it is the people who feel the most confident about being highly knowledgeable in a subject matter who tend to be poorly calibrated about how competent they actually are in that subject matter.

So, the more you think you know, the less you might actually know. And I’m telling you, from now on, you’ll be looking around and you’ll be like, hmm, there’s a Dunning-Kruger exemplar facing me.

TEDDY DOWNEY: Yeah, this should be a very famous paper in Washington, D.C., where we have a tremendous, tremendous problem with that.

CHIARA LONGONI: Yeah, yeah, totally. But I can see how people think that our paper is also a sort of version of that. But recall what I said before. We are not looking at subjective knowledge. So, we don’t know. I may have an opinion, but I like to speak from data. And we have no data to say whether those who know the most about AI might actually be the more “humble” ones, who maybe think they know less, compared with people who don’t actually know much but think they know a lot.

And then, to round out the reactions, most of the reactions actually come from two sides. There is interest from academics in terms of extensions of this work. And this paper lends itself to at least a conversation about how to educate people and increase their AI literacy, and, I think you mentioned this before in our little chitchat, how to communicate about AI products and services, a question that also comes from managers and marketers and developers of these products, to different segments of people who might vary in their level of AI literacy. And so, these are the more straightforward, albeit still somewhat ethical, implications that one can hope to get from these findings.

TEDDY DOWNEY: Yeah. My reaction was that AI companies will face a choice, right? One is a moral path, where you say, hey, morally, it seems like we need to educate and create more AI literacy so that people can make their own decisions. And the other is to lean in, to, as you say, prey upon or exploit, and to steer away from literacy education and toward creating a mysticism, a magical feeling, around AI, to sell more to those people.

And I even look at the paper and I see a geographic effect here, and maybe you can walk us through that. There was a range of receptivity and literacy across different countries. And there seems to be an international effect. Did you get a reaction like that from anyone? And I’m curious, when you were talking to AI companies, is it more the sales and marketing people who reach out? Are they asking ethical questions? Or are they just like, how do I get more? How do I lean more into the mysticism side of this?

Because I come at this from reading a ton of books and doing a lot of interviews about AI and Big Tech choosing financial and growth paths when they’re faced with a moral choice, the moral opportunity to say, hey, we’re going to choose a little bit less growth, but we’re going to have fewer suicide kits sold. Or we’re going to have fewer depressed young women and fewer suicidal teenagers. They’ve opted in the past to go with growth over that sort of moral choice that comes with somewhat of a financial hit. And I’m curious if anyone has had that reaction and how you think about that in terms of this study and its results.

I know that’s a lot wrapped into one question. We’ve got the geographic and the international stuff as well. But I’m very interested to get your thoughts on all this.

CHIARA LONGONI: Absolutely. And I want to say that all your points are very, very valid. And as I was listening to your question, I also feel like maybe I have a glimmer of factual hope, which I’ll get into.

So, let’s begin with the cross-country study. Let me explain what it is for people who, rightfully so, haven’t looked at the paper. Most of the studies, those are experiments that we ran, were conducted in the United States. We tested this effect between literacy and receptivity across a number of populations: students, respondents recruited on online panels, and people recruited from certain types of occupations, and so on and so forth.

But then we wanted to see whether this effect extends beyond the borders of the United States. And to do that, we combined two different data sets, cross-country, so globally. One data set was from Tortoise Media, and there’s a measure in there of AI talent. This is basically, across countries, the proportion of jobs, occupations, education, and skills that are related to AI.

So, disciplines that belong to computer science and AI. And we took that as a version of a measurement of AI literacy. And then we utilized another data set, this time from Ipsos, to have a proxy for what, in the paper, in the other experiments, we used to measure AI receptivity. And there we take a measurement of interest in AI. So, Ipsos runs surveys that are the same across many different countries. There are a number of forward-looking statements that are meant to capture interest in using AI products, and beliefs about the potential for these products to make our lives better. And that served, again, as a version of the AI receptivity measurement that we have in the experiments.

So, by combining these two data sets, we basically took these two measurements for all the countries where we had overlap. And those are the data points that you see in the graphs. And you see that, albeit with geographical differences, there is this trend that carries over cross-country and globally, where once again, it is those who have lower AI literacy, or in the verbiage of that data set, lower AI talent, who show the greatest AI interest, or in other words, the greatest receptivity to AI. And then these data also allowed us, I don’t think you want to go into the weeds of that, to control for the potential effect of other factors. So, we look at GDP. We look at education, and so on and so forth. And these relationships, these correlations, stand even when the effect of other variables is parsed out.

So, the purpose of the study is to show that this is not a U.S.-only effect. Because it could have been. We know, for instance, that AI is welcomed more in Asia than in Western countries. But the relationship between these two variables is not an American or U.S.-only effect. It is something that extends globally.
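
[Editor’s note: here is a sketch of the kind of cross-country analysis described above, assuming two hypothetical country-level CSV files standing in for the Tortoise Media AI-talent index and the Ipsos AI-interest survey. File names and column names are invented; this is not the authors’ code.]

```python
# Hypothetical sketch: merge a country-level "AI talent" measure with an
# "AI interest" measure, then check whether the negative relationship holds
# after parsing out GDP and education. File and column names are invented.
import pandas as pd
import statsmodels.formula.api as smf

talent = pd.read_csv("ai_talent_by_country.csv")      # columns: country, ai_talent
interest = pd.read_csv("ai_interest_by_country.csv")  # columns: country, ai_interest, gdp_per_capita, education_index

# Keep only the countries present in both data sets.
df = talent.merge(interest, on="country", how="inner")

# Simple bivariate relationship first...
print(df[["ai_talent", "ai_interest"]].corr())

# ...then an OLS regression controlling for GDP and education.
model = smf.ols("ai_interest ~ ai_talent + gdp_per_capita + education_index", data=df).fit()
print(model.summary())  # a negative ai_talent coefficient mirrors the reported pattern
```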

And if this is clear, I’ll answer your second question.

TEDDY DOWNEY: Yeah, I had a lot of questions. Yeah, keep going.

CHIARA LONGONI: So, you asked me about the people who reached out to us. I can only speak to those who reached out, and my sense is that that filters out a lot. So, I think the points that you made are valid. And you asked me if there are sort of predators, marketers, who want to use these findings to be more skillful in targeting audiences that might be more prone to follow blindly. I don’t think those people will reach out to us, and they haven’t.

Those who reached out are two sets of people. Either they are policymakers, I know there’s a program within the United Nations Development Programme that looks at these kinds of measurements of literacy levels that are good for society. They wanted to circulate the paper and the measurements within their organizations. And I would imagine that for them it is a way to figure out what I was mentioning before: where is it necessary to increase AI literacy and how to do so? How to design programs that teach these things to the people who need to be taught them.

And then, on the side of AI companies and tech companies, those who reached out are those that felt, and I think earnestly so, validated. There are so many ways, and I’m an AI optimist, in which a company utilizing AI can be somewhat deceitful, and not always out of malicious intent. Sometimes it’s really difficult to define exactly what AI is. But those who reached out to us almost felt proud that they were intending to market their products in full transparency about what the product is and what it does. I wanted to mention a couple more points.

One is that these findings, the applicability of these findings, sort of can be seen in two steps. So, the first is alerting marketers, those that design and deploy and promote AI tools, that they may be speaking to the wrong target segment. If they, like us, like you, share the intuition that they should be marketing these products only to those who know a lot about AI.

So, what we are saying is, first of all, stop and consider that different segments might have different AI literacy levels, and if so, they should be addressed and talked to differently. And this, by the way, is not just about consumer segments on the end-user side. AI applications are everywhere. You can think even within an organization, in hiring and promotions.

There’s AI in medicine, in education, in healthcare. Those are AI tools whose adoption will also covary with the AI literacy of the person who has to adopt and implement the tool. So, first of all, consider that different types of people will have different AI literacy levels, and therefore their adoption propensity and rates will differ. And then what we say is not so much to speak only to one segment for the sole purpose of drawing them to your product, but to speak to different segments in different ways. And it’s not more vicious marketing than marketing done in any other field. You want to talk to your audience in a way that makes sense to them.

And so, explanations that are meant to highlight, for instance, capability are more suited, more appropriate, and better heard by segments that know a lot about AI, that already have a good working knowledge of what AI is and how it works. What they need to be convinced about, not with hollow persuasion but with facts, is the capabilities and performance of AI.

Whereas, to those who have less knowledge of AI, conceptual or technical, the way to speak to them is not to overwhelm them with these performance indicators or these capability explanations, but it is to show them what AI is and what it does.

TEDDY DOWNEY: Yeah, it seems like there’s a fine line between manipulating people and educating them with the marketing.

CHIARA LONGONI: Yes.

TEDDY DOWNEY: And that, to me, is it doesn’t necessarily need to be malicious. But to me that’s sort of where sort of the ethical, sort of that moral, choice might come in. I had a specific question. You mentioned at some point in the paper that there’s a Google AI agent called Magical, which I was unfamiliar with. Is that the name of a Google AI agent already?

CHIARA LONGONI: Yes. You said “already.” I don’t know if it still exists.

TEDDY DOWNEY: Oh, okay. Do you think that the AI companies already know what you found? Do you think that this has already been going on?

CHIARA LONGONI: This is completely my opinion, okay? We’re outside the paper here. I don’t think so. I think you mentioned anthropomorphism before. I think there’s a degree to which even the programmers see a different type of magic. It’s not the magic that we show in the paper, which comes from not knowing how it works. It is the magic of passion. When you’re so passionate about a product, you also see the latest innovation and iteration of what it does as magic. But it’s a different type of magic, you see what I’m saying?

It’s not magic because it looks like it’s conjuring up answers out of thin air, which is the magic that people like me, who are low in AI literacy, see. When, for instance, I see AI composing a song or writing a beautiful poem, I experience that type of magic. I think a programmer experiences a very different type of magic, which is more like a deep, involving excitement and interest in that particular technology and product. But again, this is my opinion. Maybe they had this all along and they used to see AI as magic themselves.

TEDDY DOWNEY: Yeah, that really jumped out at me. I was like, well, that would make sense if they want to lean into the marketing here. And then there’s another part of the paper, there are just so many things that you define here that I think are actually, not controversial, but really interesting to delve into.

Let’s talk about anthropomorphism for a second. You have a small section in the paper where you say people tend to ascribe human characteristics to non-human agents, especially when these agents exhibit behavior that resembles human action. You say there’s an illusory perception that AI has human attributes.

There’s a misattribution of human attributes to AI systems. I would love to talk about that a little bit more, why that happens, how often that happens. I’ve noticed, I read a lot of judicial decisions about AI, the number of times in which, first of all, there is a sense of magic for these judges in how they describe the transformative nature of AI, which has a legal implication: from a technical standpoint, are they transforming the content? But also, there’s a mystical element to some of the ways that these judges use the word.

And then, separately, assigning the rights of humans to AI because they are anthropomorphizing the technology. And you state pretty clearly, this is a misattribution. I agree, obviously, intellectually. But can you talk a little bit about this as a phenomenon, or how you looked at it and how you accounted for it when you did the study?

CHIARA LONGONI: Yeah. So, there are so many directions I could go in answering this very interesting question. First of all, another set of very valid points. If you’re interested in that, there’s a great paper that talks about conceptual borrowing. I’m not the one who wrote this paper. It talks about the inevitability, and also the dangers, of borrowing concepts from other disciplines, across disciplines. And that happens bidirectionally between humans and machines.

So, think about the human brain. We tend to think of it as a machine, as a little computing agent. And then we do the same to artificially intelligent agents: sometimes above our awareness and sometimes non-consciously, we refer to them, we imbue them, with human-like characteristics, abilities, and skills.

So, this is not a phenomenon that is new. And we’ve been doing it not just to artificial agents; we’ve been doing it to artifacts too. We’ve been doing it to computers. We do it to our inanimate objects. And we do it to all sorts of entities, regardless of the level of actual sentience and consciousness behind them.

That said, this habit is very natural. Why do we do it? It comes from the necessity to explain very difficult things using concepts that we are more fluent with, that are easier to tackle. But it carries certain types of dangers. Remember when ChatGPT launched, I mean, we are still talking about hallucinations. The very words that we use to describe successes or failures or calibration in error rates are very human-like. The system hallucinates. The system tells me that I’m right.

And so, these tendencies are exacerbated with AI because, unlike other types of technologies, AI agents, especially since the launch of GPT and with the current rate of improvement in performance, more and more really are performing tasks that used to be thought of by most of us as strictly the purview of human abilities.

So, where things change dramatically between other, regular types of technology and AI is the rate at which AI seems to display the ability to execute tasks that we used to think only humans could do. And this is an angle that we tackle in the paper, I think this is what you were getting at, where we make a distinction based on the skills and capabilities that tasks are assumed to require. You can imagine a range of tasks, and you can imagine that their execution falls on a continuum: from tasks that require skills both humans and AI possess, think about crunching numbers or performing a mathematical operation, to tasks that are perceived to be executable only with skills that are uniquely or distinctly human, think about composing a song, writing a poem, or cracking a joke. So, people of all levels of AI literacy view tasks as requiring either shared capabilities, between humans and AI, or distinctly human capabilities.

Now, where it gets interesting is that people who have high AI literacy recognize that for AI to crack a funny joke, it doesn’t have to possess a sense of humor. Because they know that underlying it is just pattern recognition, making predictions about what has been reinforced as liked by most people. And so, on average, this joke with these words will be perceived as funny. So, cracking a joke is a distinctly human task. But if you have high AI literacy, you understand how AI can do it without actually having any sense of humor.

But to a person that has low AI literacy, it’s kind of like seeing a magic trick. If they see AI cracking a joke, they think, oh, it must possess a skill that I think only humans can have. Because only humans can have an opinion and can crack a joke.

So, this is sort of the why under the why. People who have lower AI literacy tend to view AI performing these tasks, tasks perceived to require uniquely human characteristics and abilities, as an expression of magic. And that’s why people with lower AI literacy tend to view AI as more magical than people with higher AI literacy. And it’s these magical perceptions of AI that in turn explain why lower AI literacy is associated with greater AI receptivity.

And I won’t go into the details of this study, but we basically have a study where we systematically measure and vary the skills and characteristics, whether they’re distinctly human or shared. And then we measure AI literacy and correlate that with AI receptivity.
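
[Editor’s note: a minimal sketch of the mediation logic described above, that is, whether perceived “magicalness” statistically accounts for the literacy-receptivity link, using simulated data. Variable names and numbers are hypothetical; this is not the authors’ analysis code.]

```python
# Hypothetical mediation-style check: the literacy -> receptivity relationship
# should shrink once perceived "magicalness" is added to the model. All data
# here are simulated purely to illustrate the structure of the argument.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
literacy = rng.normal(0.0, 1.0, n)
magicalness = -0.5 * literacy + rng.normal(0.0, 1.0, n)    # lower literacy -> AI feels more magical
receptivity = 0.6 * magicalness + rng.normal(0.0, 1.0, n)  # more magical -> more receptive

df = pd.DataFrame({"literacy": literacy, "magicalness": magicalness, "receptivity": receptivity})

total = smf.ols("receptivity ~ literacy", data=df).fit()                 # total effect (negative)
direct = smf.ols("receptivity ~ literacy + magicalness", data=df).fit()  # direct effect, mediator added

print("total effect of literacy: ", round(total.params["literacy"], 3))
print("direct effect of literacy:", round(direct.params["literacy"], 3))
print("effect of magicalness:    ", round(direct.params["magicalness"], 3))
```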

TEDDY DOWNEY: And I think that fits really nicely with my question around how you talk to policymakers about this. Because to me, if I’m a policymaker, I’m particularly concerned with the predatory, or let’s just call it the more manipulative or malicious, side of the marketing opportunity here. And to your point, it seems like it’s focused on the creative side: poems, art, music, things that humans are really uniquely suited to, or that people historically associate with human ingenuity. That also happens to be an area where there’s predatory conduct around copyright and theft of intellectual property or other creative works of humans.

Is it fair to say that if you were making recommendations to policymakers about how companies might act in a predatory way, to come up with laws or policies that would discourage that, it would be in those areas? Would the focus be on the creative side you just talked about? It’s like, okay, when AI is doing creative works, or seemingly creative works, that’s when it has the ability to come off as magical. Or when you’re talking to policymakers, is it a broader conversation?

I mean, it sounds like when you talk to policymakers, they’re more focused on how do I create more literacy? As opposed to how do I ostensibly discourage predatory conduct? But if you were getting that question, how would you think about it? Or am I thinking about it wrong?

CHIARA LONGONI: It’s a valid question. And I’m not a policymaker or an advisor on policymaking. What I would say is, as far as I understand, policymaking’s first concern is to avoid harm. And harm can come from two directions. Either from under-adoption, when adoption of something, in this case AI, would be beneficial, or from over-adoption, for instance of AI snake oil, when adoption is miscalibrated to the actual benefits of AI.

And so, from a policymaking perspective, my objective would be to recalibrate adoption to expectations and explanations. And that would cut across the tasks that you labeled as more creative or more emotional. I agree with you that those seem to be the underlying qualities of these uniquely human tasks.

But you would want to regulate also the tasks that seem to be shared. Because those tap into the actual capabilities of AI, what it actually can or cannot do. And both of these, the more creative and emotional tasks and the more computational and objective tasks, can produce harm.

So, you’re nudging me to think about a legal perspective. I’m in Europe now, so I think about the European Union’s AI Act, which is very much a legally binding classification of AI technologies, some of which are forbidden, even prohibited from being implemented. And I may be wrong, but as far as I understand it, some of them are not even commercially available yet.

For instance, one of the classes is AI tools with manipulative intent. They cause harm because they are persuasively manipulative. Another is tools that do surveillance and facial recognition, and so on and so forth, if I’m not mistaken.

So, you see, it’s difficult for me to map those onto strictly creative or emotional tools, or strictly objective or more computational tools. But I can see how the principle, the regulatory principle, should be that of preventing harm. And I can see how, linking it back to our paper, these tools could appeal differently to people with different literacy levels.

And maybe this is the point, and feel free to interrupt me. I mentioned before, and we haven’t fully tested this, that there’s a glimmer of hope in our findings. Because one of the alternative explanations that we tested was that of fear of AI and the ethicality of AI.

So, it was the idea that maybe it’s not magical perceptions that drive low literacy people to adopt AI. Maybe it is that people that have lower knowledge of AI view AI as more capable. Or maybe they are less fearful of AI. Or maybe they view AI as more ethical to use than those that have higher knowledge. And sure enough, we find the opposite even in these regards.

So, it is not that people with lower AI literacy want to use AI because they don’t fear it as much as those with higher knowledge. It’s actually the reverse. So, the glimmer of hope lies in the fact that a predatory company that tries to build the wrong sense of allure around AI might actually instill fear in potential adopters who have lower literacy. And if so, this would actually be counterproductive for them. The very same reasoning applies when it comes to capability and when it comes to ethicality.

TEDDY DOWNEY: That is good: look, if you want to act in this predatory way when it comes to marketing, there are some risks there as well. Not just if some lawmaker finds out, but in potentially turning the user, the customer, against you.

And also, I thought it was really interesting that obviously there are two different phases of this. One is regulating and assessing the harm or benefit of the underlying technology, the underlying tool, the underlying capability, versus the marketing, right? And it’s almost like those are two separate things here. And I think that presents a challenge as well.

It’s like regulating marketing is a tricky thing in and of itself, beyond just requiring it to be honest and truthful. But we obviously have a lot of policymakers and decision makers listening in to this. Do you have any other thoughts for them to consider, anything you really want them to take away from your paper? And is there anything else you’re looking at as a follow-up to the paper that you want people to know about?

CHIARA LONGONI: Just a couple of things, random thoughts that are coming to my mind as I listen to you. One is you mentioned that it’s difficult to regulate AI tools, and then there’s a piece about marketing. And that had me thinking about the very definition of AI, because that is part of marketing too, right? When you present a tool, do you call it AI? Do you call it technology? Do you call it automation? What do you call it? And that’s a difficult, an ethical, decision, isn’t it? In part, it is a factual decision. But in part there’s also a gray area between what we should be calling AI on the basis of facts and what we should be calling AI on the basis of regulation and legality.

So, it is no coincidence that the legal definitions of AI across regulations in Europe and in the U.S. are not the same. I think the latest administration’s AI action plan doesn’t even include the words artificial intelligence in its regulation. Maybe it talks about systems or something like that. And this is because, from a policymaking perspective, the definition of what counts as AI is incredibly important, because it determines where the regulation applies.

And that should also take into consideration more technical, factual considerations, but also more social considerations. Because it also matters what people consider AI. Do people, do consumers, think that Alexa is AI? And is Alexa regulated legally, or is it not? These are questions that are both philosophical and questions of business. You mentioned before trade-offs between adoption and doing the moral and ethical thing, and so on and so forth. And, of course, they have legal ramifications as well.

You also asked me if I have follow-up work. With this specific paper especially, we are collecting measures of AI literacy longitudinally. I think we linked it in the Harvard Business Review piece that we wrote. It’s just a free website where people can go if they’re curious about their level of AI literacy. And, of course, if they want to leave us this information, we are simply collecting the AI literacy score. So, they get a score and we record that score. No other information is collected, and everything is fully anonymized.

But basically, we are interested in tracking AI literacy over time. We are also exploring AI literacy in connection with perceptions of the risks, and the trade-offs between benefits and risks, of AI. And then, because you mentioned intellectual property rights before, but here we would need another hour to talk about that, there is a different paper that we have in the works, which actually looks at intellectual property rights.

So, we’re looking at perceptions of whether AI can own the content that it produces, and the consequences of that differential ownership. Because, spoiler alert, AI cannot own something to the same extent that a human creator can own what they make. And that has repercussions for how misappropriations are viewed morally.

So, what we find, we call it the human-AI unethicality gap: stealing from AI is perceived to be less unethical than stealing from a person. The reason being that AI cannot own what it produces. So, what it makes is sort of finders keepers. And that has all sorts of repercussions.

TEDDY DOWNEY: It reminds me of, because ostensibly when you’re putting in the input, you’re prompting the AI to do something, it’s like when you’re playing a video game and people record themselves playing video games. I’ve seen this as an interesting IP issue. It’s like, well, I’m playing the video game.

CHIARA LONGONI: That’s so interesting.

TEDDY DOWNEY: And so, if I just take out the music, or I take out something that’s not totally me doing it, I get told, it’s not your IP, it’s my IP.

So, it reminds me of that debate where it’s like, well, I’m putting in the input, but you’re putting in all the software. And whose IP really is it? But as you mentioned, I’m very interested in that. I look forward to that paper so I’ll have a chance to invite you back. Because this has been absolutely fascinating. You’re one of the very few people I’ve interviewed who has thought about marketing, moral and ethical questions, the technology itself and how it works, and the general perception of things. This is just such a rich way to think about the world and technology.

I’m super grateful that you’ve shared your expertise here and I can’t wait to read your next papers. And I can’t thank you enough for doing this. This has been really, really amazing.

CHIARA LONGONI: Thank you. Much appreciated.

TEDDY DOWNEY: Yeah. And thanks to everyone for joining us today. And thank you, Chiara. And this concludes the call. Thanks everyone. Bye-bye.