Transcripts

Transcript of Conference Call on Zeta Global and Data Collection in the Adtech Industry with Brian Ray and Joseph Turow

Nov 07, 2024

On October 30, The Capitol Forum hosted a conference call with Brian Ray and Joseph Turow to discuss Zeta Global and data collection practices in the adtech industry. The full transcript, which has been modified slightly for accuracy, can be found below.

ETHAN EHRENHAFT:  Awesome, hi everyone. Thank you so much for joining us and good afternoon. Welcome to our conference call on Zeta Global and Data Collection in the Adtech Industry. I’m Ethan Ehrenhaft, the Technology and Privacy Correspondent at The Capitol Forum.

Today’s guests are Brian Ray, Director of the Center for Cybersecurity and Privacy Protection at the Cleveland State University College of Law, and Joseph Turow, a Professor of Media Systems and Industries at the University of Pennsylvania’s Annenberg School for Communication.

Today, we’ll be discussing privacy and data collection practices within the adtech industry and particularly those of Zeta Global. In 2017, Zeta acquired Disqus, a commenting platform service that claims to be the web’s largest conversation platform.

For audience members who are new to our work, we’ve written several stories on Zeta and Disqus’ privacy policies as examples of potential dark patterns employed by the adtech industry to collect millions of consumers’ personal data for use in targeted advertising.

Brian and Joe, thank you so much for joining us today.

And before we get started, I’d like to go over a few housekeeping items. For our listeners, you’ve joined the presentation using your computer’s speaker system by default. If you prefer to join via telephone, just select telephone in the audio panel and the dial-in information will be displayed. And then lastly, to submit questions to today’s speakers, please type them into the questions section of the control panel.  We’ll collect questions throughout the call and address them during the Q&A session at the end of today’s conversation.

With all that said, we’d love to dive into our discussion and I think get started with a bit of the big picture. So, I’d love to discuss how data collection within the adtech industry has grown in recent years and why it’s becoming an increasingly large part of this data ecosystem that we’ve seen.  So maybe I’ll turn things over to Joe to get started.

JOSEPH TUROW:  Okay. Recently is a tough term. I’d say in 2018 there were two big developments. There was the Cambridge Analytica scandal, we might call it, and the Equifax breach. I think that turned a lot of heads in government, both state and national. And then the question is, how much has it made a difference?

There was a flurry of attempts at national privacy legislation that have gone nowhere so far. States have tried to pick up the pieces, one could argue, California with the most effort and perhaps Illinois with BIPA, its biometric privacy law. In general, though, I would argue that companies have figured out how to get around a lot of this stuff, and data collection is taking place at a rate at least as great as before 2018.

ETHAN EHRENHAFT:  All right. Well, thanks for that overview. Brian.

BRIAN RAY:  I just want to jump in because I’m curious about Joe’s perspective. So, I’m the law guy, not the industry guy, but I read an interesting analysis. Clearly the trend line in state laws is at least toward opt-out, right? And, of course, Apple now is requiring opt-in for certain circumstances. And you’re absolutely correct, at the national level, we keep having these attempts that don’t get passed for reasons that don’t really have to do with the substance as much as—a little bit the substance, but really the procedure around who’s going to control this and whether there will be private rights of action. Just curious, the analysis I read seemed to suggest that maybe we’re heading in a direction where ad firms are going to have to recalibrate, but it sounds like you’re skeptical of that. You think they’re going to find ways around it, Joe?

JOSEPH TUROW:  Yeah, I think recalibration is an interesting idea. I think what we will have is a movement—and we’re beginning to see this with OneTrust—a movement toward opt in, and that’s going to cause companies to use more artificial intelligence to make inferences about people, often based on smaller samples. But as you can see with Zeta Global, they’re moving full steam ahead and contending that it’s all opt in, which is kind of bizarre, but that’s the way it is.

ETHAN EHRENHAFT:  And real quick before we dive into Zeta and their policies, would love to kind of just define off the bat what cross-context behavioral advertising is for our listeners. Because I think that’s a term you hear tossed around a lot in policies and kind of terms and conditions, but the average consumer might not really have a general understanding of that. So maybe Joe.

JOSEPH TUROW:  Well, you guys can correct me. “Cross-context” is an interesting word that doesn’t always get attached to the term behavioral advertising. Basically, it means tracking you wherever you go.

I mean, that’s the real advance that’s been happening in the last ten years. Most people think they’re tracked online, whatever that exactly means, but maybe they’ll know they’re tracked on their phones. They’re tracked on the web. But more and more companies are tracking you when you watch TV. They’re tracking you when you go in stores.  And I think that’s what companies mean when they talk about cross-context.
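[Editor’s note: a minimal, hypothetical sketch of the mechanics being described here. The domain tracker.example and all names are invented for illustration; real adtech stacks are far more elaborate, but the core of cross-context tracking is one identifier recurring across unrelated sites.]

```typescript
// A third-party "tracking pixel": publishers embed
// <img src="https://tracker.example/px.gif"> in their pages. Every request
// carries the same tracker cookie plus a Referer header naming the page the
// visitor was on, so one ID quietly accumulates a cross-site history.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// The classic 1x1 transparent GIF payload.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

const server = createServer((req, res) => {
  // Re-use the visitor's existing tracker ID, or mint one on first sight.
  const cookies = Object.fromEntries(
    (req.headers.cookie ?? "")
      .split(";")
      .map((c) => c.trim().split("=") as [string, string])
  );
  const id = cookies["uid"] ?? randomUUID();

  // The Referer header reveals which site embedded the pixel: the sports
  // page, the political forum, and the shopping site all report the same ID.
  console.log(`visitor ${id} seen on ${req.headers.referer ?? "unknown page"}`);

  res.setHeader("Set-Cookie", `uid=${id}; Max-Age=31536000; SameSite=None; Secure`);
  res.setHeader("Content-Type", "image/gif");
  res.end(PIXEL);
});

server.listen(8080);
```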

BRIAN RAY:  And it’s both part of the evolution of the power of these tools, but also a little bit a reaction to how privacy laws started. So, privacy laws define personally identifiable information. Now they have categories of sensitive information. These are the protected categories. And they started out as very direct, concrete things.

And then, of course, the advertisers will say, “Well, we’re not collecting that. We’re not collecting Social Security numbers. We’re collecting information that we then analyze in aggregate ways that allow you to nonetheless target individuals.”

And now, increasingly, the newer crop of state laws, including the latest iteration of California’s laws, are recognizing that when you collect those kinds of information, because they can be put back together to identify people and then do things to them that regulators are concerned about, it’s sensitive.

And that term of art, cross-context behavioral advertising, is in California’s law, and it’s defined more broadly than in the newer state laws, which use the term targeted advertising; it sounds broader but is in fact defined more narrowly. So, there’s a legal dimension to it. But in essence, yeah, I think that’s what we’re talking about: how these advertisers know so much about us, even though we didn’t think we told them anything.

ETHAN EHRENHAFT:  And would love to get into some of the California privacy laws, I think a little later on. But starting off by looking at Zeta again and specifically the role it plays. In its latest 10-K, the company claims to have one of the largest compilations of personal data relating to U.S. and international consumers in the world. They say that data set consists of “240 million opted-in individuals in the U.S. and more than 535 million opted-in individuals globally.” So maybe Joe, to start off, what initially struck you about Zeta, the size of that data set, and then the claim that these millions, hundreds of millions, of data points are opted in?

JOSEPH TUROW:  Well, two things. The first was the idea of opt-in, which is a fascinating phenomenon. And as I said, I think we’re going to see more of that. And the way it’s done is also interesting and I would argue deceptive.

The second is, do [you know] how long their privacy policy is? I actually read it today. The privacy policy is 10,578 words. And it has three parts to it. We could talk about the second and third, which frankly confused me. But basically, what this company is arguing is that it can be an all-purpose identifier for marketers who want to use email and other forms of connection with consumers to essentially figure out what they’re interested in, partly by tracking them and partly through artificial intelligence, and then sending them ads that will be specific to the interests that have been inferred and presumably tracking whether those ads work.

ETHAN EHRENHAFT:  Right. And Brian, for you, any initial takeaways, having looked at that policy and some of these points that Joe was bringing up, especially about just this concept of re-identification?

BRIAN RAY:  Yeah. And so, just to take the top point, I’m very curious as to what they mean by opt-in. I assume that they maybe mean that there was some form of consent.

And I know we’re going to talk a little bit about some evolution around what you need for consent later. But yeah, I agree with Joe, a little skeptical about what that really means.

And then absolutely, Zeta is clearly, as Joe also mentioned, very carefully trying to tread the evolving lines around privacy requirements in ways that don’t meaningfully circumscribe what they do. And so, clearly, they’re being careful in some ways, elaborating and being very specific about definitions and things, but it seems to me they’re moving very squarely into areas that have raised concern and regulatory scrutiny.

ETHAN EHRENHAFT:  And Joe, could you also talk a little more about just this segmentation of consumers into categories that might include potentially sensitive information, like certain health conditions or religious and political affiliation?

JOSEPH TUROW:  You brought this out in your piece, and it’s very interesting. And the privacy policy speaks about it as well. They really do say that if you give them the information that’s sensitive, they will use it.

And they also say, while they don’t look at your health data, meaning some medical specifics from a hospital, for example, or a doctor, they infer it from the kinds of things you might say or the kind of places you might go. They’re not the only ones that do this, but they’re so upfront about it.

One of the things that was interesting about the privacy policy, and also looking at their website, is that they are at once seemingly easy to understand, and at the same time, extremely vague on important topics. And the idea of finding out, for example, if you’re a gay man, or knowing what kind of diseases you’re interested in, is part of their stock-in-trade. They don’t say who would be interested in this. They don’t talk about the specific advertisers they work for. But they have no shame in using that information.

ETHAN EHRENHAFT:  Brian, moving a bit into just other notions of consumer surveillance. Last month, the FTC released an 84-page report which examined consumer surveillance practices of major social media companies that are used to build these targeted advertising campaigns that Joe was talking about. But that report also detailed the FTC’s authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, including unfair data collection practices.

So, I was wondering if you could talk a bit about Section 5 enforcement specifically and how it might be applicable to some of the practices that we’ve been talking about in the ad tech industry.

BRIAN RAY:  Right, and so that report very specifically looked at this kind of activity in the context of large social media platforms deploying it on user data. So, there were sort of two dimensions to that. It’s a little bit different when we’re talking about a company like Zeta that is directly selling the capabilities of developing targeted advertising programs.

And so, one of the big critiques in there was, well, these are social media platforms. Even with the kind of consent that’s currently required, maybe in this day and age we should expect that consumers understand that when they’re using social media platforms, their data is being exploited in these ways. But that was one of the critiques there: I’m using these as a service for a certain thing. And then these companies are using the captive data they have, the captive information they have, to sell lots of advertising and creatively exploit all that great information we’re providing for free because we want the benefits of the social media.

But then the FTC went into very great detail around what I would characterize as the problematic dimensions of this. And the report is careful not to specifically call out anything as necessarily violative of Section 5.

And just to take a step back, the Federal Trade Commission’s core authority is to prevent deceptive and unfair practices with respect to consumers. It has, I mean, not broadly, but somewhat entrepreneurially, interpreted that to really give it very broad-based authority over privacy as well as data security. And privacy practices can straddle both deceptive and unfair.

But in this space, and we talk about ad tech, recent enforcement actions from the Federal Trade Commission clearly are focused on these kinds of activities, although almost always with some added dimension where there’s evidence that the entity involved did not fully transparently disclose in its privacy policies, in the consent that it obtained, how it was actually using the information that it was collecting or outright lied in some cases. But that’s one of the key differences.

In this report, it’s a kind of policy statement by the FTC raising attention to the fact that, hey, this stuff is happening. People don’t really fully appreciate the depth of it. It is concerning and poses risks.

But one of the main calls to action at the end is: we need new laws. We need greater authority to deal with this, because these companies tend to be careful and, as Joe said, track evolving legal lines in ways that allow them to continue to do it.

JOSEPH TUROW:  Can I make a point about the FTC report, please? I found it very interesting, and it brought up some issues that people don’t typically think about. But I had a couple of frustrations with it. And after I read it, they almost got me annoyed.

First, they started taking the data in 2021. That was three years ago. If a student took so much time to do a research project, even a dissertation, we’d be very concerned. The problem with having data from 2021 analyzed now is it’s really easy for companies to say, oh, that was then. We’re not doing it now.

The other thing about it is they never mention a company. Everything is anonymous. And the problem with that is it’s almost as if the FTC’s afraid to say whom they’re dealing with. Why don’t they call out the people that they’re talking about? Certainly, it’s not because these people did them a favor to give them the data. They were required to.

So, it’s a very strange report. And if I remember correctly, and I can’t give chapter and verse here, somebody connected to the FTC actually said that they don’t necessarily agree with the staff suggestions. So, this is a staff report, not a commissioner report. It’s old data. And they don’t mention the companies. So, it hasn’t gotten a lot of publicity, I think, specifically for those reasons.

BRIAN RAY:  So, just to add a little bit of context there, Joe. They do, at the very top of the preface, call out the companies that they collected the data from. But you’re right, the report is very careful not to tie any individual finding to a named company. And the short answer is politics. It’s a divided commission, with fairly sharp disagreements around how the FTC ought to be approaching this. And so, when staff develop reports like this, it’s certainly not a regulation, and it is a policy statement in a sense, right? But it’s got to tread that line.

JOSEPH TUROW:  This is exactly the problem the FTC has had over the last several years. On the one hand, they’re very concerned about the issues of privacy. On the other hand, they have to say, well, we don’t want to kill the goose that laid the golden egg.

The punishments they’ve had for companies have been rather weak when you think of the power of those firms, even the Facebook agreement. I mean, we could talk more about this, but it’s very frustrating to see data from three or four years ago being used in a current report.

ETHAN EHRENHAFT:  What did you both also make of the report exclusively focusing on social media and video streaming platforms? I know to a certain degree, when you’re writing a report that big, you have to narrow the focus. But it does allude to, again, this large network of third-party data brokers and adtech firms, without diving too extensively into what that system looks like beyond the big-name companies we’ve all heard of. So curious to get your take on that.

JOSEPH TUROW:  Well, I agree with you. In the so-called golden age of privacy regulators, I’m thinking of Jessica Rich, Edith Ramirez, and Jon Leibowitz, they would talk about, particularly Rich, data providers. And, if I remember correctly, there were even a couple of attempts in Congress to get laws about this. But nothing’s happened.

And companies like Zeta, even as they—and you pointed this out in your article—talk about how they might be hindered by laws that come about or by some actions that might hurt them, they’re moving full steam ahead.

BRIAN RAY:  So again, I think politics is the explanation for why the big-name social media companies were the focus—and also Amazon; maybe Amazon’s a social media company, I don’t know, but it’s not a direct data broker.

Now, on the one hand, it’s part of it. Don’t forget, part of the report, and part of the FTC’s jurisdiction, is antitrust. And there’s a fairly large analysis of the anti-competitive effects of these platforms. I’m not an expert, but I don’t get the sense that the adtech industry has the same kind of consolidation or market power that these entities have.

So, they’re getting into that intersection. But look at the date of the very first footnote, the bipartisan statement by a Democratic and a Republican commissioner calling out the problems around these platforms: 2020.

And recall, and we’re still dealing with this today, those concerns are overlapping, but not the same. There are concerns on the right that these platforms are inappropriately curating algorithms in ways that amplify liberal or anti-right content. That’s one set of concerns. And then there are concerns—again, bipartisan concerns—around protecting consumers. But I think that helps explain why they’re focused on this.

Now, if you look at their enforcement actions, they’re targeting smaller entities. But again, they have to have that hook of something that is outright deceptive. Which, to Joe’s point, is one of the things they say here: we need better tools. We need legislation. And they’ve been saying this forever, because they have a very odd kind of prospective regulatory authority. It’s very, very difficult for them to pass the kind of regulations that most agencies pass. So, they can’t.

And I would say [Lina] Khan has been, I mean, there’s even a recent New York Times article saying some Democrats are worried about whether Khan will survive even a Harris administration, because she has been among the most outspoken.

And Sam Levine, who is the lead author on this, I believe, he spoke at our conference and was one of the most direct about these kinds of risks, which are a little more complex than the kinds of direct privacy risks that the agency has historically dealt with or addressed.

ETHAN EHRENHAFT:  Yeah, and before we move on from the FTC report, I was curious to also get your take on children’s privacy, because this was something the report addressed as well. In Zeta’s privacy policy, you’ll see that “the site and services we provide are intended for general audiences,” that they are not intended for use by persons under the age of 16, and that Zeta will delete personal information found to belong to a user under 16. So, could you talk a bit about COPPA, or how the report tried to address this kind of claim that “we’re not aware of children using our service” when these are such expansive services?

JOSEPH TUROW:  Yeah, I think that at least they went to 16. It doesn’t hurt them to say this. There’s a large enough audience in general that they don’t have to imply that they’re going after kids too.

Obviously, the 16 number is something that the U.S. has been fighting over for years. It’s crazy. And it doesn’t seem to be changing very much soon. I think companies are so wedded to the notion of teenagers that if Zeta goes after 17- and 18-year-olds, maybe they feel that’s enough. And it makes them look good to say they don’t go after anyone under 16, or at least that they try to figure out if a person’s under 16.

In general, I think the issue is how we think about harm. One of the big problems overall in federal regulatory activity, particularly at the FTC, is the understanding of harm. And this is true in class action suits, where you have to somehow prove physical or health harm or some other kind of monetary harm. And at the FTC—correct me if I’m wrong, Brian—a first-time offense can’t get a monetary punishment. Am I correct? It has to be after the first time?

BRIAN RAY:  So, let me circle back to that, but let me address the children’s issue first. So, we’ve got sort of nested, or varying, consensus within the United States. And one thing’s clear: children always raise heightened concern. And so, the FTC has a separate statute, COPPA, which gives it direct regulatory authority there. And they’ve been able to do much more specific things.

Now it’s tagged to children under 13. And so, one of the things this report identifies is, A, these companies are being sort of willfully blind, certainly not at all proactive, around trying to identify whether children are using their platforms. And in fact, they clearly know, right? Because of the nature of the content being promoted in the ads they’re selling, that children are being targeted and that children’s accounts are really being captured by these services. And so, that’s one of the things they say.

And then the other related issue there is, well, there’s a kind of moral sense that, all right, 13 is what the law says, but come on. Do we think a 14-year-old ought to be susceptible to this? And so, there’s this sense of moving the line up somewhere shy of 18. And so, you’re seeing that.

And then, of course, the state laws as well are being more specific and having more specific requirements with respect to children. And some of these companies have recently reacted to this somewhat.

Now, it might be window dressing to Joe’s earlier point, but they have taken more direct action. Meta’s done some things to try to be more proactive around identifying children’s accounts and creating more limitations and giving parents a little bit more control, but it still puts a lot on the user.

JOSEPH TUROW:  What about, am I correct, Brian, about this idea that the FTC can’t financially punish a company for the first offense?

BRIAN RAY:  Yeah, so the FTC has limits on when it can impose monetary penalties and it has to go through a much more extended process to do that.

JOSEPH TUROW:  Right. So, that’s been a problem too. And basically, as I suggested earlier, it’s chump change for some of these companies. They have, it seems to me, made a calculation, particularly post-2018, that they can pay off the federal government and the state governments for certain things.

Like Facebook, essentially in its latest large settlements, made a deal with, I think it was the State of Texas, and the federal government, saying, “We’ll pay you. But this means that you’ll give us warning ahead of time if we’re doing something bad, and you won’t go after us the way you went after us now.” So, they’re giving themselves a lot of breathing room in exchange for that money.

ETHAN EHRENHAFT:  All right. And putting a pin in the FTC report for a minute, one of the things that we thought made Zeta a bit unique as an adtech company was its ownership of Disqus, which is this online commenting platform that essentially functions as Zeta’s own little social media, and I hesitate to say little, but social media network, directly feeding it users’ personal data. And again, claiming to be the web’s largest conversation platform, with something like 50 million comments per month across millions of different websites. So, signing up for Disqus only requires you to enter your name, email address, and a password, and then, with the click of a button, agree to Disqus’ privacy and terms of service policies.

So, Joe, we also got a question from a listener about this. What might be deceptive about that opt-in mechanism and what do you make of a consent model like this?

JOSEPH TUROW:  Well, it’s questionable as to whether that’s really opt-in, the way I’ve seen it look. Essentially, what they do is they give you a list of what they’ll be taking. And then they say, if you go in here, we’ll be doing that. There’s no affirmative “I agree” statement for this or ability to think about the various possibilities among the choices.

It astonishes me that someone who would read the particulars of what they do, for example, with the voting discussion, would still do it. But I have a feeling that a lot of people just shrug and go in there not understanding the particular possibilities of having those data and what those data can do. People, particularly with apps, want to just get to the app and they go past a lot of this stuff. So, opt-ins are not necessarily going to stop a lot of data extraction.

ETHAN EHRENHAFT:  And of course, Disqus is usually used as this commenting widget on a website that might be politics-related. It might be sports. It might be shopping. So, the commenting widget is ostensibly there just to facilitate comments. But then, of course, it potentially leads to all these other uses.

JOSEPH TUROW:  Did you say The New York Times uses it?

ETHAN EHRENHAFT:  The Wirecutter, their product review site.

JOSEPH TUROW:  Wirecutter, right. Which is kind of ironic, because they supposedly care about your web health, economic health, whatever. But I don’t think most people would stop to read all the data that they suggest they take.

ETHAN EHRENHAFT:  Brian, anything you want to add on that? Or just this idea of reasonable expectations of a consumer when they’re visiting a site and might just be clicking either to comment or to accept cookies without understanding?

BRIAN RAY:  Right. I mean, Joe said it correctly. Especially if I’m motivated to comment on something—I’m really interested, I want to do it—I’m almost certainly not paying close attention. I mean, what Disqus is doing is standard practice: you enable the cookies, and in enabling the cookies, you accept the privacy policy. The privacy policy then discloses that it will sell or share, and in this case it’s sharing, because Disqus is owned by Zeta. And then if you go to Zeta’s policy, or actually just their description of how they operate, they’re aggregating that information to build a richer data set, connecting it with customers’ data, identifying when people who have used Disqus are visiting the customers’ sites, and then creating targeted advertising around that.

I mean, that is one of the clear trends in state law, starting with California: you have to be able to opt out. And we are evolving toward better technical mechanisms that would make that easier. Colorado, for example, requires one; there’s a new technical standard out there that would allow a uniform opt-out, as opposed to what you have to do now. It used to just be either accept or not, and you had to go really deep in. Now most cookie settings at least have something that will allow you to go deeper and select functional versus advertising cookies.

ETHAN EHRENHAFT:  Yeah, and I want to be sure to save some time here in the second half for talking about what California has done specifically, because they seem to be on the front end of a lot of this stuff. So, maybe starting with, Brian, if you don’t mind, walking the audience through the CCPA and CPRA, these new California privacy laws that have emerged over the past few years, and how they seek to target some of the dark patterns we’ve talked about.

BRIAN RAY:  So, on dark patterns specifically, broadly speaking, California followed the GDPR. They were a little bit different, but they followed Europe in creating a set of very specific data privacy rights and giving consumers a greater set of tools with which to try to at least get some handle on what data is being collected and to exercise some control over it. It still puts a lot of onus on the consumer to educate him or herself and to do something.

With respect to dark patterns in particular, you have to highlight the recent Enforcement Advisory. So, dark patterns, I mean, the Federal Trade Commission has used this term. It comes from the academic literature. I think it appropriately sounds somewhat nefarious. But in terms of what the CPPA, the agency, has done, they’re defining it very specifically as any mechanism that makes it harder for a consumer to opt out than to opt in.

And so, it’s what Cass Sunstein, or behavioral economists generally, would have called a nudge. If it’s a dark pattern, the nudge imposes a higher set of transaction costs on the consumer to exercise these rights, in particular, in this case, to opt out, to not consent, than it does to consent. So, it’s trying to keep them trapped in. And, to Joe’s point, it really taps into the natural inclination not to make the effort to either understand what’s going on or to withhold consent to the collection and use.

JOSEPH TUROW:  Yeah, I think that’s a fascinating approach that they took this idea of the sort of relative energy that it takes. I also think that generally speaking, we’re going to have to figure out ways, structural ways, to get around the problem of, as Brian said, putting the onus on the person.

Frankly, in this piece that you wrote, you have a little picture of the PredictIt opt-in, supposed opt-in [to the PredictIt political betting market owned by Zeta]. [The privacy disclosure is] only five points. If you read those five points, you say, “My God, what am I doing?” And yet, obviously, millions of people do this. So, it can’t simply be a matter of giving people some basic information. Because people (a) don’t know where the information is going, (b) don’t understand the depth and breadth of the activities, and (c) are in a rush.

So, for example, one of the things I’ve always thought should be done with apps is this: when you want to download an app and you have to approve the privacy policy, my feeling is that the company should send you the privacy policy by email, without being allowed to use your email address for anything else. And then, within a day or two, you should be able to send the company back a statement that you don’t want them to use your data. So, give people some time to reflect. At this point, that’s not happening.

ETHAN EHRENHAFT:  Yeah. And Brian, you also alluded to the CPPA’s second-ever Enforcement Advisory, issued at the start of last month, which called out some of these dark patterns, especially this concept of symmetry in choice, the need to have symmetrical opt-in versus opt-out consent systems. So, I was wondering if you could talk a little more about the significance of that being the issue the CPPA decided to home in on in only its second-ever advisory.

BRIAN RAY:  Well, so there’s inputs and outputs. And so, on the front end, there’s been long-standing concern about consent mechanisms for all the reasons Joe mentioned. We’re not going to go look at what we’re actually consenting to. We just know that. And so, how do you get meaningful consent? And so, this is an effort to target that front end.

So, one of the things you’re allowed to do is opt out. Well, okay. And again, Joe’s commentary on how entities have evolved to avoid this: well, okay, we’ll allow you to opt out, but we’re going to make it hard. And so, that’s what this is attempting to address. It says you can’t make it harder to opt out than it is to opt in. You can’t make it substantially easier to opt in. You’ve got to at least make that choice even in terms of the transaction costs involved. So, in some respects, it seems very, very obvious, but it will change a lot of things.

Now, the guidance itself just says you have to think about these things. And they give some scenarios without saying which ones are better or worse. But it’s saying you’ve got to think about it. You’ve got to ask: is the language easy to read and understandable? Is it straightforward? Is the path to saying “no” longer than the path to saying “yes”? So that’s the concrete definition there. Does it make it more difficult? That still won’t necessarily solve the problem, but at least it makes it easier.
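[Editor’s note: a toy sketch of the symmetry idea, under the assumption that “symmetry in choice” simply means the decline path costs the visitor no more effort or prominence than the accept path. The function and label names are invented.]

```typescript
// A consent banner where saying "no" is exactly one click, just like saying
// "yes": same step count, same styling, side by side. A dark-pattern variant
// would pair a one-click "Accept all" with a "Manage settings" link that
// opens a multi-step preferences flow.
function renderConsentBanner(onChoice: (optedIn: boolean) => void): void {
  const banner = document.createElement("div");
  banner.textContent = "May we use your data for targeted advertising?";

  const options = [
    ["Yes, allow", true],
    ["No, do not allow", false],
  ] as const;

  for (const [label, optedIn] of options) {
    const button = document.createElement("button");
    button.textContent = label; // equal prominence for both paths
    button.onclick = () => {
      onChoice(optedIn);
      banner.remove();
    };
    banner.append(button);
  }
  document.body.append(banner);
}

renderConsentBanner((optedIn) => console.log("visitor opted in:", optedIn));
```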

And so, that, combined with education, might start to create a culture where people are reflexively saying no. Now, Colorado and some other states have gone further by saying you’ve got to honor what they call a universal opt-out mechanism. And so, this is a more standardized approach that specifies in more concrete ways how you do it, and then allows you to opt out prospectively across sites.

So, it’s kind of the inverse of cross-context behavioral advertising, where the advertisers are collecting information across different platforms and using it. This allows a consumer to prospectively just send out a signal that says, hey, I’m an opt-outer. That strikes me as a much more effective tool.
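[Editor’s note: the widely cited example of such a universal opt-out signal is Global Privacy Control (GPC). A participating browser or extension sends a Sec-GPC: 1 header with every request, much as the older DNT: 1 header did; below is a hedged sketch of a site that chooses to honor it, with invented response text.]

```typescript
// Honoring a universal opt-out signal server-side: if the browser announces
// "Sec-GPC: 1", treat the visitor as having a standing "do not sell or
// share" request and skip the ad-tech tags. (Page scripts can check the
// same signal via navigator.globalPrivacyControl.)
import { createServer, type IncomingMessage } from "node:http";

function visitorHasOptedOut(req: IncomingMessage): boolean {
  // Node lowercases incoming header names.
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req, res) => {
  if (visitorHasOptedOut(req)) {
    res.end("page served without targeted-advertising trackers");
  } else {
    res.end("page served with the usual ad stack");
  }
});

server.listen(8080);
```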

JOSEPH TUROW:  But whatever happened to Do Not Track? Remember that? It still hasn’t become a signal that companies will accept, even if the browser offers it.

BRIAN RAY:  Right. And so, part of it is the intersection of these—I mean, I have four different privacy tools operating on my browser, and I have to at various points turn some or all of them off to get the functionality I need. And so, the idea is, of course, at the bottom, this is a technical problem that can be solved technically. And, of course, Apple’s done that to an extent, or at least mitigated it. And Google said it was going to. Google said, we’re going to turn off the main ad-tracking mechanism. And then they reversed course on that.

JOSEPH TUROW:  Yeah.

BRIAN RAY:  And this is one of the things that the FTC calls out is that self-regulation hasn’t worked. So, companies have found ways to either make it really hard to use these technical tools or to work around them.

JOSEPH TUROW:  What’s fascinating is, toward the bottom of the Zeta privacy policy, the statistics on the exercise of privacy rights during 2023: the number of requests for access, 3,322; the number of requests for opt-out, 141,481. Out of millions and millions of people. So, it’s kind of like a joke.
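[Editor’s note: for scale, set against the 240 million opted-in U.S. individuals Zeta claims elsewhere, those request counts work out to vanishingly small fractions:]

```latex
\frac{3{,}322}{240{,}000{,}000} \approx 0.0014\%\ \text{(access requests)}
\qquad
\frac{141{,}481}{240{,}000{,}000} \approx 0.059\%\ \text{(opt-out requests)}
```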

BRIAN RAY:  Yeah, I talked to in-house counsel who said, hey, they went to all these great lengths to set up California-specific mechanisms to implement those rights, and they rarely get used. So that—to their earlier point around giving consumers control—only goes so far. Because you have to be paying attention and you have to make the effort. And that’s what the FTC is calling out, right? They’re saying, “Hey, this is not working. We need a new approach that puts the onus on the advertisers.”

JOSEPH TUROW:  Can I say, we did a survey a couple of years ago, in 2022, looking at the whole issue of whether Americans can consent. One part of the survey was we asked people true/false questions, like 20-some-odd questions, around practices and policies that companies and marketers use. And we found that something like 70 percent of Americans failed the test, which meant they got at least half wrong. And I think only two people got all of the questions right. And these were not crazy sort of questions. These were basic questions about how the web works and how marketers do their thing.

The other thing we find consistently, in three national surveys we’ve done, is that Americans are basically resigned to all of this. We give them two statements, rotated randomly among 17 other statements. One is something like, “I’d like to control the data that companies have about me.” And the other is, “I’ve come to the conclusion that I can’t really control the data that companies have about me,” something like that.

You have to agree with both of those to count as resigned. Because resignation means: I’d like something to happen, but I don’t think it will. Around 70 percent of Americans are resigned to their data being used by marketers. And if meaningful consent, taking the idea, for example, from the medical literature, means knowing what’s going on and having a sense of autonomy toward it, Americans have neither.

ETHAN EHRENHAFT:  Yeah, I think that’s a really powerful point to transition to some audience questions because I know we want to save a little time for that. But thank you both so much for your time and walking us through it. I know we could go much longer on any of these points. One initial question we got was about the legality of alternative IDs as an alternative to cookies. We didn’t talk about cookies so much, but was wondering if either of you had thoughts on that, on alternative IDs.

JOSEPH TUROW:  Well, cookies are, of course, historically important, but nowadays it’s probably better to talk about trackers. There are so many kinds of trackers, from pixel trackers to these new universal IDs that companies like The Trade Desk are putting together. I think there are always going to be ways to do this, to try to get around what browsers will do with cookies. Cookies are, by now, a kind of old-fashioned technology. They’re still used, of course.

By the way, I think we give first-party cookies too much of a pass. It’s not clear to me that first-party cookies should be given the pass they often are. Apple’s framework is about tracking people across third-party apps, and there are so many concerns now about third parties. But I think first-party data are problematic as well, and I don’t think people realize how companies are using those data, even attempting to use first-party data as third-party and second-party data. So, I think we give first-party data too much credit in terms of people’s willingness to accept them.
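[Editor’s note: a small, hypothetical illustration of the point that “first-party” versus “third-party” describes context rather than the cookie itself. The domains are invented, and the check is deliberately simplified.]

```typescript
// The same tracker cookie counts as first-party or third-party depending on
// which site the visitor is on when the request fires.
function cookieParty(
  topLevelSite: string,
  requestHost: string
): "first-party" | "third-party" {
  // Simplified: real browsers compare registrable domains (eTLD+1).
  return requestHost === topLevelSite ? "first-party" : "third-party";
}

console.log(cookieParty("news.example", "news.example"));    // first-party
console.log(cookieParty("news.example", "tracker.example")); // third-party

// Browser rules like Apple's ITP restrict the second case, but nothing in
// the cookie itself restricts what a first party later does with the data,
// which is the sense in which first-party collection gets a pass.
```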

ETHAN EHRENHAFT:  Brian, did you have any other thoughts on that point?

BRIAN RAY:  Yeah, I mean, I’m not an expert on the technical aspects of alternative IDs. What I understand about them is that they are, to Joe’s point, attempts to use other mechanisms to create a similar, but generally less effective, tool that, without violating the specific requirements of the privacy laws, nonetheless still identifies on some level why a particular ad will be of interest to a particular person. And so, it’s creating a less direct connection, using data science in different ways, but without getting as close as we currently do with cookie data.

ETHAN EHRENHAFT:  We had one question that was kind of a deeper dive on the FTC. It says: building on Brian’s comment regarding the FTC’s dual authority in antitrust and consumer protection, the agency is uniquely positioned to assess both privacy and competition impacts within digital advertising. Do you sense that policymakers and regulators are hesitant to pursue an enforcement and legislative approach under which consent for third-party data use may require direct consumer brand recognition of the entity, as it may favor large platforms with established, ubiquitous first-party relationships, further entrenching their market dominance?

JOSEPH TUROW:  Yeah, that echoes a lot of the problems that people in Europe had when Google started saying, we’re not going to allow you to track the people that we see in the EU because it’s against EU rulings. And they’d say, “Hey, you’re just making it so that you guys see more than we do.”

I think the whole problem of antitrust is very interesting. I noticed in the Google antitrust case, in the opinion that the judge wrote, Judge Mehta, he spent maybe two paragraphs on privacy and data. It’s just not a big thing in that discussion. One could make a case—in fact, I think a good case—that the way these companies have maintained their power has been to collect data and keep data that other companies couldn’t get. But in the antitrust decision, privacy played, it seems to me, a relatively small part.

ETHAN EHRENHAFT:  We’ve been getting a number of questions about why focus on an adtech company like Zeta, as opposed to Google or one of these other players. Are there any actual violations that you think Zeta or Disqus may be committing?

BRIAN RAY:  I think Joe connected these dots, where the exact same practices that the FTC was calling out in this report that it issued are being done by adtech companies without the additional valence of the market authority and market dominance that these social media platforms have.

Now, Google plays—Google and Apple, because they’re the nexus of all this—they play very specific, very powerful roles in terms of enabling this to happen. They’ve been subjects of scrutiny. Now, Apple has positioned itself as privacy-forward, although it’s used privacy in many ways to resist or to engage in what arguably is anti-competitive behavior. They’ve basically said, “Hey, let us keep our app environment clean. Let us control it.” Both Google and Apple, by the way, exerted extraordinary authority over digital contact tracing during the pandemic. They basically said to sovereign governments, you can only use Bluetooth. It’s an interesting conundrum.

ETHAN EHRENHAFT:  Joe, this question of why Zeta?

JOSEPH TUROW:  It’s a pet peeve of mine, actually. I mean, clearly, Google, Facebook, Apple, Amazon are the biggies. And they take data, and everybody knows this. And as Brian was saying, they’re paradigmatic in the ways they handle their data. But we can talk about two worlds that exist now.

On the one hand, you have these basically walled gardens, these companies that know everything about us and keep those data. They share rather little of it with outsiders. That in itself is driving a lot of marketers crazy. And so, what marketers are trying to do is enhance the non-walled-garden ways of getting data. Because this way they can track particular individuals, often with their names, and often with a sense of the end result that Google and Facebook may not give them exactly.

So, companies like The Trade Desk that I mentioned, and some others, have built these new trackers. But there are also, as a consequence of this, smaller websites that take enormous amounts of data, partly because they think they can get away with it, because nobody looks at them.

So, for example, I got a Harris Poll request not long ago that said, fill out this survey. It didn’t tell me exactly what the survey was for. Before I did it, I went to the privacy policy. Their policy is almost like Zeta’s. They say they take your data for virtually everything, including watching your TV behavior. They track you all over the place. Why would I fill out a survey questionnaire for somebody that might do that?

So, the thing about it is, they’re not the only one. Go to Calendly. Go to a lot of the things we use every day. They do similar things. And not everybody is going to [have time to read and process them]. There’s this classic study by Aleecia McDonald and Lorrie Cranor from about 15 years ago. Even then, it took hundreds of hours a year to read all the web privacy policies that people go through. So, I’m totally sympathetic to the idea that we have to look at the smaller guys as well as the bigger ones and try to do something about both.

ETHAN EHRENHAFT:  Thanks, Joe. And this is an interesting question on U.K. policy. So, maybe Brian, if you might be able to speak to it. In the U.K., the model for publishers is now consent or pay. Is that what you would prefer for the U.S.?

BRIAN RAY:  I mean, this is one of the perennial defenses of the market model we have, where data enables free content. I don’t necessarily think that’s a better model, though I do see the clear consequences of it. Interestingly, we’ve seen, certainly in the media market, an evolution where the companies that are surviving are on a pay model. No, I honestly think we can create better tools.

Now, those tools and everything else will constrict the economic benefit you can get from advertising. So, it might end up resulting in something like that model, but I still think you could have a mix. Although, I guess, that changes if everybody starts using a universal opt-out mechanism.

But that gets to that question about alternative IDs, and, relatedly, other ways that you can more indirectly, but nonetheless somewhat effectively, do this kind of targeting without getting as close as we do now to tracking someone in intensely personal ways.

JOSEPH TUROW:  But you can still use very personal—I mean, the way companies have figured this out through various kinds of matching and clean rooms and stuff like this. You can, without knowing a person’s identity, know an enormous amount about them, including very sensitive stuff.

I think the biggest problem with the GDPR and its related regulations is that opt-in still counts, and you’re always going to have these sorts of issues. I would bet—and this is a crazy suggestion, but I might as well say it—if we went back to contextual advertising, and everybody could do contextual advertising and no other advertising, I think the market for digital advertising would not implode. I think what we’re finding in today’s world is this escalatory dimension where companies feel they have to learn more and more about us because their competitors are learning more and more about us.

We’re moving into biometrics now on a variety of levels. What’s the next set of intrusions going to be? It’s what are our grandkids going to see, you know? So, I think these are issues that we really haven’t discussed as a society in any serious way.

BRIAN RAY:  Yeah, it’s a great point, Joe. Right? And I wonder how much more effective it really is to target individuals in this hyper-precise way. Would it ultimately affect sales?

JOSEPH TUROW:  Yeah, I don’t know. I think you could do a lot with contextual advertising and artificial intelligence. And I do think we have to think about what it means to live in a society where we’re constantly splitting people off from one another, discriminating among them, talking about high-value customers and low-value customers, and really creating tensions within people and within society, basically dividing it up based on these increasingly problematic technologies.

BRIAN RAY:  And there’s the parallel issue, and it’s related, right, of using algorithms to keep people on a particular platform. And so, you combine that with targeted advertising, and you’re getting this sort of hyper-specialized content feed.

JOSEPH TUROW:  Right.

BRIAN RAY:  And what most innovations do is give something new to somebody who wasn’t expecting it. And all of a sudden, that’s the thing, right? So yeah, I do wonder if it’s a kind of—well, it certainly has a self-reinforcing aspect that may or may not be tied to the ultimate objective from an advertiser’s perspective. And certainly, from a social perspective, there are indications we don’t want to live like this. Although, we do. Because we eat it up and we don’t object to it.

ETHAN EHRENHAFT:  And in the last few minutes, I think it might be good to just talk a little more about AI as you both have mentioned and machine learning when applied to these millions upon millions of data points. Joe, I know you mentioned Zeta makes reference to the use of AI and algorithms in its own policy. But just what can be some of these ripple effects, in the last few minutes we have, of when you take this already massive cross-contextual advertising and behavioral advertising system and then apply algorithms to it?

JOSEPH TUROW:  Well, if I may start, it does extend this division that I was talking about. If we want to talk about a harm that we rarely discuss, it’s the idea of dividing people off from one another. But what Zeta is implying in their activity is a combination of predictive analytics AI and generative AI. And when you bring the two together—that is, predict what a person is going to like based upon the data you have and then create a commercial message through artificial intelligence that will be suited in dynamic ways just to them—it’s a whole new ball game. I’m not saying it’s going to [make it so all] people will directly respond. But it is an environment where people are getting reflections of themselves that other people have made, and in fact, that computers have made. So, essentially people are seeing their depictions and their status being created through a kind of computer prism.

BRIAN RAY:  Right. And yeah, to Joe’s point, we’re all driven by multifaceted things. And one of the real scary potentials of AI is to be very, very powerfully accurate in terms of what motivates us. And so, you go back to the old days when we were worried about subliminal messages, right? Well, this might be the reality of something like subliminal messaging. And Cambridge Analytica claimed it could do this. It turns out it probably really couldn’t do it in the ways it claimed. But it was projecting forward what we seem to be hurtling towards.

JOSEPH TUROW:  Even if it doesn’t [always persuade], it makes you think that companies are seeing you in a certain way. And after that, you may say to yourself, well, am I better off in the marketplace than Brian? Is he getting different ads than I am? Is he getting better discounts than I am? Is it going to affect the way I get a loan? All of these things begin to ferment in people’s heads when they’re constantly surrounded, almost as in a social credit system, by this artificially created environment.

BRIAN RAY:  Well, and that’s a real risk that the FTC and others are paying attention to: discriminatory advertising. That really is a real problem. And to your point, right, it’s de facto. It’s the capitalist version of a social credit score. When you live in a capitalist world, that’s going to constrict most of your life, even if the government doesn’t use it directly.

JOSEPH TUROW:  Right.

ETHAN EHRENHAFT:  Well, with that, that might be a great place to end on.

JOSEPH TUROW:  Scary thought.

ETHAN EHRENHAFT:  A little depressing one. But thank you both so much again for joining us today and everybody who tuned in, really appreciate both of your time again.

JOSEPH TUROW:  Thank you for asking me. Nice meeting you, Brian.

BRIAN RAY:  Yeah, very nice meeting you, Joe. That was fun. Thanks, Ethan.