Antitrust Tech Tuesday: Dems Line Up to Support USTR; Even More Senators Support KOSA; FTC Protects Consumers from Scams, Proposes Additional Protections for AI-Generated Impersonations; DOJ Set to Investigate Sports Streaming Platform; Open Markets Institute Webinar on Journalism and Tech; FTC v. Amazon Trial Date

Published on Feb 20, 2024

Democratic representatives in support of USTR digital trade policy. A group of 87 House Democrats led by Representative Rosa DeLauro (D-CT) sent a letter to the Biden administration on February 13 announcing their support for U.S. Trade Representative Katherine Tai’s “worker-centered” approach to digital trade policy, which, they write, has allowed Congress to debate and enact domestic policy without the influence of trade negotiations.

“I am proud to lead so many of my colleagues in voicing our support for Ambassador Katherine Tai for holding firm in ensuring our trade policies – especially as it relates to the digital economy – put American workers first,” said DeLauro in a statement about the letter. “With Ambassador Tai, workers can breathe easy knowing that the U.S. top trade negotiator has their back.”

The letter specifically recognizes that Tai’s work and public statements on digital trade policy emphasize that the USTR will not encroach on Congress’s domestic policymaking, something the lawmakers say occurred under former President Donald Trump’s administration.

For example, the lawmakers cite the Big Tech-led provisions added to the U.S.-Mexico-Canada Agreement under the Trump administration’s watch, which, the letter says, “had not been in past U.S. trade agreements” and were designed to “limit the regulation of domestic online privacy and data security matters, gig worker protection, AI oversight, tech anti-monopoly, and other important policies.”

The lawmakers also applauded Tai’s work on privacy, including her withdrawal of Trump-era WTO proposals that would have granted data brokers and digital platforms “all but total control of Americans’ data,” and her efforts to limit the flow of U.S. data to countries like Russia and China.

A full list of signatories can be found here.

Congressional support for KOSA increases as privacy groups speak out against it. The Kids Online Safety Act, or KOSA, drew significant attention in the lead-up to the recent Senate Judiciary Committee hearing on child sexual exploitation, when Snap (SNAP) broke with its trade group, NetChoice, to support the bill, and again during the hearing, when Linda Yaccarino, CEO of X (formerly Twitter), publicly announced the company’s support for the bill.

Since the hearing, even more senators have voiced their support for the bill, and lead sponsors Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) have updated the bill with new text meant to protect vulnerable populations from being targeted.

The biggest change to the bill removes state attorneys general’s authority to sue entities that do not comply with the bill’s “duty of care” provision, which requires online platforms to take reasonable steps to protect minors from harms they could be exposed to on the platforms. Instead, the FTC will take a larger role in enforcing compliance.

Additionally, the updated bill defines a “design feature” that an online platform must limit for minors as any component of a platform that could increase kids’ activity on it, such as infinite scrolling, autoplay, and in-game purchases.

“This overwhelming bipartisan support for the Kids Online Safety Act—62 total co-sponsors, Democrats and Republicans—reflects the powerful voices of young people and parents who want Congress to act,” said Blumenthal and Blackburn in a statement about KOSA. “The recent watershed hearing with Big Tech CEOs showcased the urgent need for reform. With new changes to strengthen the bill and growing support, we should seize this moment to take action. We must listen to the kids, parents, experts, and advocates, and finally hold Big Tech accountable by passing the Kids Online Safety Act into law.”

Still, despite growing support for KOSA and the recent updates, some privacy advocates argue that the bill is nothing more than a censorship law.

“KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone,” the Electronic Frontier Foundation (EFF) shared in a statement about the updates to the bill.

The EFF argued that online platforms’ design features, like infinite scrolling or autoplay, are protected by the First Amendment.

“KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like,” the EFF continued. “But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.”

Other groups that have come out publicly against KOSA include the tech think tank TechFreedom and the ACLU, both of which have written letters to the Senate Commerce Committee arguing that the bill is unconstitutional.

With 62 senators now backing the bill, KOSA could easily pass the Senate, but even if that happens, it is unclear if the House would take up the bill.

Senate Judiciary, Commerce Committees call on Meta to answer questions asked at recent hearing on child sexual exploitation. Senators Ted Cruz (R-TX), ranking member of the Senate Commerce Committee, and Dick Durbin (D-IL), chair of the Senate Judiciary Committee, sent a letter last week to Meta (META) CEO Mark Zuckerberg asking him to answer questions about an Instagram product feature that displayed a warning to users searching for child sexual abuse material on the platform.

The “seemingly now-defunct” feature warned users of the content available on the platform, and then prompted the user to “Get resources” or “See results anyway.”

Cruz and Durbin asked Zuckerberg to provide the dates the warning feature was active, as well as how many times the “See results anyway” option was selected, in total and by minors. The senators also asked for a list of search terms that would prompt the feature to appear and for statistics detailing how often those terms were searched.

The warning screen came under fire in a Wall Street Journal article last year that first revealed the feature. At the time, Meta did not respond to the Journal or to Cruz’s subsequent requests for answers.

FTC finalizes Government and Business Impersonation rule, proposes additional measures to combat AI-generated impersonations. The FTC finalized the Government and Business Impersonation rule on Thursday, which will allow the agency to file cases directly in federal court against scammers who have made money off the impersonation of a business or government entity.

The last time the FTC finalized a new trade regulation rule prohibiting an unfair or deceptive practice was 1980, according to a statement from the FTC commissioners. Losses related to impersonation scams have remained high over the years, with consumers losing $2.7 billion to scams in 2023.

The rule will allow the FTC to seek monetary damages in federal court from scammers who use government seals or business logos, spoof government or business email addresses or webpages, or falsely imply government affiliation in communications with consumers.

The agency also proposed amendments to the rule that would declare it unlawful for a firm, such as an AI platform, to knowingly provide a product that can harm a consumer through impersonation. This supplemental notice of proposed rulemaking was issued in response to public comments received during the Government and Business Impersonation rulemaking. The FTC will open the proposed amendment to public comment for 60 days following its publication in the Federal Register.

“The rise of generative AI technologies risks making these problems worse by turbocharging scammers’ ability to defraud the public in new, more personalized ways,” the commissioners state.

The amendments, if adopted, could be a first step toward curbing the prevalence of deepfakes on the internet. The commissioners offer the example of a deepfake of an IRS official that could be used to deceive people about the status of their taxes; under the proposed amendment, liability for such a scam could fall on the AI developer.

“Ensuring that the upstream actors best positioned to halt unlawful use of their tools are not shielded from liability will help align responsibility with capability and control,” the commissioners state.

Concluding their statement on the rule, the commissioners recall the 2021 Supreme Court ruling that prevented them from using their Section 13(b) authority under the FTC Act to counteract consumer fraud by returning money to defrauded consumers. They view the rulemaking as a way to ensure consumers can be “made whole” following a scam, though they note that it is not a “substitute for a legislative fix.”

Open Markets Institute hosts “The Value of News” webinar. The Open Markets Institute will host a webinar on February 26 called “The Value of News,” bringing together industry experts and stakeholders to discuss the future of journalism amidst the changing tech landscape.

One of the main topics of discussion will be the influence and prevalence of AI in journalism, an issue brought to the forefront by The New York Times Company’s lawsuit against OpenAI and Microsoft over alleged AI copyright infringement.

Panelists will include Cristina Caffarra, antitrust expert and professor at University College London; Ermela Hoxha, associate director of Strategic & Platform Partnerships at The Guardian; Alexis Johann, Executive Behavioral Designer & Managing Partner at Fehr Advice; and Dr. Anya Schiffrin, Director of the Technology, Media, and Communications Specialization in the School of International and Public Affairs at Columbia University.

The panel will be moderated by Dr. Courtney Radsch, Director of the Center for Journalism & Liberty at Open Markets Institute.

Ongoing Investigations

DOJ set to investigate Disney-Fox-Warner sports streaming platform. The DOJ will likely investigate the planned sports streaming platform created through a joint venture among Walt Disney Company (DIS) subsidiary ESPN, FOX, and Warner Bros. Discovery (WBD) once it is finalized, according to a Bloomberg Law report from February 15. The report stated that the DOJ would be looking into the joint venture’s potential impact on consumers, sports leagues, and rival streaming platforms.

The streaming platform, which was announced two weeks ago, is scheduled to launch this fall as an entirely new service with its own app, separate from Disney’s Disney+ and Hulu platforms and Warner’s Max platform.

Fubo (FUBO), a cable television replacement that describes itself as “sports-first,” criticized the announcement and called for scrutiny.

“Every consumer in America should be concerned about the intent behind this joint venture and its impact on fair market competition,” a statement from the company read. “This joint venture spotlights a concerning trend where an alliance with significant market share, reportedly controlling 60-85% of all sports content, could dictate market terms in a manner that may not serve the broader interests of consumers.”

Ongoing Litigation

Trial date set in FTC v. Amazon. The FTC’s antitrust lawsuit against Amazon (AMZN) will go to trial in October 2026, according to a scheduling order filed last week.

The FTC filed the lawsuit against Amazon with the support of 17 state attorneys general in September 2023, alleging the online retailer is a monopolist that uses exclusionary conduct to push out competitors and then overcharges its customers while degrading the products and services it provides them.

The trial will be preceded by a series of quarterly status conferences, set to start this June.