FTC On AI And Consumer Vulnerabilities: ‘One Self-Regulatory Failure Is Begetting Another’
The US FTC’s Sam Levine, director of the Bureau of Consumer Protection, discusses the advent of AI and the compounding risks consumers face in today’s evolving, largely unregulated, profoundly uncertain digital age.
The US Federal Trade Commission vows to learn from historical mistakes and not leave consumers to fend for themselves as their personal and health data becomes increasingly commoditized with the dawning of artificial intelligence.
Sam Levine, director of the FTC’s Bureau of Consumer Protection, provided a progress report on the agency’s key priorities and “a warning on AI self-regulation” in his 19 September presentation at the National Advertising Division (NAD) 2023 conference.
“We are not going to sit back and … let those looking to monetize AI write their own rules. We will use every tool we have to protect the public. And when we find that our tools are falling short, we will be upfront with Congress about what we believe is needed,” he said.
Levine painted a troubling picture of where the public currently stands in the digital age.
“We now live in a world where companies can routinely surveil consumers around the most intimate details of our lives, from where we worship to whether we’re trying to conceive. We live in a world where scammers have harnessed the power of digital advertising so effectively that the FTC saw an 18-fold increase in fraud losses originating on social media from 2017 to 2021.”
He continued, “We live in a world where local media, once key guardians of our democracy and our civic cohesion, has been decimated, often replaced by clickbait, fine-tuned by digital surveillance, to manipulate rather than to inform,” and where “teens, especially teenage girls, are experiencing a mental health crisis that many attribute in part to social media.”
Levine harked back to the mid-2000s and the emergence of Web 2.0, wherein innovative companies such as Google and Facebook arose, along with new opportunities to hoard and monetize the personal data of users who believed they were engaging with free, frictionless services.
“These business models drove companies to develop endlessly invasive ways to track us,” Levine said. Consumer privacy and security have only deteriorated as tech giants have adapted strategies to neuter competition, resulting in a concentrated field of dominant companies with questionable commitments to market fairness and consumer interests.
“AI is already leading market participants to accelerate their data collection, with firm after firm changing their privacy policies to make it easier for them to collect even more data from us and use it in new ways,” Levine said, pointing to a May 2023 opinion in The New York Times by FTC chair Lina Khan.
Consumer health and personal-care companies routinely collect biometric data from those they sell to or engage with online, including physical, biological or behavioral traits, characteristics or measurements, in order to inform marketing, provide security to websites, or personalize the user experience.
On its website, the FTC underscores that consumers care about the privacy of their personal data and health-related information, and companies making privacy promises – expressly or by implication – are required to “live up to those claims.” It further notes, “[E]ven if you don’t make specific claims, you still have an obligation to maintain security that’s appropriate in light of the nature of the data you possess.”
In Levine’s view, Congress has failed to pass privacy legislation over more than two decades of consideration. While in 2000 the FTC voted 3-2 to recommend that Congress codify fair information principles, it has since vacillated on that position, ultimately relying on self-regulation. “We need to accept that self-regulation around digital privacy is not working, and I think we need to learn from these mistakes as we confront the next wave of emerging technology,” he said.
Self-regulation is effective, Levine said, when there are robust policy objectives, a dedicated independent institutional structure to develop and enforce rules, a clear legal framework, and an external enforcer – eg, the FTC – to police the beat as needed.
NAD is an example of industry self-regulation that works, he said. (Also see "Advertisers Using Browsing Data To Target Ads Warned To Be Sure Consumers Have Control" - HBW Insight, 22 Jun, 2022.)
“Unfortunately, some of the key ingredients to NAD success simply are missing when it comes to artificial intelligence,” he said. “When it comes to key policy questions surrounding AI, such as trade-offs around transparency, how to report vulnerabilities and limitations, and the rights of creators, there’s not even clear consensus on what our objectives as a society should be.”
The NAD provides a relatively low-cost forum for companies to bring complaints against their peers for alleged anti-competitive practices. The system has proven to support overall industry health, competitiveness, and the welfare of consumers. But “[i]n contrast to what we see among NAD participants – where firms often challenge their competitors when their advertising steps out of line – here, firms are racing in lockstep to supercharge their data collection,” Levine said.
“We’ve allowed companies to harvest consumer data without any real limitations, and now some of these same companies are creating opaque AI systems that capitalize on – and accelerate – their earlier hoarding. It seems, in other words, like one self-regulatory failure is begetting another,” he said.
Public opinion supports bolder action, in the FTC’s view. Two-thirds of US consumers think technology companies have too much power over the economy, and eight in 10 adults feel they have little or no control over how these companies use their personal information, according to Levine.
“In a sharply divided country, the fact that so many Americans are unhappy with how the digital economy is working is a damning indictment in our experiment in privacy self-regulation, and it should lead us to question whether anyone can honestly say it was wise to not pass comprehensive privacy legislation” years ago, he said.
As of 20 September, 13 US states have taken measures to plug perceived federal legislative gaps by passing consumer data privacy laws. Those states are California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Utah and Virginia, according to global law firm White & Case LLP.
Voice-Cloning, Other GenAI Deceptions
Generative AI is a category of AI in which machines generate new content rather than simply analyze or manipulate existing data, the FTC notes on its website. “By using models trained on vast amounts of data, generative AI can generate content – such as text, photos, audio or video – that is sometimes indistinguishable from content crafted directly by humans.”
The FTC has made enforcing against deceptive use of AI a “fourth prong” of its strategy laid out last year to tackle the biggest challenges facing consumers. “We have made clear that if you use or claim to use AI to defraud the public, or you help others do the same, you can be liable under the FTC Act, the Telemarketing Sales Rule [TSR], and other laws we enforce,” Levine said.
The TSR requires sellers and telemarketers to disclose all material restrictions, limitations or conditions to purchase, receive, or use goods or services offered. Failure to provide the required information in a “clear and conspicuous” manner before the consumer pays for the goods or services is a deceptive act or practice in violation of the TSR, subjecting the seller to civil penalties of $50,120 per violation.
Additionally, in September 2022, the FTC proposed the Trade Regulation Rule on Impersonation of Government and Businesses, which would allow it to seek civil penalties and money for harmed consumers from those who use voice cloning and other such technologies to defraud the public.
On 30 June 2023, the FTC proposed the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials, which cites the emergence of AI chatbots as one of many reasons it should move forward “expeditiously” to address fake reviews online and impose civil penalties on violators. (Also see "FTC Will Move Forward ‘Expeditiously’ With Proposed Rule Targeting Fake Reviews Online" - HBW Insight, 1 Jul, 2023.)