FCC Chair Proposes Making AI-Generated Voice Calls Illegal Under TCPA

On January 31, U.S. Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel proposed that calls featuring artificial intelligence (AI)-generated voices be deemed illegal.

Such calls would be subject to the regulations and penalties outlined in the Telephone Consumer Protection Act (TCPA).

This proposal comes in the wake of a recent incident where AI technology was used to create a false message, imitating the voice of U.S. President Joe Biden.

In this fabricated message, residents of New Hampshire were advised against participating in the state’s primary election.

The automated messages appeared intended to interfere with the 2024 presidential election, and the state’s attorney general’s office promptly denounced the calls as misinformation.

Rosenworcel’s proposal seeks to curb these robocalls by bringing them under the TCPA, a law enacted in 1991 to regulate automated political and marketing calls made without the recipient’s consent.

The primary objective of the TCPA is to shield consumers from unwanted and invasive communications, such as unsolicited telemarketing calls and automated messages.

The proliferation of such calls in recent years has raised concerns, as technology has evolved to the point where it can convincingly mimic the voices of celebrities, political figures, and even family members.

By adopting this proposal, the FCC intends to provide state attorneys general across the country with additional tools to pursue those responsible for malicious robocalls and enforce legal consequences.

Back in November 2023, the FCC initiated a Notice of Inquiry to gather information about addressing illegal robocalls and the potential role of AI in this issue.

The inquiry specifically sought input on how AI could be involved in scams and voice impersonations, and whether it should be subject to TCPA regulation.

Additionally, the FCC aimed to gain insights into the positive uses of AI, such as identifying and preventing illegal robocalls.

The White House also weighed in on AI-related matters, releasing a fact sheet on January 29 that outlined key actions taken in response to President Biden’s executive order on AI issued three months earlier.

The fact sheet highlighted “substantial progress” toward the president’s goal of protecting Americans from the potential risks posed by AI systems.

One pressing concern in the realm of AI-generated content is deepfakes, which have been on the rise.

The World Economic Forum, in its 19th Global Risks Report, drew attention to the adverse consequences of AI technologies, particularly deepfakes.

The Canadian Security Intelligence Service, Canada’s primary national intelligence agency, expressed concerns about disinformation campaigns conducted on the internet using AI deepfakes.

In response, U.S. lawmakers have called for legislation that would criminalize the production of deepfake images, spurred by incidents such as the widespread circulation of explicit AI-generated images of Taylor Swift.

This underscores the growing need to address the challenges posed by AI-generated content and the importance of robust regulation and enforcement.
