Texas Firm and Individual Identified as Source of Biden Deepfake Robocalls in New Hampshire

The automated messages, created with an artificial intelligence (AI) deepfake tool, were intended to interfere with the 2024 presidential election.

The New Hampshire Department of Justice has identified Life Corporation and an individual named Walter Monk as the originators of the robocall that circulated in the state, in which what appeared to be the voice of United States President Joe Biden urged citizens not to vote in the Jan. 23 primary.

Attorney General John Formella said the calls had been traced to Life Corporation, a Texas-based firm, and to Monk.


The state attorney general’s office swiftly labeled these robocalls as misinformation and advised voters in New Hampshire to disregard the message.

AI deepfake tools utilize advanced algorithms to fabricate convincing digital content, such as videos, audio recordings, or images, with the intention to deceive.
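To illustrate the idea in the broadest terms, the Python sketch below shows the two-stage shape most voice-cloning deepfake tools share; the function names and data structures are hypothetical placeholders, not any real library's API.

```python
# Conceptual sketch only. extract_voice_profile and synthesize_speech are
# hypothetical placeholders standing in for the two stages most voice-cloning
# deepfake tools share: (1) learn a compact "voice profile" from reference
# recordings of the target speaker, (2) render arbitrary text in that voice.

from dataclasses import dataclass, field

@dataclass
class VoiceProfile:
    """Compact representation of a target speaker's vocal characteristics."""
    speaker_id: str
    embedding: list = field(default_factory=list)  # learned from reference audio

def extract_voice_profile(reference_audio_path: str) -> VoiceProfile:
    # Hypothetical stage 1: a real tool would run a speaker-encoder model
    # over the reference recording and return its learned embedding.
    return VoiceProfile(speaker_id=reference_audio_path, embedding=[0.0] * 256)

def synthesize_speech(profile: VoiceProfile, text: str) -> bytes:
    # Hypothetical stage 2: a real tool would run a text-to-speech model
    # conditioned on the speaker embedding and return waveform audio bytes.
    return b""

profile = extract_voice_profile("reference_clip.wav")
fabricated_audio = synthesize_speech(profile, "Sample text to be spoken.")
```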

The Election Law Unit, in collaboration with state and federal partners like the Anti-Robocall Multistate Litigation Task Force and the Federal Communications Commission Enforcement Bureau, launched an investigation upon identifying voter suppression calls in mid-January.


Following the investigation, the Election Law Unit issued a cease-and-desist order to Life Corporation for violating New Hampshire's statute against voter bribery, intimidation, and suppression.

The unit demanded immediate compliance and reserved the right to take further enforcement action based on prior conduct.

Investigators from the Election Law Unit traced the calls to a Texas-based telecoms provider, Lingo Telecom.

Simultaneously, the Federal Communications Commission issued a cease-and-desist letter to Lingo Telecom for its alleged involvement in supporting illegal robocall traffic.

In response to these events, FCC Chairwoman Jessica Rosenworcel proposed treating calls that use AI-generated voices as illegal, subjecting them to the regulations and penalties outlined in the Telephone Consumer Protection Act.

The proliferation of deepfakes has heightened concerns about AI-generated content. Institutions such as the World Economic Forum and the Canadian Security Intelligence Service, Canada's primary national intelligence agency, have highlighted the risks of the technology and its potential to fuel disinformation campaigns across the internet.

