U.S. Senators Seek FTC’s Response on AI Scams Targeting Older Americans


Four U.S. Senators have penned a letter to Federal Trade Commission (FTC) Chair Lina Khan, seeking information about the FTC’s initiatives to combat the use of artificial intelligence (AI) in scams targeting older Americans.

Senators Robert Casey, Richard Blumenthal, John Fetterman, and Kirsten Gillibrand emphasized the importance of addressing AI-enabled fraud effectively.

In their letter, the senators stressed that understanding the scope of the threat posed by AI-driven scams is essential to devising effective countermeasures.

They asked the FTC to share insights into its efforts to collect data on AI-related scams and to ensure such scams are accurately represented in the Consumer Sentinel Network (Sentinel) database.

Consumer Sentinel serves as the FTC’s investigative cyber tool, assisting federal, state, and local law enforcement agencies in combating various scams.

To gain a comprehensive understanding of the FTC’s approach, the senators posed four specific questions regarding AI scam data collection practices.

First, they inquired about the FTC’s capabilities in identifying AI-powered scams and appropriately tagging them in the Sentinel database.

They also sought clarification on whether the FTC could recognize generative AI scams that may go unnoticed by victims.


Furthermore, the lawmakers requested a detailed breakdown of Sentinel’s data to identify the popularity and success rates of various scam types.

Lastly, they inquired whether the FTC employs AI in processing the data collected by Sentinel.

Notably, Senator Casey, in addition to his role in this inquiry, serves as the chairman of the Senate Special Committee on Aging, which focuses on issues affecting older Americans.

In related news, on November 27, the United States, along with the United Kingdom, Australia, and 15 other nations, collectively released global guidelines aimed at safeguarding artificial intelligence (AI) models from tampering.

The guidelines underscore the importance of ensuring AI models are “secure by design.”

Key recommendations include closely monitoring the AI model’s infrastructure, both before and after release, and providing cybersecurity training to staff.

However, it is worth noting that these guidelines do not address potential controls related to image-generating models, deepfakes, data collection methods, or their use in training AI models.

As AI technology continues to evolve, policymakers and regulators are actively exploring ways to mitigate associated risks and protect vulnerable populations.

