The Canadian Security Intelligence Service (CSIS), Canada’s primary national intelligence agency, has expressed growing concern about the use of artificial intelligence (AI) deepfakes in online disinformation campaigns.
These deepfakes, which are becoming increasingly realistic, pose a significant threat to Canadians, as they are often difficult to recognize or detect.
CSIS has highlighted instances where deepfakes have been utilized to harm individuals, emphasizing the potential risks associated with this technology.
In its report, CSIS warns that deepfakes and other advanced AI technologies have the potential to undermine democracy, as certain actors may exploit uncertainty or propagate false information based on synthetic or falsified content.
This threat is exacerbated when governments are unable to prove the authenticity of their official content.
CSIS also referenced Cointelegraph’s coverage of deepfakes targeting crypto investors, particularly those featuring Elon Musk.
Since 2022, malicious actors have been using sophisticated deepfake videos to deceive unsuspecting crypto investors into parting with their funds.
Elon Musk himself issued a warning about deepfakes after a fabricated video of him promoting a cryptocurrency platform that promised unrealistic returns circulated on X (formerly Twitter).
In addition to the threat of deepfakes, CSIS has identified other concerns related to AI, including privacy violations, social manipulation, and bias.
The agency recommends that governmental policies, directives, and initiatives evolve in response to the increasing realism of deepfakes and synthetic media.
CSIS emphasizes the need for governments to act swiftly, as delaying interventions may render them irrelevant.
CSIS proposes collaboration among partner governments, allies, and industry experts to address how legitimate information is distributed globally.
Canada has taken steps to involve allied nations in addressing AI concerns, as evidenced by the Group of Seven (G7) industrial countries’ agreement on an AI code of conduct for developers on October 30, 2023.
This code comprises 11 points aimed at promoting safe, secure, and trustworthy AI worldwide while addressing and mitigating the associated risks.
In conclusion, the Canadian Security Intelligence Service is deeply concerned about the use of deepfake technology in disinformation campaigns and its potential impact on democracy and individuals.
CSIS calls for proactive measures, international collaboration, and the development of policies to counter the growing threat posed by AI deepfakes and synthetic media.