The FCC votes to outlaw scam robocalls that use AI-generated voices
On Thursday, the Federal Communications Commission (FCC) announced an immediate ban on scam robocalls that use AI-generated voices, a crackdown on "deepfake" technology that officials warn could undermine election integrity or fuel fraud.
The FCC's unanimous decision expands existing rules against unsolicited robocalls to cover AI-generated deepfake calls, classifying these voices as "artificial" under the Telephone Consumer Protection Act, the federal law governing telemarketing and robocalls.
The ruling gives state attorneys general additional legal tools to pursue the people behind illegal robocalls that use AI-generated voices to deceive Americans, the agency said.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel in a statement. “We’re putting the fraudsters behind these robocalls on notice.”
In its statement Thursday, the FCC said that callers "must secure prior express consent from the recipient before placing a call that utilizes artificial or prerecorded voice simulated or generated through AI technology."