
Bad Blood with AI: Deepfakes and Taylor Swift, Elections, and Financial Fraud

Published on May 03, 2024

On February 8, 2024, the Federal Communications Commission (FCC) unanimously adopted a Declaratory Ruling classifying robocalls that use AI-generated voice-cloning technology as “artificial” under the Telephone Consumer Protection Act. Previously, state attorneys general could prosecute only the fraud that resulted from the use of AI. The new Declaratory Ruling makes “the act of using AI to generate the voice in such robocalls itself illegal,” thus broadening enforcement authority. FCC Chairwoman Jessica Rosenworcel explained the reasoning behind the ruling, stating that “[b]ad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice.”

The ruling was prompted by an estimated 5,000 to 25,000 robocalls made to New Hampshire voters in the days before the state’s presidential primary, discouraging them from voting. An AI-generated voice mimicking President Biden told voters, “Your voice makes a difference in November, not this Tuesday.” After opening an investigation, the state attorney general’s office said it believed the calls originated from a Texas-based company. The perpetrator of the robocalls falsified the caller ID, deceiving voters into believing they had received legitimate calls “from the former New Hampshire chairwoman of the Democratic Party.”

There is a heightened sense of concern surrounding AI deepfakes and the upcoming 2024 elections. This concern is justifiable, considering that “[a]round 70 countries estimated to cover nearly half of the world’s population – roughly 4 billion people – are set to hold national elections this year.” Outside the U.S., AI has already been used to create misleading or inaccurate depictions of politicians and political issues in Argentina, Australia, Britain, and Canada. In Pakistan, former Prime Minister Imran Khan, whose party won the most seats, “used an A.I. voice to declare victory while in prison.”

The Senate responded to such concerns by introducing a bill in September 2023 titled the “Protect Elections from Deceptive AI Act,” which would prohibit the distribution of misleading AI-generated content depicting candidates for federal office, but the bill has not yet become law. Key developers of AI technology have also responded to such concerns. OpenAI, a leader in the AI industry, stated it was actively working to prevent the use of its tools in elections by prohibiting users from creating what appears “to be real people or institutions.” Google said it is working to prevent its AI chatbot, Bard, from responding to election-related prompts. Meta, the owner of Facebook and Instagram, committed to better labeling AI content in order to help voters decipher what information is real.

Anxieties over the dangers of AI deepfake capabilities were further heightened after alarming incidents involving American singer-songwriter Taylor Swift. At the end of January, AI-generated, sexually explicit images of Swift circulated across various social media platforms. On X, the social media platform formerly known as Twitter, one image “was viewed 47 million times before the account was suspended.” Swift was also the target of an AI-generated fake consumer ad for Le Creuset cookware. Swift’s fondness for the cookware brand is well known, but she was not involved in any endorsement of its products. The fake ad used AI to mimic Swift’s voice and prompted consumers to click a button, answer a few questions, and “pay a ‘small shipping fee of $9.96’ for the [Le Creuset] cookware.” Consumers deceived by the ad faced unforeseen monthly charges and never received the advertised cookware.

Taylor Swift is not the only celebrity to fall victim to AI deepfakes. Over the summer, country musician Luke Combs was featured in an AI-generated ad promoting weight-loss gummies that Lainey Wilson, another country musician, had purportedly recommended to him. In October, both Tom Hanks and Gayle King appeared in fake advertisements: Hanks in an ad promoting dental plans, and King in an ad for a weight-loss product.

The threat posed by AI deepfakes was further demonstrated in a disturbing instance of fraud committed against a finance worker at a multinational firm. The worker was deceived into paying a total of HK$200 million (about US$25 million) to fraudsters who used AI technology to stage a video conference call impersonating the firm’s chief financial officer. The call also included impersonations of several other staff members, whom the worker claimed to recognize as both looking and sounding like co-workers. The scam was discovered only when the worker checked in with the firm’s head office.

The harms resulting from the rise in AI deepfakes have prompted lawmakers to take action. Although there is not yet a comprehensive federal legislative framework, a plethora of bills have been introduced in Congress. As of 2024, “at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills” to address the risks and harms associated with AI.

Although there are significant concerns surrounding the prevalence of AI deepfakes, the rise of AI has also produced demonstrable benefits, including improved communication and the broader spread of information. A June 2023 report by McKinsey & Co. found that AI technology has the potential to deliver considerable positive impacts across various industry sectors. By increasing productivity, AI has the potential to add “trillions of dollars in value to the global economy.”

However, these benefits are not without costs, as the rise of AI deepfakes has affected individuals all over the world. As this technology rapidly develops, AI companies, alongside state and federal lawmakers, should keep these concerns in mind and work to develop ways to keep the technology in check.

Paxton Gentry is a second-year law student at Wake Forest University School of Law who recently joined the editorial staff of the Journal of Business and Intellectual Property Law. She graduated from Wake Forest University with a major in Business and Enterprise Management and a minor in writing. Upon graduation, Paxton plans to practice corporate transactional law.

Reach Paxton here:

LinkedIn: www.linkedin.com/in/paxton-gentry-01a6631a7

Email: [email protected]
