The onslaught of high-quality, AI-generated political “deepfakes” has already begun ahead of the 2024 presidential election – and Big Tech companies are not prepared for the chaos, experts have told The Post.
The rise of generative AI platforms like ChatGPT and the image-focused Midjourney has made it easy to create fake or misleading posts, photos, and even videos – from fabricated recordings of politicians giving controversial speeches to fake images and videos of events that never happened.
Striking examples of AI-generated disinformation have already circulated online – including a deepfake video of President Biden verbally attacking transgender people, fake photos of former President Donald Trump resisting arrest, and viral photos of Pope Francis wearing a Balenciaga down jacket.
The result, according to experts, is uncharted territory for tech companies like Facebook, Twitter, Google-owned YouTube and TikTok, which could face an unprecedented surge of high-quality fake content from both American social media users and nefarious foreign actors.
So far, the companies have provided few details about their plans to protect users.
According to Bradley Tusk, political consultant and CEO of Tusk Venture Partners, the Silicon Valley giants are “not prepared” to fight election deepfakes because they have “no incentive” to tackle the issue.
“The truth is, the incentives are practically the opposite – if someone creates a deepfake of Trump or Biden that goes viral, it means more engagement and eyeballs on that social media platform,” Tusk told The Post.
“Platforms have been unable and unwilling to stop the spread of harmful human-generated content. This problem is getting exponentially worse with the proliferation of generative AI,” he added.
Candidates have also begun using generative AI. Last month, Trump shared a fake video featuring CNN anchor Anderson Cooper claiming that the former president had just finished “ripping” the network a “new” one.
GOP presidential candidate and Florida Governor Ron DeSantis’ campaign team shared an ad with manipulated photos of Trump hugging Dr. Anthony Fauci during the COVID-19 pandemic.
Misleading AI-generated posts from political campaigns are only part of the problem.
A much bigger concern, according to many experts, is the likelihood that foreign adversaries and rogue actors will use generative AI to manipulate voters or otherwise undermine the fairness of U.S. elections.
In May, a possibly AI-generated photo of a fake explosion at the Pentagon went viral on Twitter – where it was shared by the Kremlin-backed news site RT – and sparked a brief sell-off in the stock market.
Rapid progress in generative AI means “the speed of misinformation could increase dramatically” compared with the last election, according to Center for AI Safety director Dan Hendrycks, whose nonprofit recently organized a letter comparing the threat of AI to nuclear weapons or pandemics.
“They were creating content without today’s AI systems,” said Hendrycks. “Imagine how much more efficient they will be when they have AI to help them create stories, rewrite them to make them more compelling, and tailor them to specific audiences.”
Some of the most prominent figures in the tech world, including Elon Musk and OpenAI CEO Sam Altman, have identified AI-generated misinformation as one of the most serious threats posed by the emerging technology.
In May, Altman told the Senate he was “nervous” about the possibility of AI disrupting elections and called it a “significant area of concern” that requires federal regulation.
Other experts, including “Godfather of AI” Geoffrey Hinton and Microsoft chief economist Michael Schwarz, have also publicly warned about bad actors using AI to manipulate voters during elections.
Asked for comment, a Google representative pointed to recent remarks by CEO Sundar Pichai, who touted the company’s investment in tools to detect and flag synthetic content.
Last month, the company said it would begin tagging AI-generated images with identifying metadata and watermarks.
YouTube’s content policy prohibits content that has been doctored to mislead users, and the platform removes offending posts using machine learning and human reviewers.
A TikTok spokesperson noted that the ByteDance-owned app introduced a synthetic media policy earlier this year, requiring users to clearly label any AI-generated or otherwise manipulated content that depicts a realistic scene.
“We’re strongly committed to developing guardrails for the safe and transparent use of AI, which is why we announced a new policy on synthetic media in March 2023,” a TikTok spokesperson said in a statement. “Like most of our industry, we continue to work with experts, monitor the progress of this technology and evolve our approach.”
A Snapchat representative said the company “continually evaluate[s] our policies to ensure our safety measures keep up with technology developments, including artificial intelligence.”
Representatives from other major technology platforms, including Twitter, Meta and Microsoft, did not respond to requests for comment.
According to Sheldon Jacobson, a computer science professor at the University of Illinois Urbana-Champaign, efforts to stop AI deepfakes could be interpreted as political bias against a particular party or candidate.
In addition, tech companies have “very little control” over the actions of foreign adversaries who choose to misuse the technology for nefarious purposes.
“We’re not China, where we’re trying to control things,” Jacobson said. “It is a free communication system – but there are risks, and disinformation will be transmitted. And now when you bring in generative AI, it’s a whole new level.”
With the election still over a year away, Jacobson said tech leaders at big companies are likely trying to develop a strategy to combat AI-generated deepfakes.
“I think they don’t say anything because they don’t know what they can do. That’s a problem,” he added.
According to Tusk, Big Tech firms will not take decisive action to stop misinformation flowing through AI-generated content unless lawmakers repeal Section 230 – a controversial provision that shields companies from liability for harmful content posted on their platforms.
In May, the Supreme Court decided to leave Section 230 intact in two cases that had been identified as the most significant challenges to the liability shield to date. Nevertheless, lawmakers on both sides continue to call for Section 230 to be amended or repealed.
“If the financial repercussions of doing nothing are big enough, the platforms will actually step in and help prevent harmful content that has a negative impact on our democracy,” Tusk said.