Days after the Israel-Hamas war erupted last weekend, social media platforms like Meta, TikTok and X (formerly Twitter) received a stark warning from a top European regulator to remain vigilant about disinformation and violent posts related to the conflict.
The messages, from European Commissioner for the internal market Thierry Breton, included a warning about how failure to comply with the region’s rules on illegal online posts under the Digital Services Act could affect their businesses.
“I remind you that following the opening of a possible investigation and a finding of non-compliance, penalties could be imposed,” Breton wrote to X owner Elon Musk, for instance.
The warning goes beyond the sort that would likely be possible in the U.S., where the First Amendment protects many kinds of abhorrent speech and bars the federal government from stifling it. In fact, the U.S. government’s efforts to get platforms to moderate misinformation about elections and Covid-19 are the subject of an ongoing legal battle brought by Republican state attorneys general.
In that case, the AGs argued that the Biden administration was overly coercive in its suggestions to social media firms that they remove such posts. An appeals court ruled last month that the White House, the Surgeon General’s office and the Federal Bureau of Investigation likely violated the First Amendment by coercing content moderation. The Biden administration is now waiting for the Supreme Court to weigh in on whether the lower court’s restrictions on its contact with online platforms will go into effect.
Based on that case, Electronic Frontier Foundation Civil Liberties Director David Greene said, “I do not think the U.S. government could constitutionally send a letter like that,” referring to Breton’s messages.
The U.S. doesn’t have a legal definition of hate speech or disinformation because they are not punishable under the Constitution, said Kevin Goldberg, First Amendment specialist at the Freedom Forum.
“What we do have are very narrow exemptions from the First Amendment for things that may involve what people view as hate speech or misinformation,” Goldberg said. For instance, some statements one might consider to be hate speech could fall under a First Amendment exemption for “incitement to imminent lawless action,” Goldberg said. And some types of misinformation can be punished when they violate laws about fraud or defamation.
But the First Amendment means some of the provisions of the Digital Services Act likely would not be viable in the U.S.
In the U.S., “we won’t have government officials leaning on social media platforms and telling them, ‘You really should be looking at this more closely. You really should be taking action in this area,’ like the EU regulators are doing right now on this Israel-Hamas conflict,” Goldberg said. “Because too much coercion is itself a form of regulation, even if they don’t specifically say, ‘we’ll punish you.'”
Christoph Schmon, international policy director at EFF, said he sees Breton’s calls as “a warning signal for platforms that the European Commission is looking quite closely at what is going on.”
Under the DSA, large online platforms must have robust procedures for removing hate speech and disinformation, though those procedures need to be balanced against free expression concerns. Companies that fail to comply with the rules can be fined as much as 6% of their global annual revenue.
In the U.S., a threat of a penalty by the federal government could be dangerous.
“Governments must be mindful when they make a request to be very explicit that it is only a request, and that there is not some sort of threat of enforcement action or a penalty behind it,” Greene said.
A series of letters from New York Attorney General Letitia James to several social media sites on Thursday exemplifies how U.S. officials may attempt to walk that line.
James asked Google, Meta, X, TikTok, Reddit and Rumble for information on how they’re identifying and removing calls for violence and terrorist acts. James pointed to “reports of growing antisemitism and Islamophobia” following “the horrific terrorist attacks in Israel.”
But notably, unlike the letters from Breton, they don’t threaten penalties for a failure to remove such posts.
It isn’t yet clear exactly how the new rules and warnings from Europe will affect how tech platforms approach content moderation, both within the region and worldwide.
Goldberg noted that social media firms have already dealt with restrictions on the kinds of speech they can host in various countries, so it’s possible they’ll choose to confine any new policies to Europe. Still, the tech industry has so far applied policies like the EU’s General Data Protection Regulation (GDPR) more broadly.
It’s understandable if individual users want to change their settings to exclude certain kinds of posts they’d rather not be exposed to, Goldberg said. But, he added, that should be up to each individual user.
With a history as complicated as that of the Middle East, Goldberg said, people “need to have access to as much content as they want and need to figure it out for themselves, not the content that the government thinks is appropriate for them to know and not know.”
WATCH: EU’s Digital Services Act will present the largest threat to Twitter, think tank says