A federal judge on Tuesday rejected efforts by major social media companies to dismiss nationwide litigation accusing them of illegally enticing and then addicting millions of children to their platforms, damaging their mental health.
US District Judge Yvonne Gonzalez Rogers in Oakland, Calif., ruled against Alphabet, which operates Google and YouTube; Meta Platforms, which operates Facebook and Instagram; ByteDance, which operates TikTok; and Snap, which operates Snapchat.
The decision covers hundreds of lawsuits filed on behalf of individual children who allegedly suffered negative physical, mental and emotional health effects from social media use, including anxiety, depression and, in some cases, suicide.
The litigation seeks, among other remedies, damages and a halt to the defendants’ alleged wrongful practices.
More than 140 school districts have filed similar lawsuits against the industry, and 42 states plus the District of Columbia last month sued Meta over youth addiction to its social media platforms.
![Facebook and Google apps](https://nypost.com/wp-content/uploads/sites/2/2023/11/NYPICHPDPICT000018408355.jpg?w=1024)
The companies didn’t immediately respond to requests for comment.
The plaintiffs’ lead lawyers – Lexi Hazam, Previn Warren and Chris Seeger – in a joint statement called the ruling “a significant victory for the families that have been harmed by the dangers of social media.”
In her 52-page ruling, Rogers rejected arguments that the companies were immune from being sued under the Constitution’s First Amendment and a provision of the federal Communications Decency Act that shields internet companies from lawsuits over third-party content.
The companies argued that the provision, Section 230, provides immunity from liability for anything users post on their platforms and requires the dismissal of all claims.
But Rogers said the plaintiffs’ claims went beyond merely targeting third-party content, and the defendants did not address why they should not be held liable for providing defective parental controls.
She cited, for example, allegations that the companies could have used age-verification tools to warn parents when their children were online.
“Accordingly, they pose a plausible theory under which failure to validly verify user age harms users that is distinct from harm caused by consumption of third-party content on defendants’ platforms,” Rogers wrote.
The judge, though, dismissed some claims that the defendants’ platforms were defectively designed.