Ongoing social media litigation
by Adam Grossman & Stephen Jones

If you’ve heard our Nine Newly Known Unknowns briefing, you know that for several years before Frances Haugen leaked the Facebook Papers and testified before Congress, Praedicat was monitoring the rapidly developing science around excessive social media use, which has since been recognized as an addiction.  Praedicat characterizes and models the risk as a business practice, addictive software design, and as the scientific evidence mounted linking problematic computer, phone, and social media use to mental health harms, especially in adolescents, the potential for litigation over addictive software design became clear.  It also became clear that plaintiffs would attempt to hold the platforms liable for designing the algorithms that make them so addictive.  Although the addictive software/social media litigation is still in its early stages, we’ve been watching two cases that are on everyone’s radar, Gonzalez v. Google and Twitter v. Taamneh, for signals of how courts might interpret the law in this context.  The U.S. Supreme Court decided both cases on May 18, and they are interesting and significant as they relate to the interpretation of “Section 230” and to the growing effort to assign liability to algorithms and their designers.

Though the subject matter is not social media addiction (these cases deal with social media platforms’ liability for aiding and abetting terrorism), the defendants in the addictive software MDL were on the record hoping these decisions would provide guidance, opting to delay briefing any Section 230 immunity arguments until after the Supreme Court spoke in Gonzalez.  For those who aren’t aware, Section 230 of Title 47 of the United States Code (https://www.law.cornell.edu/uscode/text/47/230), enacted as part of the Communications Decency Act of 1996, can immunize social media platforms sued over third-party content by providing that the platform may not be treated as the publisher or speaker of that content.  Arguments based on Section 230 were briefed in the Gonzalez case.

However, the Supreme Court declined to say anything substantive about Section 230 in its decisions.  Instead, it sidestepped that analysis and focused on whether “aiding and abetting” includes algorithmically recommending and/or failing to remove content that valorizes terrorism.  The Court held that, as a matter of law, it does not.

Now, there’s a deep difference between the aiding-and-abetting analysis in Twitter/Gonzalez and the Section 230 liability analysis we’ll see in the addictive software MDL.  Nonetheless, the Court’s discussion in Twitter offers clues about which facts courts might find persuasive, and therefore which facts addictive software defendants will likely emphasize.

In Twitter, the Court emphasized that the terrorist group “was able to upload content to the platforms, just like everyone else.”  And the content from the terrorist group was presented to “users most likely to be interested in that content — again, just like any other content” based on YouTube’s recommendation algorithms.  In other words, the Court found no aiding and abetting in part because YouTube treated the third-party content that caused harm the same as any other algorithmically recommended content on the platform.  That the terrorism content wasn’t specifically singled out for recommendation weighed heavily in the Court’s decision. 

This tracks with some of the arguments we’re seeing from addictive software MDL defendants in their filings.  The MDL defendants argue that recommending content is not “affirmative conduct or misfeasance” that creates the risk of harm.  And they cite caselaw for the principle that merely providing these social media services “does not encourage specific third-party actors” to cause harm (our emphasis added). 

The addictive software defendants’ arguments go even further, though.  First, they argue that a social media platform isn’t a “product” for the purposes of product liability, and they cite several cases that would appear to back that up.  Regardless of whether that argument succeeds, the defendants go on to claim that Section 230’s protections cover not only the third-party content itself but also the algorithms that help people find the content they’re interested in.  In essence, they’re claiming that users of their platforms have no recourse for any ill effects arising from using the platform as intended.

The plaintiffs, of course, beg to differ.  In an obvious attempt to avoid Section 230, their initial complaints make clear that they expressly disavow any claim related to the content hosted by these platforms.  Instead, they focus on the addictive nature of the platforms, which the complaints allege were intentionally designed to be that way.  They have ample evidence from the Facebook Papers that social media platforms are designed to maximize users’ time on the platform so that the companies can maximize advertising revenue.  The scientific evidence also confirms that people, especially young people, can become addicted to social media and that the addiction can have devastating mental health consequences.

At Praedicat, we’re obviously good at predicting future litigation, but we don’t want to hazard a guess as to which of these arguments a court will find most persuasive.  While the analysis in Twitter/Gonzalez is not directly applicable to the social media litigation, we wouldn’t be surprised to see the decisions cited for the general principle that recommendation algorithms shouldn’t lead to liability.  And if courts reach the merits of whether Section 230 provides immunity and decide in plaintiffs’ favor, we know for certain the defendants will still invoke any precedent that might save them from paying many millions of dollars to the plaintiffs.

Contact a member of our team to talk about our emerging interest risks

Learn about Praedicat’s Emerging Risk Lifecycle