
Supreme Court Won’t Hold Tech Companies Liable for User Posts

The Supreme Court handed twin victories to technology platforms on Thursday by declining in two cases to hold them liable for content posted by their users.

In a case involving Google, the court for now rejected efforts to limit the sweep of Section 230 of the Communications Decency Act, the law that shields the platforms from liability for content posted by their users.

In a separate case involving Twitter, the court ruled unanimously that another law allowing suits for aiding terrorism did not apply to the ordinary activities of social media companies.

The rulings did not definitively resolve the question of what responsibility platforms bear for the content posted on and recommended by their sites, an issue that has grown increasingly pressing as social media has become ubiquitous in modern life. But the court's decision to pass, for now, on clarifying the breadth of Section 230, which dates to 1996, was cheered by the technology industry, which has long portrayed the law as integral to the development of the internet.

“Companies, scholars, content creators and civil society organizations who joined with us in this case will be reassured by this result,” Halimah DeLaine Prado, Google’s general counsel, said in a statement.

The Twitter case concerned Nawras Alassaf, who was killed in a terrorist attack at the Reina nightclub in Istanbul in 2017 for which the Islamic State claimed responsibility. His family sued Twitter, Google and Facebook, saying they had allowed ISIS to use their platforms to recruit and train terrorists.

Justice Clarence Thomas, writing for the court, said the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”

He wrote that the defendants transmitted staggering amounts of content. “It appears that for every minute of the day, approximately 500 hours of video are uploaded to YouTube, 510,000 comments are posted on Facebook, and 347,000 tweets are sent on Twitter,” Justice Thomas wrote.

And he acknowledged that the platforms use algorithms to steer users toward content that interests them.

“So, for example,” Justice Thomas wrote, “a person who watches cooking shows on YouTube is more likely to see cooking-based videos and advertisements for cookbooks, whereas someone who likes to watch professorial lectures might see collegiate debates and advertisements for TED Talks.

“But,” he added, “not all of the content on defendants’ platforms is so benign.” In particular, “ISIS uploaded videos that fund-raised for weapons of terror and that showed brutal executions of soldiers and civilians alike.”

The platforms’ failure to remove such content, Justice Thomas wrote, was not enough to establish liability for aiding and abetting, which he said required plausible allegations that they “gave such knowing and substantial assistance to ISIS that they culpably participated in the Reina attack.”

The plaintiffs had not cleared that bar, Justice Thomas wrote. “Plaintiffs’ claims fall far short of plausibly alleging that defendants aided and abetted the Reina attack,” he wrote.

The platforms’ algorithms did not change the analysis, he wrote.

“The algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content,” Justice Thomas wrote. “The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”

A contrary ruling, he added, would expose the platforms to potential liability for “each and every ISIS terrorist act committed anywhere in the world.”
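Justice Thomas's description of "agnostic" matching can be made concrete with a short sketch. The code below is purely illustrative: the names, data and similarity score are invented for this example and do not reflect the platforms' actual systems. It shows only the structure of a recommender that ranks items by predicted engagement, with no step that examines what any item depicts.

```python
# A minimal, hypothetical sketch of a content-agnostic recommender.
# All names and the engagement score are illustrative assumptions,
# not the actual systems used by YouTube, Facebook or Twitter.

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    features: list[float]  # learned embedding; opaque to the recommender


def engagement_score(user_vector: list[float], item: Item) -> float:
    """Inner product of user and item vectors: a stand-in for predicted engagement."""
    return sum(u * f for u, f in zip(user_vector, item.features))


def recommend(user_vector: list[float], catalog: list[Item], k: int = 3) -> list[str]:
    """Rank the catalog by predicted engagement and return the top k item ids.

    Note what is absent: no rule here asks what an item is about. Any item,
    whatever its subject, is surfaced to whichever users the score predicts
    are most likely to view it.
    """
    ranked = sorted(catalog, key=lambda item: engagement_score(user_vector, item), reverse=True)
    return [item.item_id for item in ranked[:k]]


if __name__ == "__main__":
    catalog = [
        Item("cooking-101", [0.9, 0.1]),
        Item("lecture-series", [0.2, 0.8]),
        Item("debate-highlights", [0.3, 0.7]),
    ]
    # A user whose viewing history skews toward the first feature dimension.
    print(recommend([1.0, 0.0], catalog))
    # -> ['cooking-101', 'debate-highlights', 'lecture-series']
```

On this toy model, the legal arguments on both sides map onto the same fact: the ranking function treats a cooking video and an extremist video identically, which the court read as "passive assistance" rather than culpable participation, and which critics read as the platforms actively steering harmful content to receptive audiences.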

The court’s decision in the case, Twitter v. Taamneh, No. 21-1496, allowed the justices to avoid ruling on the scope of Section 230, a law intended to nurture what was then a nascent creation called the internet.

Section 230 was a reaction to a decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation. The provision said, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230 helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability with every new tweet, status update and comment. Limiting the sweep of the law could expose the platforms to lawsuits claiming they had steered people to posts and videos that promoted extremism, urged violence, harmed reputations and caused emotional distress.

The case against Google was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during terrorist attacks there in November 2015, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers.

In a brief, unsigned opinion in the case, Gonzalez v. Google, No. 21-1333, the court said it would not “address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” The court instead returned the case to the appeals court “in light of our decision in Twitter.”

It is unclear what the ruling will mean for legislative efforts to eliminate or modify the legal shield.

A bipartisan group of lawmakers, academics and activists has grown skeptical of Section 230, saying it has shielded giant tech companies from the consequences of disinformation, discrimination and violent content on their platforms.

In recent years, they have advanced a new argument: that the platforms forfeit their protections when their algorithms recommend content, target ads or introduce new connections to their users. These recommendation engines are pervasive, powering features like YouTube’s autoplay function and Instagram’s suggestions of accounts to follow. Judges have mostly rejected this reasoning.

Members of Congress have also called for changes to the law. But political realities have largely stopped those proposals from gaining traction. Republicans, angered by tech companies that remove posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more, like false information about Covid-19.

Critics of Section 230 had mixed responses to the court’s decision, or lack of one, in the Gonzalez case.

Senator Marsha Blackburn, a Tennessee Republican who has criticized major tech platforms, said on Twitter that Congress needed to step in to reform the law because the companies “turn a blind eye” to harmful activities online.

Hany Farid, a computer science professor at the University of California, Berkeley, who signed a brief supporting the Gonzalez family’s case, said that he was heartened that the court had not offered a full-throated defense of the Section 230 liability shield.

He added that he thought “the door is still open for a better case with better facts” to challenge the tech platforms’ immunity.

Tech companies and their allies have warned that any alterations to Section 230 would cause the online platforms to take down far more content to avoid any potential legal liability.

Jess Miers, legal advocacy counsel for Chamber of Progress, a lobbying group that represents tech firms including Google and Meta, the parent company of Facebook and Instagram, said in a statement that arguments in the case made clear that “changing Section 230’s interpretation would create more issues than it would solve.”

David McCabe contributed reporting.


