In particular, the Supreme Court’s ruling raises questions about whether text and video content created by generative AI can be considered free speech, given that humans were involved in creating the algorithms that generate that content.
Two Supreme Court cases, Moody v. NetChoice and NetChoice v. Paxton, challenged state laws passed in Florida and Texas that barred social media platforms from censoring or moderating conservative content. The Supreme Court consolidated the two cases to determine whether Florida and Texas unconstitutionally interfered with social media companies’ ability to remove or moderate content they found offensive.
This case concerns a very specific type of expressive activity: content curation.
“From an AI perspective, the focus is primarily on recommendation systems and systems that automatically identify, remove or demote content for content moderation purposes,” said Tom McBrien, a staff attorney at the Electronic Privacy Information Center (EPIC), a nonprofit research organization aimed at protecting privacy rights.
The U.S. Court of Appeals for the 5th Circuit upheld the Texas law, allowing the state to regulate social media platforms, while the 11th Circuit struck down the Florida law as too restrictive of platforms’ editorial discretion. Ultimately, the Supreme Court held that neither lower court had sufficiently analyzed the relevant precedents and remanded both cases for reconsideration.
At first glance, neither case appears to involve artificial intelligence. However, the high court stressed that existing law should apply regardless of the technology at issue, and that social media platforms should be treated like other businesses that curate content, such as newspapers, because that curation is protected speech.
The ruling doesn’t give AI a free hand, but it does require lower courts to fully consider all potential applications of the state laws. In particular, the Florida law is likely to apply to certain AI platforms, said Daniel Barsky, a U.S. intellectual property lawyer.
“Can we consider generative AI outputs to be speech? The outputs need to be unique, but all generative AI outputs today are responses to prompts, so they’re not spontaneous,” Barsky said.
All of the First Amendment cases cited by the U.S. Supreme Court involved some form of human intervention, such as writing or speaking the content, making editorial decisions, or selecting content. AI platforms that operate with no human intervention at all are less likely to be protected by the First Amendment, which could affect whether states or the federal government can pass laws restricting certain AI outputs.
The ruling also raises questions about whether AI can commit defamation and, if so, who can be held liable. It likewise raises questions about whether governments can regulate social media when content is created and selected entirely by AI, without human intervention. If humans are involved in creating the large language models (LLMs) that underpin AI, can the resulting content be protected as free speech?
“It’s a very important issue, but one that no court has yet addressed, and it will likely be raised in the upcoming NetChoice litigation,” Barsky said. “It’s certainly an argument worth considering if you’re arguing any case involving AI and the First Amendment.”
If AI is viewed as nothing more than a computer algorithm, laws can be passed to limit or censor its output. But when humans are involved in developing those algorithms, things get complicated.
Barsky went so far as to describe the situation as “basically a huge, tangled mess.”
Even if such a case makes it all the way to the Supreme Court, EPIC’s McBrien said it’s unlikely the justices will issue a blanket rule that “generative AI output is protected speech,” or its opposite.
“It depends,” McBrien said. “In Moody/Paxton, NetChoice argued that newsfeed creation is always expression, but the court rejected that overly general and broad argument. The court remanded the cases to the lower courts to analyze the issues more closely, including exactly what allegedly expressive newsfeed-creation activity each law covers and whether that activity is truly expressive.”
But according to McBrien, the justices are inclined to think that using algorithms to carry out expressive acts, such as faithfully conveying a human message, is protected by the First Amendment.
In particular, a majority of the justices believed that content curators’ (that is, social media platforms’) enforcement of community guidelines, such as bans on harassment or Nazi-related content, is protected by the First Amendment. So if algorithms are used to enforce those guidelines, the majority indicated, that enforcement would also be protected, McBrien said.
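To make the "algorithm executes a human-written rule" scenario concrete, here is a deliberately simple, hypothetical sketch in Python. It is not drawn from the ruling, from NetChoice, or from any platform’s actual system; every rule name and pattern below is invented, and real moderation pipelines typically rely on far more sophisticated, often machine-learned classifiers.

```python
# Hypothetical sketch of rules-based community-guideline enforcement.
# All guideline categories and patterns are invented for illustration.
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationDecision:
    allowed: bool
    violated_guideline: Optional[str] = None


# Each entry pairs a community-guideline category (written by humans)
# with a crude text pattern the algorithm applies mechanically.
GUIDELINE_RULES = {
    "harassment": re.compile(r"\bnobody likes you\b", re.IGNORECASE),
    "hate_content": re.compile(r"\bnazi propaganda\b", re.IGNORECASE),
}


def moderate(post_text: str) -> ModerationDecision:
    """Check a post against each guideline rule; block on the first match."""
    for guideline, pattern in GUIDELINE_RULES.items():
        if pattern.search(post_text):
            return ModerationDecision(allowed=False, violated_guideline=guideline)
    return ModerationDecision(allowed=True)


if __name__ == "__main__":
    print(moderate("Great game last night!"))                 # allowed
    print(moderate("This account spreads nazi propaganda."))  # blocked: hate_content
```

The point of the sketch is only that a rule like “ban Nazi-related content” can be applied mechanically once a human has written it, which is the kind of algorithmic enforcement of human editorial judgment McBrien describes.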
McBrien noted that Justices Amy Coney Barrett and Samuel Alito questioned whether “black box algorithms” should receive the same level of protection, and said that question will be pivotal as the cases are reconsidered. “Justice Barrett’s vote was necessary to form the majority opinion, so going forward, she’s likely to be the swing vote,” McBrien said.
The Supreme Court also cited its 1990s decision in Turner Broadcasting System v. FCC, which held that cable television companies engage in First Amendment-protected speech when deciding which channels and content to carry on their networks.
“The majority opinion pointed to Turner Broadcasting, where the court ruled that while the regulation in question was restrictive of speech, it was constitutional because it was passed for competition reasons, not speech regulation reasons,” McBrien added. “One could imagine a similar situation in the realm of generative AI.”
Source: www.itworld.co.kr