What is the optimal relationship between innovation and regulation?

Whether society is ready or not, we're about to find out.

Right now, the U.S. Supreme Court is hearing two tech-focused cases – Gonzalez v. Google and Twitter v. Taamneh – that have the potential to influence how we experience artificial intelligence here in the States, with global ripple effects.

Gonzalez v. Google

On Feb. 21, the U.S. Supreme Court heard oral arguments in Gonzalez v. Google, a case that turns on the scope of Section 230 of the Communications Decency Act.

The key question: Are publishers like Google, Facebook, Craigslist, Reddit, and Instagram liable for the content recommendations their algorithms surface based on users' interests?

This is an important distinction for the Court to draw, because Section 230 currently shields platforms from liability for the user-generated content they display.

Let's use YouTube as an example. Right now, YouTube isn't liable for hosting harmful content uploaded by its users – who collectively upload more than 500 hours of new video every minute.

But what if YouTube were held liable when its algorithms boost and distribute harmful content? That alone would force enormous changes in how YouTube filters, moderates, and recommends content going forward.

And what if YouTube, Facebook, TikTok, Reddit, Wikipedia, Google, Bing, and every other tech company with an algorithmic feed were all liable for the impact of their recommendation engines?

A ruling here could set a fascinating precedent, opening the door to further regulations, restrictions, and protections across all areas of Internet law. A decision is expected by the end of June.

[MIT Technology Review, Washington Post, SCOTUSblog]

Twitter v. Taamneh

Twitter v. Taamneh is a related case; its oral arguments were heard the day after Gonzalez v. Google's.

The Supreme Court will need to decide whether social media companies can be sued for aiding and abetting terrorism when they host harmful videos and posts.

If they can, the door opens for everyday people to bring lawsuits against tech companies over the content those companies host and publish.

The Implications of Accountability for the Internet

1️⃣ These two cases could set a new precedent that websites have legal responsibility for the content they host and distribute.

What could this mean?

  • New lawsuits against Big Tech companies over decisions made by their algorithms
  • New approaches to moderation, content filtering, and recommendations – which may have unintended consequences such as censorship or the perpetuation of existing filter biases
  • New monetization models from Big Tech companies
  • New demands for transparency and clarification on how proprietary algorithms are created, trained, and maintained

2️⃣ Or the Supreme Court could rule that platforms are still protected under Section 230 and are not liable for the content they publish, distribute, or amplify.

What could this mean?

  • New legislation that specifically makes publishers liable for their algorithms
  • Public and legal initiatives to ban certain apps and publishers for moral or other reasons
  • State-specific legislation that mandates or prevents content restriction

The Implications of Accountability for AI

Do tech companies have a moral or ethical obligation to share how their algorithms are designed and optimized? What about tech companies building AI?

Last week, during the Gonzalez v. Google hearing, Supreme Court Justice Neil Gorsuch brought AI into his remarks:

"Artificial intelligence generates poetry. It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let's assume that's right. Then the question becomes, what do we do about recommendations?"

Writing in Lawfare, Matt Perault examines the legal ramifications for large language models (LLMs) like ChatGPT. It's a long read, but well worth it if you're interested in the legal side of AI.

Some thoughts I wrote down from his analysis:

  • If LLMs are held liable for the content they generate from users' prompts, their parent companies will be forced to restrict or limit access to them in order to stay compliant.
  • Depending on state legislation, we may see LLMs banned from certain states or jurisdictions – or AI systems that behave differently in California than in Ohio or Florida (a rough sketch of what that could look like follows this list).
  • The legal and regulatory environment may deter startups from launching in the space, leaving large-scale AI deployment to Big Tech.
  • What's the right protective framework for humans and technology companies large and small?
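To make that second bullet concrete, here is a minimal sketch of what jurisdiction-aware behavior might look like in practice. Every state rule, policy field, and function name below is hypothetical and invented purely for illustration; nothing here describes an actual law or any real product's implementation.

```python
# Hypothetical sketch: an AI service that applies different content policies
# depending on the user's jurisdiction. The state rules below are invented
# for illustration and do not reflect any actual legislation.

from dataclasses import dataclass

@dataclass
class JurisdictionPolicy:
    allow_recommendations: bool      # may the model proactively suggest content?
    require_content_warning: bool    # must responses carry an AI-generated label?

# Invented policy table keyed by U.S. state code.
POLICIES = {
    "CA": JurisdictionPolicy(allow_recommendations=False, require_content_warning=True),
    "OH": JurisdictionPolicy(allow_recommendations=True,  require_content_warning=False),
    "FL": JurisdictionPolicy(allow_recommendations=True,  require_content_warning=True),
}
DEFAULT_POLICY = JurisdictionPolicy(allow_recommendations=True, require_content_warning=False)

def respond(prompt: str, state_code: str, generate) -> str:
    """Wrap a model call (`generate`) with per-state policy checks."""
    policy = POLICIES.get(state_code, DEFAULT_POLICY)
    if "recommend" in prompt.lower() and not policy.allow_recommendations:
        return "Recommendations are unavailable in your region."
    reply = generate(prompt)
    if policy.require_content_warning:
        reply = "[AI-generated content] " + reply
    return reply

# Usage: respond("Recommend a video", "CA", my_model)  # blocked under this sketch
```

The point of the sketch isn't the specific rules – it's that a patchwork of state laws would push AI products toward exactly this kind of per-jurisdiction branching, with all the compliance overhead that implies.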

I'm hoping that as a nation, we're able to come up with an approach that protects our competitive advantage as innovators and pioneers, while also safeguarding the public during these early "Wild West" days.

To keep in touch with the human side of artificial intelligence development, be sure to join Creative Intelligence, our premium AI newsletter and community.
