Brandon Rozek


PhD Student @ RPI studying Automated Reasoning in AI and Linux Enthusiast.

AI fearmongering for regulatory moats


On May 16th, 2023, Sam Altman testified before a Congressional subcommittee on the need for AI regulation. In his testimony, he proposed three actions:

  1. Form a new government agency charged with licensing large AI models, and allow it to revoke licenses from companies that don’t comply.
  2. Create a set of safety standards for AI models.
  3. Require audits by independent experts.

Okay, I can get behind the idea of “safety standards,” but what about this new government agency? Why would Sam Altman, the CEO of OpenAI, want the creation of a new government agency?

Some (including me) speculate that he’s trying to create a regulatory moat for OpenAI.

We can see where this goes. There are thousands of AI startups trying to sell models of various kinds; the idea is that they would all need to register with, and be regularly monitored by, a government agency?

This would probably lead to long wait times for these AI licenses, with a few companies holding the majority of them. Startups funded through Y Combinator rarely survive beyond two years, so many wouldn’t outlast the licensing queue.

It’s a great way of trying to stamp out competition, really. Currently it’s highly expensive to train one of these large language models (LLMs), but will that always be the case? In the inevitable future where other companies have the compute to train one of these models, OpenAI needs a way to keep its market share.

Now you might think I’m being too pessimistic. What if Sam Altman is genuinely acting in society’s best interest? If that were the case, then I don’t think he would have threatened to pull AI services from the EU due to their stringent regulations.

He’s walked back the comment since then, but let’s take a brief look at what the EU regulations look like.

Generative models are, at the time of writing, considered “high-risk” AI models under the EU’s proposed rules. The additional requirements the EU imposes on companies deploying them are:

  • adequate risk assessment and mitigation systems;
  • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • logging of activity to ensure traceability of results;
  • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • clear and adequate information to the user;
  • appropriate human oversight measures to minimise risk;
  • high level of robustness, security and accuracy.

I feel like most of those bullet points fall under Altman’s “safety standards for AI models.” What I think he really takes issue with is the data privacy requirements set forth by the EU. In fact, this is an often-criticized aspect of models trained on Internet data, which can include:

  • Personally identifiable information
  • Copyright information
  • Conspiracy/Misinformation
  • etc.

My personal opinion is that these models should not be trained on the above data, but this isn’t a new discussion. More recently, the issue of copyright has been raised with generative code products such as GitHub Copilot. I am interested to see how it makes its way through the US court system.

Another question I think we need to answer as a society is whether to hold companies accountable for defamation. There are already instances where ChatGPT incorrectly accused a law professor of sexual harassment. With the push to replace search with these automatically generated responses, we can see how misinformation may be propagated and then further reinforced by these models.

I don’t want to say that it’s a bad idea for Congress to regulate AI. I’m mostly suggesting that perhaps we shouldn’t call upon people who hold a direct financial interest to come up with the regulations that will apply to themselves. That might lead these actors to bring attention to certain issues while avoiding others…
