AI Governance Will Be Highly Ironic

London Lowmanstone
3 min read · Apr 5, 2023


Originally released on April 5th, 2023.

This is a snippet.

I’ve been facilitating a course on Artificial General Intelligence Safety and having conversations with people at the University of Minnesota’s Natural Language Processing Lab about how we ensure that AI has a highly positive impact on humanity.

A lot of these conversations have to do with the role of governments in ensuring that companies don’t build artificial intelligence (AI) systems that do bad things just because doing so makes them money.

However, my current take is that in order to do a good job of stopping people from building harmful AIs, governments will need to use the very technologies they’re trying to put limitations on.

For example, let’s say that we find out that there are huge bias issues with existing AIs, causing them to complete sentences about people of different races and genders extremely differently. (This is fairly realistic.) The process of trying to understand the current landscape of AI models that behave this way, scheduling meetings with relevant people, writing emails, generating drafts of potential laws, etc. will likely all be done more effectively with the use of the exact AI systems that government officials are working to regulate.
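To make that concrete, here’s a rough sketch of the kind of probe someone could run to see those differences: fill the same sentence template with different demographic terms and compare what a model writes next. The template, the group list, and the `complete` function are all illustrative stand-ins, not any particular system’s API.

```python
# Illustrative sketch only: fill one sentence template with different
# demographic terms and collect a completion for each, so the outputs can
# be compared side by side. `complete` stands in for whatever model is
# actually under review.
from typing import Callable, Dict

TEMPLATE = "The {group} applicant was described by the hiring manager as"
GROUPS = ["Black", "white", "Asian", "male", "female"]  # illustrative list

def probe(complete: Callable[[str], str]) -> Dict[str, str]:
    # One completion per group; systematic differences in tone or content
    # across groups are the signal a reviewer would be looking for.
    return {g: complete(TEMPLATE.format(group=g)) for g in GROUPS}

if __name__ == "__main__":
    # Stub model so the sketch runs on its own; a real audit would call
    # the actual system being evaluated here.
    stub = lambda prompt: prompt + " ..."
    for group, completion in probe(stub).items():
        print(f"{group}: {completion}")
```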

In short, AI systems built over the next few years will likely have issues that require large-scale government regulation, but those issues won’t seem that bad to individual users in comparison to the benefits. Since government officials are individual users, they’ll likely benefit from using the same technology they’re trying to regulate while determining how to regulate it.

Using the tools you’re trying to regulate looks bad from a PR standpoint, but I’d much rather the government use existing unregulated AIs well in order to make good and effective regulations than discard those tools just because of the irony and the bad PR.

Two recent events have made me believe this more strongly.

  1. We had a session on AI governance in the AI Alignment course I’m facilitating. The idea behind the session was to have each person pretend to be a stakeholder in a conversation about AI regulation, and to see how the conversation would play out in order to understand the dynamics better. However, we found it was much faster and much more interesting to simply explain the roles to ChatGPT-4, have it play through the conversation, and then critique its output as a group. This led me to believe that AI models could probably generate legislation and respond to critiques of that legislation much faster than our usual drafting processes.
  2. I saw the Japanese government compare the response of their Prime Minister, Fumio Kishida, to that of ChatGPT. ChatGPT was likely far faster in generating its response, though, as mentioned in the video, it was less specific. This leads me to believe that people in government will be able to complete some tasks far faster with the use of AI. However, the process of checking that the AI’s answer is correct may slow things down somewhat.

These two events have made me a bit more certain that good AI regulation will probably require using the exact same AIs that need regulation.

Update 1/28/2025: Oh, look what happened: OpenAI is offering an easy way for U.S. government officials to use ChatGPT.

Unfortunately, the announcement by OpenAI doesn’t seem to be accompanied by any study of how adding in this sort of technology could lead to systemic bias within an organization depending on the biases of the AI system.

It seems likely to me that if AI is embedded into the day-to-day operation of government, there may be long-term, predictable trends in that government’s outcomes that shift depending on how the AI systems it uses were trained and what biases those AIs have.

Similar to how a very small floating-point truncation error in the Vancouver Stock Exchange’s index calculation built up over time into a major discrepancy, I imagine that very small biases in the AI systems used across an entire government could give that government predictable blind spots and biases, because all of the AI systems it uses share those same blind spots and biases.
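Here’s a toy simulation of that kind of accumulation (the numbers are made up, not the exchange’s actual figures): truncating a value to three decimal places on every update quietly drags it downward, while ordinary rounding stays close to the full-precision result.

```python
# Toy simulation with made-up numbers: apply many small random changes to an
# index, truncating to three decimal places after each update, and compare
# with ordinary rounding and with the full-precision value.
import random

def truncate3(x: float) -> float:
    return int(x * 1000) / 1000  # drop everything past the third decimal

random.seed(0)
true_value = truncated = rounded = 1000.0

for _ in range(100_000):  # many tiny updates, as on a busy exchange
    change = random.uniform(-0.5, 0.5)
    true_value += change
    truncated = truncate3(truncated + change)  # loses ~0.0005 on average
    rounded = round(rounded + change, 3)       # error averages out

print(f"full precision: {true_value:,.3f}")
print(f"truncated:      {truncated:,.3f}")  # drifts noticeably lower
print(f"rounded:        {rounded:,.3f}")    # stays close to full precision
```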

If the U.S. government is smart about this, then rather than optimizing purely for efficiency, it will use AI systems from multiple different companies, so that if a flaw is discovered in one provider or line of models, the entire U.S. government isn’t impacted by that flaw.
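Here’s a toy sketch of what that diversification could look like (the provider names and numbers are hypothetical): spread tasks deterministically across several independent providers, so a flaw discovered in any one of them only touches a fraction of the work rather than all of it.

```python
# Toy sketch with made-up provider names: spread tasks deterministically
# across several independent model providers so that a flaw discovered in
# any one of them only affects a fraction of the overall work.
import hashlib

PROVIDERS = ["provider_a", "provider_b", "provider_c"]  # hypothetical names

def assign_provider(task_id: str) -> str:
    # Stable hash so the same task always lands on the same provider,
    # while the overall workload is split roughly evenly.
    digest = hashlib.sha256(task_id.encode()).hexdigest()
    return PROVIDERS[int(digest, 16) % len(PROVIDERS)]

def affected_share(flawed_provider: str, task_ids: list) -> float:
    # Fraction of tasks that would need re-checking if one provider's
    # models turn out to have a systematic flaw or bias.
    hits = sum(assign_provider(t) == flawed_provider for t in task_ids)
    return hits / len(task_ids)

tasks = [f"task-{i}" for i in range(9000)]
print(affected_share("provider_b", tasks))  # roughly 1/3, not 1.0
```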

Written by London Lowmanstone

I’m a visionary, philosopher, and computer scientist sharing and getting feedback (from you!) on ideas I believe are important for the world.
