Why I signed the open letter to pause AI…

While I am somewhat concerned about regulation stifling AI development, I’m absolutely terrified at the thought of a world without AI regulation*…

The Thrill of ChatGPT

When ChatGPT was released at the end of November last year, I was thrilled. Finally! We had an AI solution that actually worked the way AI was supposed to - it could understand what I was asking for, follow conversations, create content for me that ranged from travel itineraries to business plan outlines, and so much more. I felt as excited about ChatGPT as I did when I first used Google**. Like Google, ChatGPT was available to virtually everyone with an internet connection***. To me, and to the hundreds of millions of other ChatGPT users, this was a game-changer in terms of bringing high-end AI to a broad audience. Even better, it was free****!

It didn’t take me very long to incorporate it into my daily routine.

Though ChatGPT has its faults, such as its tendency to hallucinate when it doesn't know the answer to something, or its unreliability, with functionality like data analysis switching on and off, I was willing to tolerate them in exchange for the content it created.

Suffice it to say, I am a fan.

Pressing Pause

So why, then, did I so eagerly sign a letter calling for a pause on the experiments? After all, these are the experiments that begat ChatGPT and a slew of other generative AI tools.

The simple reason is: I believe the unregulated development of AI systems could be wholly destructive.

The Risks of Unregulated AI Development

This may sound overly dramatic, but imagine a world in which it is hard to tell whether the content you read is real or fake. Or, worse yet, whether the images, voices and videos you see are fake. Right now we can (mostly) still tell the difference, but what happens when the AI improves and we can no longer distinguish the real from the fake?

While the photos of the pope in a puffer jacket are entertaining, what about less benign uses? Put it this way: we are not far from a dystopian world where it is possible for someone to fake a video call from your child, telling you that they have been kidnapped and that you need to send ransom money to get them back.

It’s scenarios like this that reinforce my belief that we need to regulate AI development to ensure that it is being used ethically and responsibly.

The Open Letter to Pause Giant AI Experiments

The open letter to pause giant AI experiments was written to address concerns about AI systems with “human-competitive intelligence”, and the need to put appropriate measures in place to ensure the safety of these systems.

The open letter calls for a pause of at least six months to the training of AI systems more powerful than the current version of ChatGPT (i.e. GPT-4). The ask is that “AI research and development [is] refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

**mic drop**

While I believe some of the open letter’s signatories may be agitating for a pause in order to gain ground on their own AI initiatives, I wholeheartedly support the prioritisation of efforts to make AI safer.

What I Think Should Happen During the Six-Month Pause

The open letter includes a set of asks for the six-month pause period. These include: new AI regulatory authorities, oversight and tracking of AI systems, watermarking systems to help distinguish real from fake, auditing and certification of AI systems, liability for AI-caused harm, funding for safety research, and priority research on AI’s economic and political impacts.

On top of those items, I’d like to add my own wishlist*****:

  • A total ban on deepfakes and other technologies that allow us to mimic human voices and faces (why is this functionality actually needed?)

  • Mandates that all AI systems:

    • Provide clear notice and disclosure when AI is being used.

    • Provide the ability to opt out of the use of AI, and revert to a human.

    • Are disparity tested before being released to users. That is, before an AI system can be used, it must be tested to ensure that there are no disparities between how it performs for different user groups (gender, race, etc.) - a rough sketch of what such a check might look like follows this list.

    • Continue to be tested after being released to users. This is to ensure that AI responses remain valid, even as the training data evolves.
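To make the disparity-testing ask a little more concrete, here is a minimal sketch of what a pre-release check could look like. It assumes a labelled evaluation set where each example is tagged with a group (e.g. gender), a hypothetical model.predict interface, and an illustrative accuracy metric and gap threshold - none of these details come from the open letter itself.

```python
# Illustrative sketch of a pre-release disparity check (hypothetical model and data).
# The idea: measure the same quality metric per user group and flag the system
# if performance differs between groups by more than an agreed threshold.
from collections import defaultdict

def disparity_check(model, eval_set, max_gap=0.05):
    """eval_set: list of (input, expected_output, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for x, expected, group in eval_set:
        total[group] += 1
        if model.predict(x) == expected:  # hypothetical predict() interface
            correct[group] += 1
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return gap <= max_gap, accuracy, gap

# Example usage (illustrative):
# passed, per_group_accuracy, gap = disparity_check(my_model, labelled_eval_set)
# if not passed:
#     hold the release and investigate the groups with the lowest accuracy
```

The specific metric and threshold would of course need to be agreed per system; the point is simply that the check is automatable and could be made a release gate.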

Conclusion

As excited as I am about this new generation of AI, I am also terrified of its unfettered use. In many cases, the tools are “too powerful” to be released without a warning label, let alone without regulation or ownership of the consequences.

This open letter is not without its flaws, and people may debate its contents and the motivations of its signatories, but overall I believe it is a step in the right direction.

I urge you to read the open letter and sign it if it resonates with you!

Things You Can Say to Sound Smart About AI

  • AI development needs to be regulated to ensure ethical usage of AI systems.

  • The potential for AI to create fake content that is indistinguishable from real content is absolutely terrifying and ripe for abuse.

  • The proposed six-month pause should be used to develop AI regulation that ensures ethical usage of AI systems.

  • At a minimum, we should call for a ban on deepfakes, and for watermarking to distinguish generated/fake content from real.

Footnotes

*Okay, there is some AI regulation in the works, such as the EU AI Act, the AI Bill of Rights, and the beginnings of some laws limiting the use of AI for military purposes, but these are at the very, VERY early stages, and none of them have been enacted into law.

**For those of you who don’t remember a life before Google - congrats on your (relative) youth. I have a distinct memory of being blown away by how Google was able to do things like search for phrases and unearth obscure articles on trivialities - essentially unlocking a part of the internet that was previously un-findable.

***Okay, not everybody with an internet connection has access to ChatGPT. It is intended for people over the age of 18 (unless supervised), is not available in certain countries (see here for a list of the supported countries), and has recently been (temporarily) banned in Italy due to privacy concerns.

****Some of you may argue that ChatGPT Plus (and therefore GPT-4) is not free, and that there are some services that are chargeable, but the fact remains that the research preview version of ChatGPT is free for personal and commercial use. Read here for a more detailed description of the free and paid options.

*****I got inspiration for my wishlist from the AI Bill of Rights. Digest version here.

Elaine Ng

Founder of AI Shophouse.
