Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
AI Detector Believes the U.S. Constitution Was Written by AI
July 18, 2023
It sure made me laugh to read the Ars Technica article from July 14 reporting that an AI detector had determined that a section of the U.S. Constitution was almost certainly written by AI. Really a hoot – and accompanied by an AI-generated image of James Madison writing the U.S. Constitution using AI. Honestly, it was fun to read this article over the weekend!
Today, professors are flunking students they wrongly believe used AI. Time to slow that train down.
So far, due to false positives, AI detectors like GPTZero, ZeroGPT and OpenAI’s Text Classifier cannot be trusted to accurately detect text written by large language models (LLMs) such as ChatGPT.
Selections from the Bible also show up as AI-generated. Good grief.
So how do AI detectors work? They have more similarities than differences, and they all start from the same premise: “There’s an AI model that has been trained on a large body of text (consisting of millions of writing examples) and a set of surmised rules that determine whether the writing is more likely to be human- or AI-generated.”
GPTZero, for example, is a neural network trained on “a large, diverse corpus of human-written and AI-generated text, with a focus on English prose,” according to the service’s FAQ. The system then uses properties like “perplexity” and “burstiness” to evaluate the text and make its classification.
I had never heard either of those terms. In machine learning, perplexity is a measurement of how much a piece of text deviates from what an AI model has learned during its training. As Dr. Margaret Mitchell of AI company Hugging Face told Ars, “Perplexity is a function of ‘how surprising is this language based on what I’ve seen?'”
The closer the output is to the training data, the lower the perplexity rating. Humans are much more chaotic writers – that’s the theory – but humans sometimes write with low perplexity, especially when imitating the formal style used in law or certain types of academic writing. Also, many of the phrases we use are surprisingly common.
As the article notes, if we’re guessing the next word in the phrase “I’d like a cup of _____,” most people would fill in the blank with “water,” “coffee,” or “tea.” A language model trained on a lot of English text would do the same because those phrases occur frequently in English writing. The perplexity of any of those three results would be very low because the prediction seems fairly certain.
But here’s a less common completion: “I’d like a cup of spiders.” Both humans and a well-trained language model would be quite surprised (or “perplexed”) by this sentence, so its perplexity would be high. In theory, it’s unlikely that a machine would write this.
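To make the idea concrete, here’s a minimal sketch that scores those two sentences with the open-source GPT-2 model via Hugging Face’s transformers library. GPTZero’s actual model and scoring method are not public, so treat this as an illustration of the concept of perplexity, not a reproduction of any product.

```python
# Minimal perplexity sketch using GPT-2 (an open model, not the one any
# commercial detector uses). Perplexity is the exponential of the average
# negative log-likelihood of each token given the tokens before it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average
        # cross-entropy loss over its next-token predictions.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("I'd like a cup of coffee."))   # low: very predictable
print(perplexity("I'd like a cup of spiders."))  # higher: "surprising" ending
```

Feed the function a passage these models have effectively memorized and the score should come out very low – which is exactly the false-positive mechanism at work in the Constitution example below.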
Let’s go back to the U.S. Constitution. Because the Constitution’s language is deeply ingrained in these models’ training data, its perplexity score is very low, so detectors classify it as AI-generated – a false positive.
GPTZero creator Edward Tian told Ars Technica, “The US Constitution is a text fed repeatedly into the training data of many large language models. As a result, many of these large language models are trained to generate similar text to the Constitution and other frequently used training texts. GPTZero predicts text likely to be generated by large language models, and thus this fascinating phenomenon occurs.”
Another property of text evaluated by GPTZero is “burstiness,” which refers to the phenomenon where certain words or phrases appear in rapid succession or “bursts” within a text. Essentially, burstiness looks at the variability in sentence length and structure throughout a text.
Humans often exhibit a dynamic writing style, resulting in text with variable sentence lengths and structures. For example, we might write a long, complex sentence followed by a short, simple one, or we might use a burst of adjectives in one sentence and none in the next. This variability is a natural outcome of human creativity and spontaneity.
AI-generated text tends to be more consistent and uniform. Language models, which are still in their infancy, generate sentences with more regular lengths and structures. This lack of variability can result in a low burstiness score, indicating that the text may be AI-generated.
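GPTZero doesn’t publish its exact burstiness formula, so the sketch below is only an assumption: it uses the standard deviation of sentence lengths (in words) as a rough proxy for “variability in sentence length and structure.”

```python
# Rough burstiness proxy: standard deviation of sentence lengths in words.
# GPTZero's real formula is not public; this is only an illustration of
# the general idea of sentence-level variability.
import re
import statistics

def burstiness(text: str) -> float:
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("The meeting ran long. Afterward, exhausted and slightly annoyed, "
         "we argued about lunch for twenty minutes. Fine. Tacos won.")
print(burstiness(human))  # high: sentence lengths swing wildly
```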
But once again, burstiness isn’t a foolproof metric for detecting AI-generated content. A human writer may write in a highly structured, consistent style, resulting in a low burstiness score. Alternatively, an AI model might be trained to emulate a more human-like variability in sentence length and structure, raising its burstiness score. And studies show that as AI language models improve, their writing looks more and more like human writing.
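Putting the two signals together, a toy rule-based detector might look like the sketch below, reusing the perplexity and burstiness helpers from the earlier sketches. The thresholds are invented purely for illustration; real products like GPTZero learn their decision rules from labeled training data rather than hard-coding cutoffs.

```python
# Toy classifier combining the two signals discussed above. The thresholds
# are hypothetical; real detectors train a neural network on labeled
# human/AI text instead of hard-coding rules like these.

def toy_detector(text: str) -> str:
    ppl = perplexity(text)    # from the earlier sketch: low = predictable
    burst = burstiness(text)  # from the earlier sketch: low = uniform
    if ppl < 30.0 and burst < 4.0:      # predictable AND uniform
        return "likely AI-generated"
    return "likely human-written"

# A memorized text like the Constitution scores very low perplexity,
# so a rule like this one misfires on it: a false positive.
```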
AI writing detectors can make their best guess, but the margin of error is too large to rely on them for an accurate result.
A 2023 study from researchers at the University of Maryland demonstrated that detectors for AI-generated text are not reliable in practical scenarios and that they perform only marginally better than a random classifier.
Not only do detectors return false positives, but both they and watermarking schemes (which seek to alter word choice in a telltale way) can easily be defeated by “paraphrasing attacks” that modify language model output while retaining its meaning.
“I think they’re mostly snake oil,” said AI researcher Simon Willison of AI detector products. “Everyone desperately wants them to work—people in education especially—and it’s easy to sell a product that everyone wants, especially when it’s really hard to prove if it’s effective or not.”
I am in full agreement with Mr. Willison!
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson