Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Snapchat Warns of Hallucinations with Its New Conversation Bot

March 2, 2023

Ars Technica reported on February 27 that Snapchat is rolling out an experimental AI-powered conversational chatbot called “My AI,” powered by ChatGPT-style technology from OpenAI. My AI will be available to Snapchat+ subscribers, who pay $3.99 a month.

Users can personalize the AI bot by giving it a custom name, and conversations with it will take place in an interface similar to a regular chat with a human. “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” Snap CEO Evan Spiegel said.

Snap says that, just like its GPT-powered cousins ChatGPT and Bing Chat, My AI is prone to “hallucinations,” which are unexpected falsehoods generated by an AI model. On this point, Snap includes a rather lengthy disclaimer in its My AI announcement post:

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.”

“Sorry in advance!” I just love that part.

Among machine-learning researchers, “hallucination” is a term that describes when an AI model makes inaccurate inferences about a subject or situation that isn’t covered in its training data set. It’s a well-known drawback of current large language models such as ChatGPT, which can easily conjure convincing-sounding falsehoods, such as academic papers that don’t exist and inaccurate biographies.

Despite Snap’s strong disclaimer about My AI’s proclivity for making stuff up, the firm says its new Snapchat bot will be pinned above conversations with friends in its own tab in the Snapchat app and will “recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal.”

Snap does not say how the same bot that cannot be “rel[ied] on for advice” can also plan an accurate and safe “hiking trip for a long weekend.” Critics of the galloping rollout of generative AI have seized on this kind of contradiction to argue that these chatbots may not be ready for prime time, especially when presented as a reference.

While people have made a game of trying to circumvent ChatGPT’s and Bing Chat’s safeguards, Snap has reportedly trained its GPT model not to discuss sex, swearing, violence, or political opinions. Those restrictions may be especially necessary to avoid the unhinged behavior Bing Chat demonstrated a few weeks ago.

“My AI” may have something powerful running under the hood: OpenAI’s next-generation large language model. According to The Verge, Snap is utilizing a new OpenAI enterprise plan called “Foundry,” which OpenAI quietly rolled out in February. It gives companies dedicated cloud access to OpenAI’s GPT-3.5 and “DV” models. Several AI experts have speculated that “DV” may be equivalent to GPT-4, the rumored high-powered follow-up to GPT-3.

What does that mean? It means the “hallucinations” Snap mentioned in its news release may come faster and be more detailed than ChatGPT’s. And considering the highly convincing nature of other GPT models, people may just believe it, warnings or no warnings.

I suspect we’re in for a wild ride!

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Email:   Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson