Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Say Hello to Norman, the World’s First ‘Psychopathic’ AI

June 11, 2018

OK, I probably worry too much. But when I read in ZDNet about Norman, the world's first psychopathic AI, I thought a whole lot of worrying was justified. And I think it is pretty likely that Norman was named for Norman Bates in the movie Psycho.

Researchers at the Massachusetts Institute of Technology (MIT) developed Norman. Their experiment was based on the 1921 Rorschach test, which uses a person's perception of inkblots to identify personality traits, including those deemed psychopathic, as well as underlying thought disorders.

Norman is an AI experiment that was given "extended exposure to the darkest corners of Reddit," according to MIT, in order to explore how datasets and bias can influence the behavior and decision-making capabilities of artificial intelligence.

"When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it," the researchers say. "The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set." I couldn't agree more.

Norman is an AI system trained to perform image captioning, in which deep learning algorithms are used to generate a text description of an image. As you might imagine, since Norman was exposed to the depths of Reddit and a select subreddit dedicated to graphic content full of images of death and destruction, his training data is far from what a standard AI is fed.
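For readers curious about what image captioning looks like in practice, here is a minimal sketch using a publicly available captioning model through the Hugging Face transformers library; the particular model and the file name are illustrative choices on my part, not anything from MIT or Norman.

```python
# Minimal image-captioning sketch (assumes the transformers and Pillow packages are
# installed and the public "nlpconnect/vit-gpt2-image-captioning" model is available).
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Any local image path or image URL will do; "inkblot.png" is purely illustrative.
captions = captioner("inkblot.png")
print(captions[0]["generated_text"])   # e.g. "a black and white photo of ..."
```

Norman works on the same basic idea, a deep network that maps an image to a sentence, except that MIT trained its captioner on captions scraped from that grim subreddit rather than on a conventional captioning dataset.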

In a stunning example of artificial intelligence gone wrong, MIT performed the Rorschach inkblot tests on Norman, with a standard image captioning neural network used as a control subject for comparison.

Prepare to be disturbed.

In one inkblot test, a standard AI saw "a black and white photo of a red and white umbrella," while Norman saw "man gets electrocuted while attempting to cross busy street."

In another, the control AI described the inkblot as "a black and white photo of a small bird," while Norman described the image as "man gets pulled into dough machine."

Norman's perceptions were downright alarming.

Due to ethical concerns, MIT introduced bias only through image captions from the subreddit, which were later matched with randomly generated inkblots. In other words, the researchers did not use real images of people dying during the experiment. Scant comfort there, given how Norman saw the inkblots. I can only imagine what he would have reported the images to be if he had seen actual images of death.

In one way, we shouldn't be surprised; our AI systems have already proven to be fraught with human biases. But this experiment highlights the need for an ethical approach to AI. Feed an AI a poor dataset and you can alter its perception of the world, and you cannot rely on the decisions it makes.

Will someone, perhaps even a nation state, deliberately make an evil AI? Of course they will. It's what humans do. Someone, as Elon Musk might say, will summon the demon. We had darn well better get a firm handle on the ethics of AI before that happens.

Phone: 703-359-0700
Digital Forensics/Information Security/Information Technology
https://www.senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson