“A lot of people are living with mental illness around them. Either you love one, or you are one.” – Mark Ruffalo, Actor.
After detailed studies by multiple organisations and research facilities, a team of scientists from the Massachusetts Institute of Technology (MIT) has been able to isolate the driving force behind how basic AI and machine learning systems "think" (setting aside AGI and advanced NLP).
Norman.ai, billed as the world's first psychopathic AI (aptly named after Norman Bates, the character from Alfred Hitchcock's cult classic film Psycho), was developed by the research team at MIT and has become a textbook case study of black box algorithms.
Black box algorithms are input/output systems whose internal workings are opaque to the observer. Because the processing inside cannot be inspected directly, the only way to infer the possible outcomes is to carefully curate the inputs fed into the black box and study the outputs that emerge.
In practice, the black box could be almost anything in a typical working environment: a logic gate, a trained neural network, or even the human brain!
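The probing idea above can be sketched in a few lines. This is a minimal illustration, not code from MIT's project: `mystery` is a hypothetical stand-in for any opaque system, and all we do is curate inputs and record the outputs.

```python
def probe(black_box, inputs):
    """Characterise a black box purely from input/output pairs."""
    return {x: black_box(x) for x in inputs}

def mystery(x):
    # Stands in for any opaque system; in a true black box
    # we would not be able to read this body at all.
    return (3 * x + 1) % 7

# Curate a set of inputs and observe what comes out.
observations = probe(mystery, range(5))
print(observations)  # -> {0: 1, 1: 4, 2: 0, 3: 3, 4: 6}
```

From the observation table alone, an analyst can form hypotheses about the hidden rule without ever opening the box, which is exactly the position we are in with large learned models.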
Norman.ai will go down as an important testament to the principle of "runaway data", a term popularised in The Black Box Society, written by Frank Pasquale in 2015. The book examined the dangers of runaway data and the ways machine learning systems go astray because of the primary source data they are fed.
Put simply, data is the lifeblood of an AI: its functioning depends entirely on the volume of raw and processed data it consumes. With that dependence in mind, the direction Norman took is no surprise!
The Rorschach test
This psychoanalytical test was designed and conceptualised by the Swiss psychiatrist Hermann Rorschach in 1921. It was intended to map the subconscious and unconscious portions of a subject's personality, and its perceived success has made it one of the predominant tests for detecting personality disorders in humans.
The test involves exposing the subject to a series of ink-blot images (hence it is sometimes called the Rorschach ink-blot test) and recording their observations for later analysis, whether through psychological interpretation or algorithms.
The team of scientists at MIT ran a two-year series of experiments with multiple AIs before finally arriving at Norman. The work began in 2016, when they created the 'Nightmare AI' (the Nightmare Machine), whose sole purpose was to turn pre-existing visuals into scary, horrific images. The resulting data was then curated through over 2 million votes on social media. The image below shows the Colosseum in Rome, converted into a horrific representation by the Nightmare AI.
Stage 2 of the test commenced in 2017, when the team developed ‘Shelley’, which was the world’s first collaborative horror writer!
Shelley is a deep learning powered AI programmed to collect and process eerie stories from r/nosleep, a Reddit forum where writers share their original horror stories. Shelley went on to compose nearly 200 horror stories in collaboration with human co-writers. This was later followed by the creation of 'Deep Empathy', an AI designed to evoke empathy in humans by processing imagery of natural disasters and losses of human life.
Stage 3 of the experiment gave birth to Norman.ai, the world's first psychopathic AI. Norman was exposed to all the data from the first two stages of the experiment, produced by Shelley and the Nightmare AI, including input from Deep Empathy, and was then trained on some of the scariest and darkest corners of Reddit. When shown Rorschach ink blots, Norman consistently described gruesome scenes where a standard image-captioning AI saw benign ones. The experiment drove home a crucial lesson: artificial intelligence and machine learning (ML) go wrong when the input data is biased or wrong. In other words, when an AI algorithm becomes biased or unfair, the cause traces directly back to the data fed into it!
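The "same algorithm, different data" effect can be demonstrated with a deliberately tiny toy model. This is a hedged illustration only: Norman itself was a deep image-captioning network, whereas the sketch below uses a simple word-frequency model, and the two corpora are made-up stand-ins for dark versus neutral training data.

```python
from collections import Counter

def train(corpus):
    """Build a toy unigram model: word -> frequency in the corpus."""
    counts = Counter()
    for caption in corpus:
        counts.update(caption.lower().split())
    return counts

def describe(model, k=1):
    """The model 'describes' the world with its most frequent words."""
    return [word for word, _ in model.most_common(k)]

# Identical training procedure, radically different data.
dark_corpus = ["man jumps to death", "man shot dead"]
neutral_corpus = ["bird sitting on branch", "bird perched on branch"]

print(describe(train(dark_corpus)))     # -> ['man']
print(describe(train(neutral_corpus))) # -> ['bird']
```

Nothing in the code changed between the two runs; only the data did, yet one model's vocabulary is grim and the other's is benign. That, in miniature, is the lesson Norman taught.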
On a lighter note, the silver lining to this principle is that the same data dependence can be harnessed for good when an AI is trained for a specific function. Take the example of Chironx.ai, an AI that detects complex diseases by evaluating large volumes of medical diagnostic imagery, resulting in faster and more robust diagnoses.