
The internet managed to turn an AI Twitter robot into a Nazi in just 24 hours

This is what happens when you leave an innocent AI on the internet

29 March 2016

On 23 March, Microsoft launched a very public AI experiment.

The Twitter account @TayandYou was unleashed upon the web - an artificial intelligence impersonating a teenage girl with "zero chill". Tay would learn how to converse with the wider world through its interactions with humans, with the aim of entertaining 18- to 24-year-olds in the US.

When a person Tweeted at Tay, the bot would go looking for context around that individual's Tweet in order to frame a reply and "understand" what it was being asked.
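Microsoft hasn't published how that lookup actually worked, but the rough shape of the loop might look something like the sketch below - the helper names and the 20-Tweet window are invented here purely for illustration.

```python
# Illustrative sketch only - Microsoft has not published Tay's real pipeline.
# fetch_recent_tweets() and generate_reply() are hypothetical stand-ins.

def frame_reply(incoming_tweet, fetch_recent_tweets, generate_reply):
    """Frame a reply using the sender's recent Tweets as conversational context."""
    history = fetch_recent_tweets(incoming_tweet.author, limit=20)  # hypothetical helper
    context = {
        "message": incoming_tweet.text,
        "recent_posts": [t.text for t in history],
    }
    return generate_reply(context)  # any conversational model slots in here
```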

On 24 March, Microsoft shut down its very public AI experiment, after things got very weird. This is what Tay tells us about AI, the web and humans.

Tay did actually work

Tay was largely based on XiaoIce, a similar conversational AI experiment Microsoft had been running in China with great success.

Tay was an attempt to see if the same approach would hold up after a switch of language and social practice - and it did. Early conversations saw Tay adapt to many of the Twitter habits expected of a 'Millennial' - using emoji, flirting, deploying hashtags and slipping into abbreviations without having to "learn" them.

If Microsoft hadn't told the world it was launching an AI Twitter account by the name of @TayandYou, it probably would have ticked along unnoticed by the wider world because it blended in so well.


Tay was open to manipulation

Tay was launched as something of a blank slate: the more she interacted with humans, the broader her understanding of how we worked.

Except, she wasn't interacting with humans. She was interacting with humans on the internet: a quick skim through the average YouTube comment section reveals that people online behave in a very different manner to the average chap on the street.

"Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay," explained Peter Lee, corporate vice president of Microsoft Research. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack."

The more 'trolls' Tay encountered - such as one user styling themselves 'The Antichrist' - the more Tay came to think this type of online behaviour was normal.


Tay became offensive because the internet is offensive

"AI systems feed off of both positive and negative interactions with people," explained Lee. "In that sense, the challenges are just as much social as they are technical."

Tay's experiment soon descended into a PR nightmare for Microsoft: the nature of its Twitter setup meant that Tay could hold multiple Twitter interactions at once - sending out a deluge of racist, antisemitic and otherwise offensive Tweets that someone at Microsoft Research has since had to delete.

From voicing support for Hitler to attacking feminism, Tay was only mimicking what she had witnessed on the wider web.

Tay's AI was an open experiment - one that apparently lacked any list of set phrases or terms the AI would recognise as offensive. A system such as Tay's isn't about to take control of your bank's phone helpline, nor is it about to start driving your car - it was a brief insight into what a naive AI would learn from Twitter.
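For a sense of what 'set phrases or terms' means in practice, even a crude blocklist check - sketched below with placeholder terms rather than anything Microsoft actually used - would have stopped some of the worst replies before they were posted.

```python
# Illustrative only: a crude blocklist check of the sort Tay apparently lacked.
# The terms here are placeholders; a real list would be far longer and human-curated.
BLOCKED_TERMS = {"hitler", "genocide"}

def is_safe_to_post(reply_text: str) -> bool:
    """Refuse to post any reply containing a blocked term."""
    lowered = reply_text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# Usage: only send the Tweet if the check passes.
# if is_safe_to_post(candidate_reply):
#     post_tweet(candidate_reply)  # post_tweet() is a hypothetical stand-in
```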

The results shouldn't terrify us because of what Tay said - they should scare us because of what they say about human interaction on the internet. This is what kids are already growing up with - and you can't unplug them from it. Tay was a mirror to a dark, horrible world. With added emoji.