
Google Translate's AI is doing something incredibly creepy and nobody is totally sure why

Computers being terrifying again, what a surprise!

23 July 2018

‘Never trust a computer for anything’ is a great piece of personal advice. Stick to analogue, where it’s safe and nothing can hurt you. Computers are the scary ones - particularly when they’ve got a ghost in them and are prophesying the coming apocalypse by spouting rambling religious doctrine.

Which is what Google Translate is doing at the moment, by the way, on your computer. On loads of computers, actually - just going absolutely buckwild and screeching terrifying biblical passages when you type things into it.

Granted, you’ve got to type specific things for it to go off on one, but when it returns a horrific prediction of our deaths, it is not simply translating the same passage into another language - it’s producing it out of nowhere, explicitly ignoring what you’ve put in the input box. It’s possessed.

Case in point: if you type “dog” into Google Translate 19 times, then translate it from Maori to English, the following happens:

Yeah, no, no thanks. Don’t like that one bit, really.

And nor do many others, seeing as a Reddit group has been set up to gather examples and discuss why Google Translate is chanting deeply creepy passages like this, just because you typed “dog” 19 times into a box.

But it’s not just “dog”, or Maori. Look:


Hate it all. HATE IT.

So why is it doing this? Well, Justin Burr, a Google spokesperson, spoke to Motherboard and allayed fears that Google was scouring through personal messages or emails:

“Google Translate learns from examples of translations on the web and does not use ‘private messages’ to carry out translations, nor would the system even have access to that content.

“This is simply a function of inputting nonsense into the system, to which nonsense is generated.”

However, the most likely reason - according to Alexander Rush, an assistant professor at Harvard - is a change in the translation method that Google made a couple of years ago, when it started using a technique called “neural machine translation”.

Essentially, it’s all a bit like Google’s DeepDream, and it has an unfortunate predilection to “hallucinate” weird outputs. Rush said:

“The models are black-boxes, that are learned from as many training instances that you can find.

“The vast majority of these will look like human language, and when you give it a new one it is trained to produce something, at all costs, that also looks like human language. However if you give it something very different, the best translation will be something still fluent, but not at all connected to the input.”
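You can get a feel for what Rush is describing with a toy sketch. This is emphatically not Google’s actual system - it’s a tiny, made-up “translator” built on a bigram language model trained on a short invented corpus. The point it illustrates is the same, though: when the input contains words the model knows, generation is loosely anchored to them; when you feed it nonsense (say, “dog” 19 times, where “dog” isn’t in its vocabulary), the conditioning signal is useless and the model falls back entirely on its fluency prior, producing text that reads like its training data but has nothing to do with the input.

```python
import random

# Invented training corpus for illustration - stands in for whatever
# text the real model was trained on.
CORPUS = ("in the beginning there was light and the light was good "
          "and the word was with the people and the people saw the light").split()

# Bigram language model: the decoder's "fluency prior".
bigrams = {}
for a, b in zip(CORPUS, CORPUS[1:]):
    bigrams.setdefault(a, []).append(b)

def translate(source_tokens, length=8, seed=0):
    """Toy 'decoder': start from a source word it recognises if there is
    one; otherwise ignore the input and sample purely from the prior -
    i.e. hallucinate fluent but unrelated output."""
    rng = random.Random(seed)
    known = [t for t in source_tokens if t in bigrams]
    word = rng.choice(known) if known else rng.choice(list(bigrams))
    out = [word]
    for _ in range(length - 1):
        # Follow the bigram chain; restart from the prior at dead ends.
        word = rng.choice(bigrams.get(word, list(bigrams)))
        out.append(word)
    return " ".join(out)

# Nonsense input: 19 repetitions of an out-of-vocabulary token.
print(translate(["dog"] * 19))
```

Run it and you get a perfectly fluent-looking string drawn from the training corpus, with no trace of “dog” anywhere - a crude miniature of the “still fluent, but not at all connected to the input” behaviour Rush describes.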

One theory points to the fact that Google may be using the Bible (amongst other texts) to train its model, although Google declined to say whether this was the case.

Either way, a scientific and plausible reason for all of this in no way makes it any less terrifying - it’s a good job we’ve got a digital exorcist on speed dial in this office. It’s what our IT guy insists on calling himself. He’s got a priest’s collar on and everything. Bit weird.

(Image: Getty)