On Wednesday morning, Microsoft introduced Tay, an experimental online chatbot designed to converse with humans and learn from their responses.
The internet being what it is, it didn't take long for things to go horribly wrong. Within hours, trolls had the bot spouting outrageously racist and inappropriate comments, much to the amusement of the online community. Microsoft was eventually forced to shut Tay down and delete her offensive tweets, saying she needed some "adjustments." We used Visibrain to find out what people had to say.