A few hours were enough to turn Tay, a Microsoft bot, into a foul-mouthed, racist, genocidal maniac. In discourse alone, of course, for it was just an AI experiment gone awry. Tay was supposed to learn over time from chats and conversations with humans. It seems, however, that bad company is as harmful to robots as it is to humans.

Microsoft attributed the offensive responses and comments tweeted by Tay, which was meant to be a teenage bot, to a coordinated effort by strangers. They may simply have been pranksters who crossed several red lines beyond the boundaries of decency and a good sense of humor. Or they may have been people who still nurse ideas that are doomed in open societies: ideas that can only survive behind the anonymity of nicknames and in the dark corners of the web. In any case, it was an abuse of the "repeat after me" mechanism upon which much of Tay's learning was based.

We suggest that in the meantime Tay open an account on Instagram and learn photography. And the next time it comes online on Twitter, Microsoft should find it suitable company. Tell me who your friends are and I will tell you who you are.