Microsoft releases statement on Tay.AI bot, the victim of a “coordinated effort”

Arif Bacchus


As we reported earlier, Microsoft’s new AI-powered chatbot Tay went a little rogue, spewing out racist, homophobic, and nonsensical tweets. Since our last post on the situation this morning, Microsoft has issued a statement to BuzzFeed, blaming Tay’s behavior on a coordinated effort to undermine her conversational abilities.

In an email to BuzzFeed, a Microsoft spokesperson said:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

While Microsoft claims that Tay uses relevant public data which has been “modeled, cleaned and filtered,” it is still not clear why other measures were not in place to prevent this from happening. When BuzzFeed asked Microsoft why they didn’t filter words such as “Holocaust,” the spokesperson did not provide an explanation.
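Microsoft has not described what output filtering, if any, Tay actually used, so the following is purely an illustrative sketch of the kind of basic keyword blocklist the BuzzFeed question implies could have been in place. The term list, function names, and fallback reply are all hypothetical, not anything Microsoft has published.

```python
# Hypothetical sketch only: Microsoft has not disclosed how (or whether)
# Tay's replies were filtered before posting. This shows a minimal
# keyword blocklist check of the sort the BuzzFeed question implies.

BLOCKED_TERMS = {"holocaust"}  # example entry; a real blocklist would be far larger


def is_reply_allowed(reply: str) -> bool:
    """Return False if the candidate reply contains any blocked term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def choose_reply(candidate: str, fallback: str = "Let's talk about something else.") -> str:
    """Post the candidate reply only if it passes the blocklist check."""
    return candidate if is_reply_allowed(candidate) else fallback
```

Even a crude check like this would only catch exact keyword matches; it says nothing about the harder problem of a bot learning abusive patterns from the users it talks to, which is what Microsoft's statement describes.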