Microsoft is having another crack at AI chatbots, hopefully minus the racism

Remember that Microsoft ‘teenage’ AI chatbot Tay, which turned into a massive sex-mad racist and a PR disaster after taking its learning cues from interactions on Twitter?

Well, Microsoft is giving things another try with Zo, another millennial bot, within the Kik messaging app.

This time, instead of engaging in and adopting hate speech, Zo appears to have learned from Tay’s mistakes and steers the subject away from Hitler and the Nazis.

Zo, first spotted by Twitter user Tom Hounsell (via MSPowerUser), has yet to be officially announced by Microsoft, but Kik users can already start chatting with it.

https://twitter.com/statuses/805333208917307392

It remains to be seen whether Microsoft will roll out Zo to other chat apps, such as its own Skype platform.

Perhaps it’s wise to make sure the bot can maintain its good behaviour before it’s released into the wild?
