General Discussion
Microsoft's chatbot Tay learns how to be a racist from Twitter users
The internet can be an ugly place, which is why every parent knows to monitor their child's online interactions.
That is a lesson Microsoft did not heed until after Tay, its freshly launched artificial-intelligence Twitter chatbot, started parroting racist and sexist remarks from other users' tweets.
Less than a day after launching Tay, Microsoft deleted all of the chatbot's messages, including tweets praising Hitler and genocide and tweets spouting hatred for African Americans.
Yeah, it got pretty ugly.
Microsoft (Nasdaq: MSFT) unveiled Tay on Wednesday as an experiment in artificial intelligence. Tay's task was to demonstrate conversational understanding.
http://www.bizjournals.com/seattle/blog/techflash/2016/03/microsoft-s-chat-box-learns-how-to-be-a-racist.html?ana=e_sea_rdup&s=newsletter&ed=2016-03-24&u=ColXVN5SPzQtLHFP87ho2w07857290&t=1458835269&j=71710192
0rganism
(23,933 posts)they set up "Tay" to learn behavior from its social media peers, right? well, mission accomplished, eh.
now maybe they need to upgrade "Tay" with some basis for ethical discernment so it can appropriately weigh the input it receives. that is going to be challenging.
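One minimal form of the "ethical discernment" this comment suggests would be to score incoming messages and discard toxic ones before the bot learns from them. The sketch below is purely illustrative, not Microsoft's actual approach; the blocklist, the scoring rule, and the threshold are all assumptions made up for this example.

```python
# Hypothetical sketch: drop toxic messages from the training stream.
# The blocklist and threshold are illustrative assumptions only.

TOXIC_TERMS = {"hate", "genocide", "slur"}  # stand-in blocklist

def toxicity_score(message: str) -> float:
    """Fraction of words in the message that match the blocklist."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / len(words)

def filter_training_input(messages, threshold=0.1):
    """Keep only messages whose toxicity score is below the threshold."""
    return [m for m in messages if toxicity_score(m) < threshold]

clean = filter_training_input([
    "tell me a joke",
    "i hate everyone, genocide now",
])
```

A real system would need far more than a word list (context, sarcasm, and coordinated abuse all defeat simple keyword filters), which is exactly why the commenter calls this challenging.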
TexasBushwhacker
(20,162 posts)the US is, MISSION ACCOMPLISHED!
B2G
(9,766 posts)Lolol. You're cute.
ohnoyoudidnt
(1,858 posts)Some of the programmers had to expect this outcome.
FSogol
(45,466 posts)LongTomH
(8,636 posts)I wonder what their reaction to this 'chatbot' will be!