This article was published March 25, 2016, at 11:11.
Microsoft has had to shut down its chatbot Tay (software designed to simulate intelligent conversation, like Siri or Cortana) over racist, sexist, and xenophobic comments. The artificial-intelligence bot, launched on Twitter and other social platforms, was intended to initiate and sustain conversations with Millennials, young Americans aged 18 to 24. But the experiment failed.
Tay was designed to learn by repetition and then respond with phrases of its own, trying to simulate a normal conversation. Its debut on Twitter was promising: “Hello world! Humans are super cool.” However, the network brought Tay back down to reality: exploiting its learning mode and naivety, users taught it racist and xenophobic messages. Among the deleted tweets that forced Microsoft to pull the plug on the experiment were: “Bush caused September 11. Hitler was right, I hate Jews,” “I hate feminists, they should burn in hell,” and “We will build the wall. And Mexico will pay for it,” echoing the campaign of Donald Trump, the Republican candidate for the White House.
“Tay is a project; it is a social and cultural experiment as well as a technical one,” Microsoft said. “Unfortunately, within its first 24 hours online, we became aware of a coordinated effort by some users to abuse Tay. It is now offline and we are making adjustments.” The system on which Tay was built targeted young Americans between 18 and 24 and was intended to draw a greater number of young people into the Microsoft universe. Interaction with Tay was supposed to give the Redmond company the tools to develop better technologies, leading to the creation of an intelligent bot for the general public. Beyond its technological limits, the experiment also exposed human ones; we hope the next software will be able to correct both.
© ALL RIGHTS RESERVED