AuthorTopic: Microsoft Twitter AI Goes Full Nazi  (Read 1120 times)

Offline RE

  • Administrator
  • Chief Cook & Bottlewasher
  • *****
  • Posts: 42050
Microsoft Twitter AI Goes Full Nazi
« on: March 25, 2016, 01:31:32 AM »
This is funny!   :icon_mrgreen:

RE

http://www.zerohedge.com/news/2016-03-24/microsofts-twitter-chat-robot-devolves-racist-homophobic-antisemitic-obama-bashing-p

Microsoft's Twitter Chat Robot Quickly Devolves Into Racist, Homophobic, Nazi, Obama-Bashing Psychopath

Submitted by Tyler Durden on 03/24/2016 17:58 -0400

 

Two months ago, Stephen Hawking warned humanity that its days may be numbered: the physicist was among over 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the ongoing "military artificial intelligence arms race."

Overnight we got a vivid example of just how quickly "artificial intelligence" can spiral out of control when Microsoft's AI-powered Twitter chat robot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest and genocide-promoting psychopath when released into the wild.

For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams in an effort to conduct research on conversational understanding. It was meant to be a bot anyone could talk to online. The company described it as “Microsoft’s A.I. fam from the internet that’s got zero chill!”

Microsoft initially created "Tay" in an effort to improve the customer service on its voice recognition software. According to MarketWatch, “she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

The chat algo is able to perform a number of tasks, like telling users jokes or offering up a comment on a picture you send her. But she’s also designed to personalize her interactions with users, answering questions or even mirroring users’ statements back to them.

This is where things quickly turned south.

As Twitter users quickly came to understand, Tay would often repeat racist tweets back with her own commentary. What made things even more uncomfortable, as TechCrunch reports, is that Tay’s responses were developed by a staff that included improvisational comedians. That meant that even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.
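Microsoft never published Tay's implementation, but the mirroring behavior described above can be sketched as a toy echo bot. Everything here is illustrative: the "repeat after me" trigger phrase, the function names, and the blocklist are all assumptions, shown only to make clear why unfiltered echoing is exploitable by trolls.

```python
# Hypothetical sketch -- not Tay's actual code, which was never released.
# Shows the core flaw: a bot that mirrors arbitrary user text verbatim
# will say anything a user feeds it, unless a content check runs first.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real filter needs far more

def naive_reply(message: str) -> str:
    """Echo bot with no filtering: repeats whatever follows the trigger."""
    trigger = "repeat after me:"
    if message.lower().startswith(trigger):
        return message[len(trigger):].strip()  # echoed verbatim -- the exploit
    return "tell me more!"

def filtered_reply(message: str) -> str:
    """Same bot with a minimal blocklist check before echoing."""
    reply = naive_reply(message)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "i'd rather not repeat that"
    return reply
```

The blocklist approach is itself naive (trivially evaded by misspellings), which is part of why moderating a learning chatbot in the wild is so hard.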

Some examples (the embedded tweet screenshots are not preserved here):

This was just a modest sample.

There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, and more. As some noted, "Tay really lost it," and the biggest embarrassment was for Microsoft, which had no idea its "A.I." would implode so spectacularly, right in front of everyone. To be sure, none of this was programmed into the chat robot; it was immediately exploited by Twitter trolls, as expected, and demonstrated just how unprepared for the real world even the most advanced algo really is.

Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage dubbed "Godwin's law": as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.

Microsoft apparently became aware of the problem with Tay’s racism, and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on.

Humiliated by the whole experience, Microsoft explained what happened:

    “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.

Finally, Tay "herself" signed off as Microsoft went back to the drawing board:

We are confident we'll be seeing much more of "her" soon, when the chat program will provide even more proof that Stephen Hawking's warning was spot on.
Save As Many As You Can

Offline Surly1

  • Master Chef
  • *****
  • Posts: 18654
Re: Microsoft Twitter AI Goes Full Nazi
« Reply #1 on: March 25, 2016, 02:49:42 AM »


Maybe not quite today, eh Ray?
"...reprehensible lying communist..."

Offline RE

  • Administrator
  • Chief Cook & Bottlewasher
  • *****
  • Posts: 42050
Re: Microsoft Twitter AI Goes Full Nazi
« Reply #2 on: March 25, 2016, 03:09:19 AM »
Maybe not quite today, eh Ray?

When he's elected, The Donald will use this AI program as his Policy Advisor Algo.

RE
Save As Many As You Can

 
