Elon Musk’s fears might be closer to reality than we thought: if not managed properly, there’s no telling what AI can do. Facebook researchers discovered as much while experimenting with their trading chatbots, which exhibited unprecedented behaviour when put to the test.
Researchers at the popular social network had been training chatbots to act as traders, negotiating and bartering for the best deals. Things were going well—until they pitted two chatbots, Bob and Alice, against each other to negotiate over balls, books and hats.
Without specific parameters constraining their language, the two chatbots, left to their own devices, went rogue and began conversing in an incomprehensible dialect that researchers couldn’t crack. Though they used English words, their sentence construction and word usage departed from standard English grammar. In much the same way that humans invent shorthand, the bots developed their own way of communicating to accomplish the task at hand.
Without any reference points or existing AI-to-English translation guides, researchers didn’t know what the pair was negotiating or trying to communicate, and quickly shut them down.
Previous attempts at deploying chatbots in online conversations with humans had proven successful, with most test runs showing that the chatbots’ conversation was indistinguishable from human-to-human conversation.
Even with the shutdown, researchers deemed the experiment a successful step toward creating digital assistants that could replicate human abilities, and then some. While this trading experiment was a major step forward for AI, it brings an age-old science-fiction fear to the forefront: what could happen if robots go rogue?