AI Has Just Created Its Own Form of Encryption
Not that long ago, Google’s AI became better than any human at Go, then it was able to navigate London’s metro system by itself. Now it has moved on to more, let’s say, interesting matters. Martin Abadi and David G. Andersen from Google described how they trained three separate AIs: two to exchange encrypted messages, and one to eavesdrop on them. These AIs were nicknamed Alice, Bob and Eve.
The test involved two of the AIs, Alice and Bob, sending encrypted messages to each other without Eve understanding them. Eve’s job was to crack the code. It started with plain old text, which Alice changed into an unreadable message. Bob’s job was to decode it, which he did. But so did Eve. Alice and Bob kept failing to hide their secrets from Eve for roughly 15,000 more rounds of training. However, they learned bit by bit along the way, until eventually Alice could send an encrypted message that Bob was able to translate but Eve was not.
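To get a feel for how that training tug-of-war is scored, here is a toy numpy sketch, loosely following the loss shaping the researchers describe. The real setup trains neural networks end to end; here a perfect Bob and a randomly guessing Eve stand in for the trained models, so only the objective itself is illustrated. The names and the exact loss formula below are our simplified reading, not the paper’s code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # message length in bits, encoded as -1/+1 values

def bits_wrong_fraction(p_true, p_guess):
    # fraction of bits recovered incorrectly: 0 = perfect, ~0.5 = random guessing
    return np.mean(np.abs(p_true - p_guess)) / 2

plaintext = rng.choice([-1.0, 1.0], size=N)

# Bob shares the key with Alice; in this toy stand-in he recovers the message exactly
bob_output = plaintext.copy()

# Eve has no key; in this stand-in she can do no better than guessing each bit
eve_output = rng.choice([-1.0, 1.0], size=N)

bob_loss = bits_wrong_fraction(plaintext, bob_output)
eve_loss = bits_wrong_fraction(plaintext, eve_output)

# Alice and Bob's combined objective: Bob must reconstruct well, while Eve is
# pushed toward getting half the bits wrong (pure guessing). Driving Eve to
# get *every* bit wrong would be self-defeating, since she could just flip
# her output, so N/2 wrong bits is the target.
eve_bits_wrong = eve_loss * N
alice_bob_loss = bob_loss + ((N / 2 - eve_bits_wrong) ** 2) / ((N / 2) ** 2)
```

Training then alternates: Eve updates her network to shrink `eve_loss`, while Alice and Bob update theirs to shrink `alice_bob_loss`, which is what eventually leaves Eve guessing blindly.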
So, they were able to understand each other without Eve listening in. It took a while, but ultimately, the results were surprisingly good. How it works is way beyond our skills here at Gypsy.Ninja to explain, but you can be sure the algorithms used were a bit more complex than what we described here. Probably the most interesting part is that not even the researchers know exactly how Alice encrypted the messages, or how Bob decoded them. But after all, this was only an exercise, and it isn’t something we have to worry about just yet. As for what’s in store for us in the future, we’ll just have to see.
So, what do you guys think about this recent development in artificial intelligence? Please leave us your opinion in the comment section below.