AI Doomsday Scenarios Are Being Debated By Specialists

Image via business.financialpost.com

AI doomsday scenarios are not something we should take lightly. As time marches on and the day when true AI (artificial intelligence) arrives draws ever nearer, we should focus more of our attention on what could happen if this new intelligence were to go rogue and target us for annihilation.

The obvious answer is not to create it in the first place, right? Not really. We all know that halting its development will not happen, no matter how strongly some might oppose it, and once it becomes a reality, AI could change the world for the better in a fraction of the time it would take us on our own.

In any case, the concept of an AI doomsday is not out of the question, and to that end, a group of experts gathered at Arizona State University for the ‘Envisioning and Addressing Adverse AI Outcomes’ workshop to debate the worst-case scenarios that could unfold if AI ever became a threat to humanity.

“There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology,” says AI scientist Eric Horvitz.

Eric Horvitz is a firm supporter of AI and its development, with a strong belief in its tremendous potential to improve nearly every aspect of our lives. But he is also pragmatic enough to recognise the potential downsides.

The roughly 40 scientists, cyber-security experts and policy makers in attendance were split into two teams. One team was tasked with coming up with every conceivable way an AI doomsday scenario could unfold, while the other was tasked with finding solutions to those scenarios. The scenarios had to be realistic, not purely hypothetical.

Some of the scenarios

One possible scenario took the form of cyber attacks. The idea put forward was that a cyber weapon could carry out an attack while being intelligent enough to make itself hard to discover and destroy. Such attacks could take the form of stock market manipulation and crashes, self-driving cars altered so they no longer recognise road signs, or AI built to rig political elections.

What’s particularly worrying is that not every problem had an adequate solution, which shows just how unprepared we are to face a higher intelligence working against us in any way, shape or form. In the cyber-attack example above, for instance, it would be quite easy for an attacker to cover their tracks by routing the attack through an online game, using unsuspecting players as cover.

This meeting was just the first in a planned series of similar events, and it is a sign that people are starting to take the issue more seriously. John Launchbury, of the US Defense Advanced Research Projects Agency (DARPA), hopes to see concrete agreements and rules regarding cyber war, automated weapons and combat robots.

The point of the meeting was not to scare people or incite fear, but to acknowledge the possible scenarios and get a head start on defending ourselves against them.

(Source)