The Rise of Artificial Intelligence

How Artificial Intelligence will affect our future

Will Artificial Intelligence ever happen?

If you search for Artificial Intelligence (AI) today, you will either be scared by the results you find or amazed by what AI can already do. In this post I will take you on a journey through my opinion of what will happen next with AI.

Our life with AI

Nowadays we use AI everywhere, for example in a smartphone with Siri or even in a fridge. Aaron Saenz describes this presence of AI as the "Jungle of Artificial Intelligence that will Spawn Sentience". So AI is everywhere, and it already supports many of our daily habits. The fact that AI can already predict hypoglycemic events with up to 90% accuracy three hours in advance, generate human faces, or help people with speech problems be better understood is pretty astonishing.

So what will AI bring us in the future?

In my opinion, if AI gets better and starts to self-improve beyond anything we can imagine, we could use "Artificial Superintelligence" (ASI) to help humans with problems we have been unable to solve by ourselves until now. Regarding the article mentioned above, many experts believe we can achieve 100% accuracy in the near future. Christopher Mims describes future healthcare with "The AI Doctor Will See You Now". Nowadays, many people already use Google and other apps (e.g. ada) as a doctor's replacement, and I think these AI-powered apps could improve a lot. The best part, I think, is that AI already helps elderly people fight social isolation and loneliness.
Tim Urban also believes that the future with ASI and nanotechnology will change our lives drastically. ASI could solve "diseases, poverty, environmental degradation, unnecessary suffering of all kinds, and beyond that, it could even lead to immortality".

Wouldn't that be an amazing future? I can't wait.

So when will Artificial Superintelligence happen?

According to a survey conducted by the author James Barrat at Ben Goertzel's annual AGI Conference, Artificial General Intelligence (AGI) will most likely be achieved by 2030, followed by ASI within another 30 years. If we believe these estimates, ASI would happen around 2060, which is what Tim Urban thinks, too.

Kurzweil claims that the Singularity will happen by 2045: "Artificial intelligence will reach human levels by around 2029. [...] 2045, we will have multiplied the intelligence [...]".
If we also take a look at Moore's law, we can see that around the year 2045 we would have the hardware to mass-produce such an AI. Even if the hardware were not small enough, we could still build a computer as fast as a brain. Unlike a human brain, the hardware doesn't have to fit into our skull, so it can be as big as a warehouse.
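To make the Moore's-law argument concrete, here is a small back-of-the-envelope sketch. The doubling period of two years and the 2020 starting point are my own illustrative assumptions, not figures from this post:

```python
# Back-of-the-envelope sketch of the Moore's-law argument above.
# Assumptions (illustrative only): compute doubles roughly every
# 2 years, and we start counting from the year 2020.

def doublings(start_year: int, target_year: int, period_years: float = 2.0) -> float:
    """Number of Moore's-law doublings between two years."""
    return (target_year - start_year) / period_years

def growth_factor(start_year: int, target_year: int, period_years: float = 2.0) -> float:
    """Overall growth in compute, assuming one doubling per period."""
    return 2 ** doublings(start_year, target_year, period_years)

if __name__ == "__main__":
    n = doublings(2020, 2045)           # 12.5 doublings by 2045
    factor = growth_factor(2020, 2045)  # roughly a few thousand times 2020 compute
    print(f"{n} doublings, about {factor:.0f}x the compute of 2020")
```

Under these assumptions, 2045 hardware would offer on the order of a few thousand times today's compute, which is the intuition behind "we would have the hardware by then".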

Will Artificial Superintelligence happen at all?

Today, AI is a buzzword attached to almost every program. But is it really artificial intelligence, or just an algorithm that pretends to be smart? I think it could happen that we reach the level of ASI in the next couple of years - or maybe just another form of AI that we cannot forecast right now.

Comic: Self-Driving Car (Source: Self Driving)

Is AI really our final invention?

Tim Urban and Ben Goertzel think that ASI will be our Final Invention. Can that really be? From my point of view, AI itself will not wipe us out; as history has shown so far, it is people who have inflicted suffering on other people. Nick Bostrom says that AI alone will not be responsible for our extinction, but humanity itself, be it through terrorism or through a third world war that harnesses the superpowers of an AI. Elon Musk says that when we build AI we should do it "very very carefully, very very carefully", which I think too. The growth of AI would be as fast as a train passing the station. Tim Urban puts it even more drastically: he describes existence as a balance beam from which you can fall down on either of two sides, immortality or extinction.

So there will be no machine that becomes a killer robot by itself, without human involvement. Tim Cook thinks so too: he says he is more afraid of "people thinking like machines than machines thinking like people". In the near future, every user could become dependent on the machine and lose their own will, something we can already observe today.
Could you live without AI? I couldn't.

"Cogito ergo sum" (I think, therefore I am)

In most science fiction films, AI is depicted as an evil being with consciousness. Grady Booch says that AI is "something that offers the illusion of intelligence". An AI will never reach consciousness; it will only pursue the goal given to it by its developers. Thus the machine has no morality of its own. This is very important to keep in mind, because a being without morals could extinguish mankind in order to perfect its goal.

Devin Gonier says in his TED Talk "Morality and Artificial Intelligence" that we should really care about the morality of AI; otherwise we could face the end. He also points out that developers should not create a conflict between the goal given to the AI and human moral goals. Humanity itself has no common moral code - what is good or evil - so how should an AI behave?

What does this mean for the future of mankind?

Because a superintelligence would have no consciousness of its own, it is our responsibility to limit its power and to understand the dangers of the technology in advance. Otherwise, the end of mankind could indeed be reached around 2060.

Humans are bad at predicting the future, as we also saw when we dreamed of flying cars, and 2060 is far, far away for us. We should focus on the good that AI can bring us, like helping humanity, rather than scaring people. But we should always keep the consequences in mind.

"Artificial Intelligence may be the last great human invention, let's ensure it's the last great human invention for the right reasons" - Devin Gonier

"The challenge presented by the prospect of superintelligence, and how we might best respond is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face." — Nick Bostrom