Should Artificial Intelligence Be Considered A Threat?

At Dartmouth College in 1956, a handful of computer scientists got together and coined both the term and the basic framework of artificial intelligence. Fifty-eight years later, the same ideas both haunt and intrigue us. Could artificial intelligence really spell the end for humanity?
In 1950, Alan Turing said, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” Since then, mankind has been testing the power and limits of artificial intelligence, or AI, hitting a multitude of milestones along the way. Such progress has made Siri a household name, but do such discoveries hold the power to end the human race? Super genius Stephen Hawking thinks they might, but first let’s start at the beginning.
The Dartmouth Conference of 1956 featured great computer scientists of the era: Claude Shannon, Nathaniel Rochester of IBM, Marvin Minsky, who later co-founded MIT’s Artificial Intelligence Laboratory, and John McCarthy, who coined the term “artificial intelligence” and became one of the field’s original pioneers. These men set out to discover a way in which “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.”
From this point on, interest in and discoveries around AI avalanched through every field. Computers were soon able to solve complex math equations, translate between languages and communicate with each other – and the money poured in. One innovation led to another and the world quickly changed (if you don’t believe me, just watch any movie from the 1980s).
Popular culture has since struggled to keep up with each new wave of technology. As soon as one technology gains ground, another appears and replaces it. This creates a live-or-die culture in which products must evolve quickly or not at all, forcing innovators to compete continuously to produce the next major development in the world of AI.
Constantly pushing technology further has created great convenience in modern life. If you are lost, a computer can help. Need an answer to a question? A computer can help. Need someone to talk to? A computer is there. Technology has come to dominate almost every aspect of our lives, and some extremely smart people on this earth think that this presents a major problem.
Stephen Hawking is world renowned for his contributions to science. He writes books about things the average person doesn’t know exist, let alone understand; general relativity, gravitational singularities and quantum mechanics are only the tip of the iceberg. Last week, in an interview with the BBC, Hawking reopened the subject of AI. He once told a reporter that “success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”
Hawking’s interview focused primarily on the system Intel created to help him communicate with ease. Previously, Hawking’s speech mechanism was painfully slow and laborious, but thanks to advances in predictive-text software, some of that burden will be lifted. The advancement creates quite a paradox: Hawking has warned of the threat AI poses to humanity, yet he relies on technology more than the average individual.
Hawking explains that technology thus far has, in his mind, been primitive and useful, but he still fears that humans could create a machine that surpasses their own reasoning skills. “It would take off on its own, and re-design itself at an ever increasing rate,” he explains. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
This is not the first time we have heard such warnings. Sci-fi authors have long presented us with the idea that we could somehow be dominated by our own accelerating technological advances. Isaac Asimov, along with fellow science fiction writers Robert A. Heinlein and Arthur C. Clarke, simultaneously thrilled and terrified readers with tales of artificial intelligence gone wrong. Asimov’s “Three Laws of Robotics” still carry weight in discussions of machine ethics today.

Isaac Asimov’s “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
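The three laws above form a strict priority ordering: each law yields to the ones before it. As a purely illustrative sketch (the `Action` structure and all names here are invented for this example, not anything Asimov or any real robotics system specifies), the hierarchy might be encoded like this:

```python
# Illustrative only: the Three Laws expressed as a prioritized action filter.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # the action would injure a human
    allows_harm: bool = False       # inaction here would let a human come to harm
    ordered_by_human: bool = False  # a human ordered this action
    endangers_robot: bool = False   # the action risks the robot itself

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders, except where they break the First Law
    # (already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot

# Law 2 outranks Law 3: an ordered action is permitted even if risky.
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
# Law 1 outranks Law 2: an order to harm a human is refused.
print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
```

The point of the sketch is simply that the laws are not three independent rules but a hierarchy, which is exactly what makes the stories built around them so interesting when the priorities collide.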

Along with horror stories of technology gone wrong, authors have theorized a multitude of situations gone awry when man plays creator. Since the beginning of time, tales of woe have warned us of the tremendous consequences of playing God, so it is only natural that the theme extends into technology as well. From robot rights to machine consciousness, almost every avenue has been considered, hopefully providing plenty of warning to the Dr. Frankensteins of artificial intelligence.
But the question remains: could man create his own demise through artificial intelligence? Possibly. Will man create his own demise through artificial intelligence in the near future? Most likely not. We can all rest assured knowing that our time is better spent worrying about terrorists and global warming, both of which pose a far stronger threat than an attack from our favorite android.

Web hosting with Midphase is so effortless, you might suspect that artificial intelligence is involved. We can neither confirm nor deny those allegations. Get the best in web hosting with Midphase today.