Should we be worried about machine superintelligence?

I wasn't planning on writing <a href="https://archive.uwimprint.ca/article/4847-an-overview-of-artificial-intelligence-today">another piece</a> on artificial intelligence (AI), but it seemed almost wrong to cover the topic without talking about the singularity and its potential existential threat to humanity. Normally this sort of discussion is confined to the imaginative realm of science fiction, but a string of public comments warning about the dangers AI could pose has brought the subject into the media spotlight.



Last October, during an interview at MIT, SpaceX and Tesla founder Elon Musk called AI our biggest existential threat and compared developing it to "summoning the demon." Then, midway through the following month, Musk reinforced those comments in an email to publisher John Brockman that, while intended to be private, ended up posted in an online discussion about the threats of AI. He stated:



"The pace of progress in AI (I'm not referring to 'narrow' AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most."



He went on to note that he is involved in the industry himself and is not crying wolf.



Musk is referring to general AI: AI built to learn and do anything, as opposed to "narrow" AI, which includes everything I discussed in last week's article, namely software designed to solve a specific problem or set of problems. General AI is itself a step removed from autonomous general AI, the kind of sentience that designed the Matrix, tried to kill John Connor, and caused Theodore Twombly to fall in love with an operating system.



Musk's comments brought the problem of superintelligent computers into the spotlight and sparked a frenzy of media coverage. Since his remarks, Stephen Hawking and Bill Gates have both stated that they are worried about the problem. Hawking told the <em>BBC</em> in December that "The development of full artificial intelligence could spell the end of the human race," and on Jan. 28, during his Reddit Ask Me Anything, Gates wrote that he was "in the camp that is concerned about super intelligence" and that he "agree[s] with Elon Musk and some others on this and do[es]n't understand why some people are not concerned."



Gates, Hawking, and Musk are all luminaries in their fields and have achieved seemingly impossible goals. It is no wonder, then, that when they are in agreement (ignoring some details) about the existential threat of superintelligent computers, people take them seriously.



None of them, however, are AI experts.



In both industry and academia, experts are not nearly as worried. Take Vicarious co-founder D. Scott Phoenix, who asserts that not only are we still in the earliest stages of developing AI, but that sentience isn't something that would simply happen: a long, iterative process would have to play out before we even got close. Yoshua Bengio, head of the Machine Learning Lab at the University of Montreal, added: "We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that."



Recognizing that UW is home to a number of AI experts, <em>Imprint</em> asked Prof. Shai Ben-David, whose research focus is statistical and computational machine learning, about the existential threat of AI. He immediately dismissed the idea, attributing the recent comments to sensationalism for the sake of media attention and adding that "[experts] can make very strong and grounded claims why it will never happen." Those claims rest on the idea that intelligence, as it exists in any living creature, is the result of a long sequence of biological evolution, and that replicating it on any conceivable computing device would require an infeasibly colossal amount of time to process the necessary information.



He went on to explain that the types of "narrow" AI being developed now may be very good at a few well-defined tasks, but that such progress is not related to the development of autonomous general AI; they are different problems.



Despite disparate views on the singularity, Musk's assertion that AI should be more tightly regulated enjoys wide support, including from a number of researchers. In its many current forms, AI is an extremely powerful tool and, questions of sentience aside, failing to regulate its use could lead to dangerous applications.



It is impossible to predict the future, but it is safe to say that AI systems will continue to become more ubiquitous in the coming decades. Even so, reacting hysterically to the claims from Musk, Gates, and Hawking seems premature. Human ingenuity is a powerful force, and one we seem likely to retain as an advantage over our intelligent mechanized counterparts.
