Article Offers a Balanced View on AI and the Singularity
Possibly because we are sentient, intelligent beings, discussions about artificial intelligence often swing between extremes of alarm and hyperbole. What makes us unique as humans, at least in our degree of intelligence, can feel threatened when we start granting machines similar capabilities. Whether it is Skynet, Lt. Commander Data, military robots, or the singularity, it is easy to grab attention by touting AI as either the greatest threat to civilization or the dawn of a new age of superintelligence.
To be sure, we are seeing remarkable advances: intelligent personal assistants that answer our spoken questions, services that automatically recognize and tag our images, and many other applications. It is also appropriate to raise questions about autonomous intelligence and its possible role in warfare [1] or other areas of risk or harm. AI is undoubtedly an area of technology innovation on the rise, and it will remain a constant in human affairs for the foreseeable future.
That is why a recent article by Toby Walsh, "The Singularity May Never Be Near" [2], is worth a read. Though only four pages long, it presents a useful historical backdrop on AI and explains why artificial intelligence may not unfold as many expect. As he summarizes the article: