
Fear artificial stupidity, not artificial intelligence

By: Prayag Nao


Stephen Hawking's vision of the future? (Image: © Warner Bros/Everett/Rex)


Stephen Hawking thinks computers may surpass human intelligence and take over the world. We won't ever be silicon slaves, insists an AI expert

It is not often that you are obliged to proclaim a much-loved genius wrong, but in his alarming prediction on artificial intelligence and the future of humankind, I believe Stephen Hawking has erred. To be precise, and in keeping with physics – in an echo of Schrödinger's cat – he is simultaneously wrong and right.
Asked how far engineers had come towards creating artificial intelligence, Hawking replied: "Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
In my view, he is wrong because there are strong grounds for believing that computers will never replicate all human cognitive faculties. He is right because even such emasculated machines may still pose a threat to humankind's future – as autonomous weapons, for instance.
Such predictions are not new; at the University of Reading, professor of cybernetics Kevin Warwick raised this issue in his 1997 book March of the Machines. He observed that robots with the brain power of an insect had already been created. Soon, he predicted, there would be robots with the brain power of a cat, quickly followed by machines as intelligent as humans, which would usurp and subjugate us.

Triple trouble

This is based on the ideology that all aspects of human mentality will eventually be realised by a program running on a suitable computer – so-called strong AI. Of course, if this were possible, accelerating technological progress would eventually trigger a runaway effect: AI systems would be used to design ever more sophisticated AIs, compounding on top of Moore's law, the observation that raw computational power doubles roughly every two years.
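Even before any self-improvement feedback is added, the doubling the paragraph above quotes is already exponential. A minimal sketch, using the loose two-year doubling period stated in the text (the function name and figures are mine, for illustration only):

```python
def moores_law_growth(years: float, doubling_period: float = 2.0) -> float:
    """Relative computational power after `years`, starting from 1.0,
    assuming power doubles once every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Ten doublings in two decades: a thousand-fold increase.
print(moores_law_growth(20))  # 2**10 = 1024.0
```

The runaway scenario the strong-AI camp imagines would shorten the doubling period itself over time, making growth faster than this already-exponential baseline.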
I did not agree then, and do not now.
I believe three fundamental problems explain why computational AI has historically failed to replicate human mentality in all its raw and electro-chemical glory, and will continue to fail.
First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.
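The point of Searle's argument can be made concrete with a toy program (this sketch is my own invention, not Searle's original setup): it answers questions about a story by pure symbol lookup, with no model of meaning anywhere in it, yet its replies can look appropriate from the outside. The rulebook entries below are invented for illustration.

```python
# A hand-written "rulebook" mapping question symbols to answer symbols.
# Nothing in this table, or in the code that consults it, understands
# the story the questions are about.
RULEBOOK = {
    "who went to the restaurant?": "the man went to the restaurant.",
    "did he eat the hamburger?": "no, he left without eating it.",
}

def chinese_room(question: str) -> str:
    """Return a canned reply by symbol matching alone."""
    return RULEBOOK.get(question.lower().strip(), "i cannot answer that.")

print(chinese_room("Who went to the restaurant?"))
# -> the man went to the restaurant.
```

A more elaborate program would have a vastly larger rulebook, but Searle's claim is that scaling the table up changes nothing: appropriate responses never amount to genuine understanding.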
Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness.
Lastly, computers lack mathematical insight. In his book The Emperor's New Mind, Oxford mathematical physicist Roger Penrose argued that the way mathematicians provide many of the "unassailable demonstrations" to verify their mathematical assertions is fundamentally non-algorithmic and non-computational.

Not OK computer

Taken together, these three arguments fatally undermine the notion that the human mind can be completely realised by mere computations. If correct, they imply that some broader aspects of human mentality will always elude future AI systems.
Rather than talking up Hollywood visions of robot overlords, it would be better to focus on the all too real concerns surrounding a growing application of existing AI – autonomous weapons systems.
In my role as an AI expert on the International Committee for Robot Arms Control, I am particularly concerned by the potential deployment of robotic weapons systems that can militarily engage without human intervention. This is precisely because current AI is not akin to human intelligence, and poorly designed autonomous systems have the potential to rapidly escalate dangerous situations to catastrophic conclusions when pitted against each other. Such systems can exhibit genuine artificial stupidity.
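The escalation worry above can be illustrated with a deliberately crude toy model (my own construction, not drawn from the article): two automated agents, each programmed only to respond to the other's last action by going one step further. Neither is remotely intelligent, yet their interaction spirals without bound.

```python
def run_escalation(steps: int) -> int:
    """Two reactive agents, each topping the other's last move by one.
    Returns the highest escalation level reached after `steps` rounds."""
    a_level = b_level = 0
    for _ in range(steps):
        a_level = b_level + 1  # A reacts to B by escalating one step
        b_level = a_level + 1  # B reacts to A the same way
    return max(a_level, b_level)

print(run_escalation(5))  # after 5 rounds the level has climbed to 10
```

No foresight, no malice, no intelligence: just two simple feedback rules producing a runaway outcome. That, in miniature, is the "artificial stupidity" the paragraph describes.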
It is possible to agree that AI may pose an existential threat to humanity without ever having to imagine that it will become more intelligent than us.

Mark Bishop is professor of cognitive computing at Goldsmiths, University of London, and serves on the International Committee for Robot Arms Control



