The principle of the programmable modern computer was proposed by Alan Turing in a 1936 paper, in which he showed that a single “Universal Computing Machine” could compute anything that is computable by executing instructions from a program stored on tape. Since then, the astonishing development of computing hardware has taken us from tape to transistors, billions of which we now carry around in our pockets. This progress has made it possible to store far more complex programs, which can execute thousands of trillions of calculations per second. Some programs have now reached the point where they can perceive their environment and take actions to achieve their goals. This is known as Artificial Intelligence, or AI.
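To make the stored-program idea concrete, here is a minimal sketch of a Turing machine in Python: a table of transition rules (the “program”) drives a read/write head back and forth over a tape. The machine, rule names and tape layout below are illustrative assumptions, not anything from Turing's paper; this toy program simply adds one to a binary number written on the tape.

```python
# A minimal sketch of a Turing machine (illustrative, not Turing's notation).
# A table of (state, symbol) -> (write, move, next_state) rules drives a
# head over a tape; "_" stands for a blank cell.

def run_turing_machine(tape, rules, state="start", head=0):
    """Apply transition rules until the machine reaches the halt state."""
    tape = dict(enumerate(tape))          # sparse tape; missing cells are blank
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, skipping blanks
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A tiny "program": increment a binary number, scanning right to left
# and carrying a 1 until a 0 (or a blank) absorbs it.
increment_rules = {
    ("start", "1"): ("0", "L", "start"),  # 1 plus carry -> 0, keep carrying
    ("start", "0"): ("1", "L", "done"),   # 0 plus carry -> 1, carry resolved
    ("start", "_"): ("1", "L", "done"),   # ran off the left edge: new digit
    ("done",  "0"): ("0", "L", "done"),   # walk left over remaining digits
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),   # blank past the left end: halt
}

# Start with the head on the rightmost digit of 1011 (eleven)
print(run_turing_machine("1011", increment_rules, head=3))  # prints 1100
```

The point of the sketch is that nothing about the hardware changes between programs: swapping in a different rule table makes the same machine compute something else, which is exactly the universality Turing described.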
Hollywood has for some time played with the idea of an AI that destroys or enslaves humanity in dramatic fashion. However, the likelihood of that happening is much smaller than Hollywood directors and scriptwriters would have us believe. That is not to say we should not proceed with caution when developing and using AI. Its use will be widespread: in cars, trucks and manufacturing machines, inside computers and smartphones, and potentially in military technology. It is in these areas that the need for caution is most immediate.
AI will provide safer trucks and cars that reach their destinations faster, which is naturally beneficial to society, but the resulting loss of jobs in those industries could constitute a threat to human dignity. It could also replace nurses, soldiers and police officers, roles that require empathy; automating them without it could have serious consequences. Moreover, the amount of information an AI can learn about us could concentrate too much power in the hands of corporations and governments.
Military technology is the most dangerous area AI can influence. Very recently, the Korea Advanced Institute of Science and Technology (KAIST) opened its new research centre for the Convergence of National Defence and Artificial Intelligence. 57 scientists subsequently called for a boycott of the centre, fearing it would be used to create autonomous weapon systems.
However, there is a certain inevitability about the development of artificial intelligence. It is now a matter of ensuring that those with the ability to develop and fund AI research do so with something we should all strive for in ourselves: wisdom. It is important that the machines we create have the right fail-safe mechanisms to protect us, and that their designers are guided by the right motivations from the outset.
Many modern philosophers and scientists, including the late Prof. Stephen Hawking, have considered the prospect of humanity's self-destruction through the creation of an AI. However, this far-off doomsday scenario should not distract us from the immediate social and ethical issues AI raises. For all its dangers, AI should also be recognised for its potential to help humanity tackle many of the problems we face, such as disease and poverty.
Image Credits: By Sujins | Pixabay | CC0