We should keep A.I. on a leash, or risk grave consequences
The need for A.I. regulation must be taken seriously, from both a local and a global viewpoint. One way to address it could be an international body combined with a global A.I. treaty.
Recently, Elon Musk made an almost apocalyptic remark about A.I. in response to a statement from Vladimir Putin, in which Putin predicted that whichever country leads the way in A.I. research will come to dominate global affairs. Musk, in turn, said this:
China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.— Elon Musk (@elonmusk) September 4, 2017
Musk, who has been vocally involved in discussions about the future and impact of A.I., is generally pessimistic about the implications of its widespread use and the autonomy it might develop, seeing danger in letting artificial intelligence roam free without restrictions.
Not unlike the scenarios from the “Terminator” blockbuster films, an untamed A.I., or a ‘super-intelligence’ that surpasses our own human intelligence, could decide that human lives matter less than executing a given task. A present-day example: if we told a super-intelligence to solve our global warming problem, both ideologically and practically, it might identify us as the primary culprit and deem it necessary to get rid of most of us to restore balance to our ecosystem. Even if we tried to confine this ‘entity’ to its own domain, it could find it necessary to hack into nuclear sites and do the job without us ever noticing, because it could do so in the blink of an eye.
The primary concern, of course, is to ensure that any super-intelligence is on our side. But this, in turn, is a complicated matter. If we build such a thing without it accepting and acknowledging our flaws as human beings, the danger will always be that it might disregard them whenever it sees fit.
A video from The Guardian illustrating the super-intelligence paradox
So we need to exert some kind of regulatory control over the development and deployment of A.I. This can be done in several ways. One way, which Elon Musk is already practising, is to make the technology open source, preventing it from falling into closed domains where a lack of transparency would keep the public from gaining insight into how it can and will be used. As he says:
…to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites… (Source)
Another way would be to control the development of A.I. via a multilateral treaty. Just as we have treaties and declarations on matters concerning humanity, from cloning to war, A.I. will need one too, in conjunction with some kind of international body to regulate the use of A.I. and to impose sanctions on nations or governmental entities that do not comply with the treaty. We will need the same for genetic development in view of recent advances in CRISPR/Cas9 gene editing, which could also spell disaster if it ends up in the wrong hands. A.I. is already being used in CRISPR/Cas9 research, although for now in fairly harmless ways. But try to imagine a super-intelligence with the ability to mutate any living organism as it sees fit. The upside, of course, is the capacity to eradicate an enormous number of diseases. But we do not yet fully understand the long-term implications of genetic manipulation (best seen in the agricultural industries’ debates over GMOs), so even what seems a simple intervention by a well-intentioned super-A.I. could become disastrous.
So international regulation should be what we strive for, especially where A.I. is used in conjunction with (but not limited to):
- Gene manipulation, especially in bacteria and viruses
- 3D-printing (also of organs)
- Hacking and/or cyber defence
- Control of basic infrastructure (hospital systems, personal data registries, electricity grids, security systems, etc.)
- Weapons and military defence systems
- Agricultural or food processing systems
And of course, any combination of these would increase the probability of a threat to a local or even the global population. This international regulatory body should also be the epicentre of ethical considerations for any A.I. implementation that could become widespread. Just as some countries have ethical councils for, say, genetics, this institution should be pivotal not only in global regulation but also as an advisor on any issues a country might have concerning the use of super-intelligences. Lastly, it should act as a stop/go gatekeeper for any A.I. implementation in which larger A.I. systems can be combined with, or gain access to, other A.I. systems locally or globally. Just as software review procedures today prevent coders from testing and approving their own applications, so that separate parties review what has been built for security reasons, this institution should act as an intermediary for any combination of or connection between super-A.I. systems. In that way, an independent party could safeguard both the application and the ethics of A.I. across systems and countries.
If we don’t take A.I. seriously, we will soon reach the tipping point where it becomes smarter than us. We have the advantage, since we are the ones in charge for now, so it is time to put regulation into gear and breathe ethics and morality into super-intelligent life before it’s too late.
Image by Mike Wilson
Image by Alex Knight
This article also appears on Medium