Recently, several of the largest tech companies in America formed a coalition to address the issue of artificial intelligence. Under our current economic framework, there is no ethical path forward on this issue, nor is there a safe path forward, given our own human intellectual limitations.
So it’s kind of a pickle.
You may believe I am taking an alarmist stance on the issue, but it is warranted given the implications.
The economic argument has been made for generations in reference to automation and the unequal distribution of wealth and labor it produces. Ever since we learned how to automate processes, workers have lost value and leverage in the affected fields, while those with enough wealth to own or develop their own machines have gained significant boosts in productivity and labor hours saved.
Artificial intelligence that could continuously learn, with an infallible memory and no need for sleep, would produce technological breakthroughs we currently deem impossible. It would simply design machines and processes to automate millions of jobs. It would render medical research teams utterly obsolete and destroy our intellectual workforce, because it would run circles around them. If this technology were harnessed and owned by a small group of individuals or corporations, it could tilt the balance of global power in a matter of months.
Didn’t mean to get all doom and gloom on you there…
If we accept this premise, then we must accept the unequal economic outcomes and unimaginable levels of unemployment that would inevitably follow. This would call for a reshaping of our societal ideals concerning work, income, and the role of work in human life as a whole. I don't think we have the ability to adapt quickly as a society, and this world once dreamed about in science fiction is fast approaching reality.
But is that an issue? Do we want to be pushing against change?
The other, even more pressing question is whether we can move toward automation and artificial intelligence (AI) safely. A computer built with the ability to teach itself and build upon the internet's vast wealth of knowledge poses many dangers to the world. The first is the obvious movie scenario: we come between the AI and whatever it determines its role to be. That one feels a little unserious to focus on, however, regardless of how real the threat may become one day.
I mean… let’s get real… it’s already happening. Remember Google’s deep learning research? Maybe not…
But on another note…
The other, much more plausible and urgent scenario is a hostile government developing AI and, within a matter of years, becoming the world’s foremost superpower based on the technological and strategic advantages this powerful force discovers. Computers can take in information at rates reportedly up to 20,000 times faster than a human brain can keep up with; given the right parameters and information, an AI could outpace entire government operations as large as the Manhattan Project. The threat of this power falling into the hands of, or being developed by, the wrong groups is truly a terrifying prospect.
I don’t have any clear answers here. The train has left the station on the development of AI, and it’s only a matter of time before it arrives. I do, however, think we need to start taking this threat, or benefit, to humanity seriously. We are in an age where science fiction becomes reality at an increasingly rapid pace, and it’s time for us to plan for the unavoidable implications.
Please take the time to share this with your buddies on social media. It will help fight off the AI apocalypse if you do.