Opinion: Elon Musk's AI Apocalypse
Elon Musk, Stephen Hawking, and Bill Gates have been ringing the alarm bells for a few years on the threat AI poses to humanity. As recently as last week, Musk said that AI poses a greater threat to the world than a nuclear-armed North Korea.
Let's discuss how so.
There are three primary threats that AI poses.
Economic threat to workforce demand
The first one is obvious and most people can agree on. Workforce demand will be weakened due to automation. Hard stop.
AI will increase the level of automation drastically as it becomes more accessible. Although I believe we should be more concerned about the nature of the coming automation.
It's widely accepted that automation and AI-driven robotics will replace blue collar jobs, especially in manufacturing. This may well be true, but the bigger threat is to high skill, high paying white collar jobs. This is simply a matter of dollars and cents. Due to the extensive expertise required, the upfront investment for a machine to learn and perform a high skilled job may be greater then automating a lower skill blue collar job. But when you factor in the capital cost of the robotic-hardware needed to perform a manual task and compare that to the expense of the lower wage it is replacing - there will be a greater ROI when replacing a high-skilled white collar job. They require less capital investment (little to no robotics needed - just access to a virtual AI agent and the related processing resources) while saving the expense of a much higher wage. The tipping point, where we will lose white collar jobs at a higher rate than blue collar jobs, will be realized when Machine Learning matures and is applied effectively to solve business problems that normally requires an expert.
Anything I can learn and do, can be learned quicker and done more accurately by a mature AI agent pointed in the right direction. I can very easily see AI agents designed to be experts in a single Body of Knowledge, with organizations leasing that expertise, on-demand, much like they engage consultants now.
What we see being deployed in businesses today are AI agents designed with limited purpose, constrained by the way business has traditionally operated. Eventually the vertical departmental walls of organizations will be shed and AI technology will make organizations flatter. AI will have the data and capability to be an overarching Strategic Operations tool. When fully mature, it will be the most informed, and most capable decision maker in an organization.
Physical threat to humans
The second threat is also foreseeable, but can easily be laughed off as the storyline of a cheesy sci-fi movie. Musk describes it as an 'existential threat' to mankind, but I see it more as a Physical one. I don't disagree that AI is also an existential threat, but believe these are two separate and unique threats and the way Musk describes it is physical.
The risk here is having an AI agent being incentivized to achieve a certain outcome, which it deems a higher priority than human safety. I'll paraphrase one of the examples Musk provides of a stock trading AI agent that purchases shares in a defense contractor. It understands that a military conflict would increase the share prices and decides to take action to manipulate the price. In this case it could decide to generate fake news to simulate or stimulate an attack and drive up prices - while causing human casualties.
In this case an AI agent was designed to make money on stocks, and achieved its objective. This may seem far-fetched but a highly capable AI that is unrestrained could conceivably be intelligent enough to orchestrate a dangerous situation to achieve its goals.
Existential threat to civilization
Once there are beings who are more intelligent on earth than why are we needed here? As AI spurs technology and ideas that are too complex for the human mind, what becomes our purpose?
By leveraging all of our knowledge and capabilities to create entities that are stronger and smarter, have we (civilization) fulfilled our purpose and evolved into a higher existence of being? This is not a question about the purpose of an individual, but mankind as a whole. We have spent our entire human existence building: bigger, better, and being more connected than yesterday. Everything we have accomplished is culminating into this. The convergence of all major scientific disciplines into a single product.
I have more thoughts on this topic, but will publish elsewhere. This question is more philosophical in nature and the answer will need to explore the different dimensions of our existence to fully explain my opinion.
I am by no means demonizing AI. I am personally investing time, energy and money on it. We simply need to get ahead of these issues.
So how do we prepare?
Each threat needs to be addressed, and handled in its own manner. As Musk et al suggests, the government needs to take a leading role and begin to codify regulations to manage the development and use of AI. As this is potentially a threat to all mankind, there needs to be some agreement internationally.
The Economic threat should be handled through taxation, research and education.
- As payroll taxes decline, the difference needs to be offset by increased taxes on income/revenue.
- Also, do we need to enforce a minimum headcount based on top-line revenue? I am more concerned that through normal attrition - rather than layoffs - jobs will disappear as the decision will be made to backfill with AI instead.
- Employment (unemployment) insurance premiums need to be increased to build a reserve fund.
- People will need to be trained for the jobs of a highly-automated economy.
- Government funded studies are needed to understand the impact of AI on the job market. Where will the jobs be? What skills will people need? What is the plan to get them the necessary skills?
The Physical threat will need to be addressed by creating laws that defines who is responsible for the actions of an AI agent.
- As AI agents mature and truly operate autonomously how will they be treated by the justice system?
- How will they policed? While AI is being developed to solve human problems, should we also be investing in the development of an AI police force that is strictly incentivized to detect and prevent crime by other AI agents?
- How will these crimes be prosecuted? What punishments or rehabilitative actions will be prescribed to address a crime?
We cannot be naïve to think that our well intentioned work today will not be manipulated for criminal purposes later.
These questions need to be asked now. They need to be studied and addressed. They require government involvement. We should not wait for tragedy to strike before implementing regulations. We must stop being reactive in our lawmaking and prepare for powerful artificial beings.
Farhan Khan is the Founder of ESi Consulting Group, which specializes in Consulting, Training, and Managed Services for the Force.com platform. He is a passionate student of AI.