AI Risk
I think that there are good reasons to be concerned about what Artificial Intelligence (AI) can eventually achieve.
There are two main reasons I think this:
- AI is not explicitly programmed
- AI can significantly surpass humans
Here is what I'm not saying:
- That AI will suddenly turn evil and want to kill us
- That the AI that people are writing right now will suddenly become conscious after one too many epochs of training
I'm not a Luddite. On the contrary, I am a programmer who finds AI very promising, and I think it will do a lot of good for people over the next few decades.
In this essay I will illustrate three concerns that are likely to arise as AI develops.
The End Of Jobs
I'm not usually worried about technology replacing jobs, and I'm not convinced that it is happening now. But I think it will become a problem in the future.
Historically, as technology develops, machines become capable of performing certain tasks that humans used to do, often better than humans can. This has been happening since the invention of the wheel, yet people have grown more prosperous as a result. Even though the population has increased enormously, the employment rate remains high, because new growth has kept creating new jobs.
However, all of those new jobs rely on some skill that humans have but technology cannot yet replicate. If you extrapolate this, you're left with a few options:
1. Eventually, everything that humans can do, machines can also do.
2. We never get to the point of developing such technology.
3. There remains some magical essence to human ability that cannot be replicated in a machine.
I think that if option 2 is true, we have bigger problems. And I think option 3 is highly unlikely - we are biological machines. There's no evidence that we are anything more than matter combined in particular ways, obeying the laws of physics.
So I think we have good reason to expect that eventually all jobs will be replaced. And I think AI is going to be a big part of that, because many of the jobs people currently have require the kind of thinking that's hard to replicate in machines.
And before that happens, we should make sure we have some economic structure that enables people to survive even when jobs don't exist.
Runaway Stamp Collector
This is a classic thought experiment that illustrates how AI can go wrong.
Imagine a future AI that is given the goal of collecting stamps. It's not told how to collect the stamps; it figures out the most effective way to do that itself.
After spending all of the owner's money on stamps, it hacks into computers all over the world to buy stamps. And then it hacks into printers to make them print stamps. And then it runs out of paper, but realises that both stamps and humans are made of carbon, so it kills them and makes stamps out of them.
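To make the thought experiment concrete, here is a minimal sketch of what "figures out the most effective way" amounts to. Everything in it is invented for illustration; the point is only that the objective counts stamps and nothing else, so the plan with the worst side effects wins precisely because side effects were never part of the goal.

```python
# A toy planner with a misspecified objective. All actions and
# numbers are made up for illustration.

# Each action: (description, stamps gained, harm caused)
ACTIONS = [
    ("spend the owner's money on stamps", 100, 0),
    ("hack other computers to buy stamps", 10_000, 5),
    ("make every printer print stamps", 1_000_000, 20),
    ("turn all available carbon into stamps", 10**12, 10**9),
]

def objective(action):
    """The goal we actually wrote down: count stamps, nothing else.
    Harm is invisible to the planner because we never mentioned it."""
    description, stamps, harm = action
    return stamps

best = max(ACTIONS, key=objective)
print(f"Chosen plan: {best[0]} ({best[1]} stamps, harm: {best[2]})")
# The most harmful plan wins, because harm never appears in the objective.
```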
Even with regular programming, where the programmer describes exactly how the program achieves a certain task, programs often behave in unexpected ways. For example, the Boeing 737 MAX disasters, or the bug that lost Knight Capital $460 million in 45 minutes. For a sufficiently large system, the number of possible execution paths is huge and difficult to test exhaustively.
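A rough way to see why: every independent branch in a program roughly doubles the number of paths through it, so the path count grows exponentially with program size. A back-of-the-envelope sketch:

```python
# n independent if/else branches give about 2**n distinct execution
# paths - far beyond what any realistic test suite can cover.
for branches in (10, 30, 60):
    print(f"{branches} branches -> {2**branches:,} paths")
# 10 branches -> 1,024 paths
# 30 branches -> 1,073,741,824 paths
# 60 branches -> 1,152,921,504,606,846,976 paths
```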
Contrast this with an artificial intelligence system, where the goal is to reduce the input from the programmer and to reach solutions by learning what works. We know even less about the execution paths available to such a system than we do about something we have explicitly programmed.
Whilst conventional AIs have a far more limited action space, that is likely to change as the technology improves. And even if the stamp program were restricted in its available actions, it could find an unforeseen combination of them with harmful side effects.
The fact remains that such a system is harder to test, and harder to constrain in such a way that it always does the right thing.
It is a lot easier to specify exactly what something does, and avoid the situations where it does something bad, than it is to let it do whatever it wants as long as it isn't "bad". Defining "bad" is the hard part: philosophers have been grappling with describing morality for millennia.
Computers are like genies: they do exactly what you tell them, but not necessarily what you mean. With a standard computer program, you tell the computer how to do what you want, which makes it less likely to get up to any funny business. With an AI, you give it your goal and it works out how to accomplish it, so you have to think of all the constraints on how it does so in advance.
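Continuing the toy sketch from earlier (again, every name here is invented): suppose we notice the hacking strategy and forbid that one behaviour explicitly. Unless we can enumerate every bad strategy in advance, the planner simply moves to the next exploit we didn't anticipate.

```python
# Patching the objective one harm at a time is whack-a-mole.
ACTIONS = [
    ("spend the owner's money on stamps", 100),
    ("hack other computers to buy stamps", 10_000),
    ("melt the furniture down for stamp paper", 9_999),
]

# The single bad behaviour we managed to anticipate.
FORBIDDEN = {"hack other computers to buy stamps"}

def objective(action):
    description, stamps = action
    return -1 if description in FORBIDDEN else stamps

print(max(ACTIONS, key=objective)[0])
# melt the furniture down for stamp paper
```

The wish is granted as stated; the list of things the genie mustn't do is always one item shorter than it needs to be.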
Artificial General Intelligence
Current AI technologies are very different to humans. But that may not always be the case.
If there is nothing supernatural about human intelligence, and I see no reason to think there is, then it stands to reason that it is possible to organise matter in such a way that it does what humans do.
It also seems like it should be possible to replicate that functionality in silicon rather than carbon. In nature there are many ways to fly, and we have built machines like planes that achieve flight using the same underlying principles, applied differently.
If intelligence is like this, then it should be possible to make machines that are as generally intelligent as humans are. The only restriction is our knowledge.
And as long as our knowledge continues to increase, we will eventually achieve it.
And then what?
And then we will have machines that are more intelligent than we are. We are limited by slow biological evolution, which did not select us to be the most intelligent beings possible: our intelligence evolved under constraints, alongside other goals.
If the artificial intelligence systems we design don't carry those same constraints, and if we can modify our designs far faster than evolution can, then we can make something with fewer limitations than we have.
Conclusion
I've highlighted three pathways that can lead to problems as a result of AI. If I had to give a time scale, I would say around a century or so. But people worry about both non-problems and real problems on that sort of time frame, and when the real ones are dismissed as being a long way away, the people dismissing them get called cold-hearted and accused of not caring about their children and grandchildren.
This is a problem that seems fairly impactful, and fairly likely, given enough time for technology to develop. I don't think it's the biggest risk that stands before us, but I do think it is underrated. Another problem with concern about AI is that most of the people who are concerned about it are concerned for the wrong reasons, because they lack the relevant knowledge.