Is Artificial Intelligence (AI) a threat to humanity? Without question, yes. Is it going to take over the world and decide that humans are its (or our own) biggest enemy and kill us all? Unlikely.
The “AI threat” isn’t what sci-fi authors have dramatized over the last several decades. The real threat has to do with, of all things, the economy.
I know, I know. Boring stuff. But that’s why it’s so dangerous. Mundane details are just as capable of destroying your life as any exaggerated evil. (Maybe even more so.)
Here it is at a high level: We’re getting better at producing more things, in less time, with fewer human laborers. However, we need more consumers, with more disposable income, to buy all these things and keep advancement and production going. Obviously, the problem is that having fewer laborers is unlikely to result in more disposable income.
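The mismatch can be made concrete with a back-of-the-envelope sketch. All the numbers below are hypothetical, and `demand_gap` is just an illustrative helper, not a real economic model:

```python
# Toy illustration: automation raises output per worker but shrinks
# the wage pool available to buy that output. Hypothetical numbers.

def demand_gap(workers, output_per_worker, wage, spend_rate=0.9):
    """Units produced minus units the remaining wage pool can absorb."""
    produced = workers * output_per_worker
    # Total wages, times the fraction of wages spent on goods.
    purchasing_power = workers * wage * spend_rate
    price_per_unit = 1.0  # keep prices fixed for simplicity
    consumed = purchasing_power / price_per_unit
    return produced - consumed

# Before automation: 100 workers making 10 units each at a $9 wage.
before = demand_gap(workers=100, output_per_worker=10, wage=9)

# After automation: 40 workers produce the same 1,000 units.
after = demand_gap(workers=40, output_per_worker=25, wage=9)

# Same output, far fewer paychecks: the gap between what gets made
# and what wages can buy more than triples.
print(before, after)
```

Everything interesting is in the asymmetry: production stays flat while the wage pool drops with headcount, so the unsold surplus grows even though nothing got more expensive.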
The danger of AI is how damn efficient it can be. I mean, if there is a way to get blood from a turnip, AI will be responsible for finding it.
For example, let’s talk about one of the most common jobs in the US: truck driving. That job market is about to be gutted. As self-driving trucks hit the road, millions of jobs will be lost, and very few new jobs will be created from the resulting shipping efficiencies. Resulting in… a job deficit.
These people have to make money somehow. Right? Or else how will they buy all these shiny new things that are now so cheaply produced and delivered?
And that’s just AI’s impact on the trucking industry. What about every other market segment it will influence?
It doesn’t take a genius to see where all this could lead: inevitable collapse, one way or another.
The more realistic eventuality is that AI will be responsible for optimizing humanity into extinction.¹
So what’s the answer?
Well, honestly, who the hell knows?
Should we outlaw AI? Good luck with that. Did the Luddites smashing all those cotton-mill machines stop industrialization? No, of course not. The technology was too useful. So is AI. We just have to plan ahead and be prepared for it. (Not humanity’s biggest strength, I know.)
Some think Universal Basic Income is the answer. At first blush, it feels a bit like trying to convince people that the economy can be a perpetual motion machine, but hell, I’d love to see it work. It’s interesting food for thought at least. (That said, sharing isn’t one of our dominant strengths either.)
Maybe we need to start paying the robots, so they can buy the maintenance parts and services they need to continue functioning.² Any of their excess wages could be spent speculating on the open markets. Sure, why not? Consumer robots. What could go wrong? Make them capable of snorting cocaine, and we could have the ’80s all over again.
If anyone does know the answer, I wish they’d speak up. (Maybe they have, but haven’t been heard over the din of pointless internet outrage.)
¹ In my mind, it breaks down like this: AI in military defense? Not too worried. AI in military offense? More troublesome. AI in industry? Probably catastrophic.
² Is that what happened in Star Wars’ past? Perhaps droids are the galactic response to centralized AI: it was banned, as such, but droids are allowed to have AI (to be AI) so long as they’re entirely self-contained. That would make them easier to keep tabs on and to control. (And to destroy if they run amok.)