What Happens to the Economy as Our Cognitive Capabilities Expand?
New breakthroughs in artificial general intelligence give us hints about its potential to solve problems human minds never could. So what does that mean for the economy?
Artificial general intelligence (AGI) is no longer science fiction. Two years ago, Google subsidiary DeepMind built a chess program, AlphaZero, that was different from previous AIs. Rather than exposing it to a vast database of games and telling it to calculate optimal play from that resource (the approach DeepMind had taken with its earlier program AlphaGo, and the usual method for other chess engines such as IBM’s Deep Blue), its creators taught it the rules and set it loose to teach itself, seeking the quickest path to victory. AlphaZero was soon beating Stockfish, the reigning champion chess engine, which had itself routinely defeated human champions.
In teaching itself to become history’s greatest chess grandmaster, AlphaZero showed us that machines can be self-learning. That, in turn, hints at AGI’s potential to solve problems that human minds never could. In the two years since, DeepMind has moved beyond fun and games: one of its latest projects is a machine-learning model that optimizes and expands the use of wind power. That is the merest glimmer of what machine learning will do to help us decarbonize the economy, yet even it is nothing next to AGI’s full potential.
The importance of AI and machine learning can’t be overstated. For the first time in human history, the tools we make are beginning to give us the ability to expand our cognitive capabilities. In his book AI Superpowers, Kai-Fu Lee writes, “Few, if any, experts predicted that deep learning was going to get this good, this fast. Those unexpected improvements are expanding the realm of the possible when it comes to real-world uses….”
It’s great that we’re developing these tools just as we need them for some big problems, but that raises the question: why now? Is it because humans today are smarter than previous generations? Well, no. We still have the same brain and mental operating system as a Paleolithic hunter-gatherer, and that setup in our heads has been with us for all of our past generations, at least 200,000 years. As Annie Duke put it in her book Thinking in Bets, “These are the brains we have and they aren’t changing anytime soon.”
What is changing, and fast, is the range and power of the tools we have to increase our abilities. We have the same brain as early anatomically modern humans; the difference is in our cumulative education. We now stand on millennia of learned, incremental progress, compounding like interest payments, and as a result we have a plethora of technological levers to pull. It is astonishing to think what we’ve achieved…and what we may do next as innovation accelerates faster than even AI experts have imagined.
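The compounding metaphor can be made concrete: even a modest rate of improvement, applied repeatedly, quickly dwarfs steady linear gains. A minimal sketch (the 3% rate and the step counts are illustrative assumptions, not figures from the text):

```python
# Illustrative sketch: linear vs. compound growth of "capability".
# The 3% per-step rate and the step counts are assumptions for illustration.

def linear_growth(start: float, increment: float, steps: int) -> float:
    """Add a fixed increment each step."""
    return start + increment * steps

def compound_growth(start: float, rate: float, steps: int) -> float:
    """Grow by a fixed percentage each step, like interest payments."""
    return start * (1 + rate) ** steps

if __name__ == "__main__":
    start = 1.0
    for steps in (10, 50, 100, 200):
        lin = linear_growth(start, increment=0.03, steps=steps)
        cmp_ = compound_growth(start, rate=0.03, steps=steps)
        print(f"{steps:>3} steps: linear {lin:7.2f}  vs  compound {cmp_:10.2f}")
```

After 200 steps at 3% per step, the compound curve has grown several-hundredfold while the linear one has merely multiplied a few times over, which is why growth that feels gradual up close can still be explosive over a long horizon.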
A clue may come from our closest remaining relatives. Our brains are approximately three times the size of a chimpanzee’s, but what we can do mentally exceeds what a chimp can do by a factor far higher than three, closer to 40. The chimp in the image can use a stick as a tool but can’t imagine what he could do with a few more cubic centimeters of cranial capacity. Similarly, we can’t really contemplate what emergent phenomena AGI will bring.
And yet the tool he’s using isn’t so different from the tools used by our early but anatomically indistinguishable ancestors. It may not feel as though the line of innovation runs straight from stone tools to the microchips that process AI algorithms, but it does. Creeping innovation has led to results that verge on the miraculous, and yet, as we now expand our own capacities, we can see that we’re just getting started.
The human world is an innovation machine, and the more powerful our tools become, the more rapidly innovation accelerates. We have a hard time understanding change that is greater than linear, but the time of parabolic change is nevertheless upon us. We need to think about what that means, and we need to be humble in appreciating that we don’t—and can’t—fully know what it means. As Garry Kasparov, who in 1997 became the first world chess champion defeated by a computer program, tweeted in March 2019, “We are only on edge of how much we will learn from AI about ourselves and our world. The death of randomness is the triumph of science.”
We can scarcely imagine what’s next, any more than the Paleolithic hunter could have imagined the NVIDIA Volta chip. With the now exponential leverage of AI to expand our capacities, we may not be able to predict what’s next, but we do know it’s going to be dramatic.
The economic triumph of these advances will be productivity so great that we realize massive output from what we would now consider trivially small inputs, whether money, material resources, or person-hours.
What happens to the global economy as our cognitive capabilities expand? If we manage it well, we can have an unimaginably better future, one in which everyone enjoys a good standard of living without our crossing further planetary boundaries. We can imagine a de-risked economy of plenty, if not abundance, existing in equilibrium with a livable climate and with the rest of biodiversity.
What’s the best way to actualize a de-risked economy? Invest in it now and encourage it to flourish. But should we? Is AGI something to fear? Not yet. Again, Kai-Fu Lee: “Our present AI capabilities can’t create a superintelligence that destroys our civilization.” That is good news, because we need the massive computational power AGI can bring in order to solve our problems. For, as Lee concludes, when it comes to destroying civilization, “My fear is that we humans may prove more than up to that task ourselves.”