
Today, so-called techno-optimists fill the ranks of Silicon Valley billionaires. They proclaim a bright future for humanity delivered by the rapid pursuit of technological advances.
Of course, these techno-optimists are right that technology and science are unarguably among humanity’s greatest assets, and a source of hope for the future. But they go too far, because it is also true that technology always creates new problems even as it solves others – this too is something we’ve learned through science. As a result, naive faith in technology is a recipe for repeatedly achieving a short-term buzz while incurring long-term costs. Getting the best out of technology requires a more cautious and balanced approach.
Why does technology so often go wrong – even as it gets many things right? The anthropologist Sander van der Leeuw sketched out an answer about a decade ago, and it seems to be something like a law of nature. When we face a problem, we think about it and build a conceptual model of how part of the world works. Based on that understanding, we then act, and the technology we come up with often solves the problem. However, we then typically find that our model – of course – wasn’t actually a complete model of the world. Our simple model left some things out. Not surprisingly, it then turns out that our technology, operating in the real world, has effects on that world that we hadn’t foreseen – unanticipated consequences.
We repeatedly encounter this pattern because simple models are so powerful, seductive and useful. But because simple models leave details out, we always misperceive the full consequences of our actions. We invent better fishing technology to feed more people, and then find we’ve wiped out fish populations. We create wonderful non-stick surfaces for cooking pans and then later discover that the chemicals in these materials cause health problems and have leached into the environment, spreading essentially everywhere. We make super-convenient plastics that end up as micro-particles in the oceans and in our own bodies. This, alongside the great victories, is also the story of technology.
Because we understand this, anticipating problems should be part of technological development itself. A clear-eyed view of our ignorance doesn’t mean not pursuing technology, but counsels caution and wisdom by employing foresight, without expecting anything close to flawless prescience. It also means taking practical steps to regulate development and give time to redress emerging problems, while at the very least avoiding the worst possible outcomes.
Our current approach to research and development in artificial intelligence, or AI, offers an example of the reckless approach. Right now a handful of the world’s largest technology companies are battling it out among themselves to control the market for this technology, rolling out one model after another as fast as they can with little oversight. As the neuroscientist Gary Marcus has argued, this race for near-term dominance has one obvious cost – it exposes everyone to the unknown risks of new and untested technologies. It also has a less obvious cost: the pitched urgency of the competition means that virtually all available resources get invested in whichever area currently looks most promising, at the moment so-called large language models. This starves other areas of computer science that might ultimately prove more important to one day achieving true AI.
Fortunately, not all Silicon Valley leaders accept the techno-optimist demand for uncontrolled technological acceleration. Dario Amodei, CEO of the AI company Anthropic, certainly shares their optimism, as he revealed in a recent essay expressing his view that AI research could lead to incredible improvements to human wellbeing. Exploring an admittedly optimistic scenario, he suggests that we might in a few decades eliminate essentially all diseases, spread beneficial economic growth across nations, even greatly improve humans’ collective ability to form consensus on issues of fundamental social importance.
But Amodei also accepts that there’s plenty of room for things to go wrong – AI may not achieve any of these positives, and could instead radically exacerbate inequality, or provide a new class of autocrats with unprecedented powers of surveillance and control through AI-enhanced propaganda. What will happen depends on the choices we make.
And, in this, he suggests that keeping a close focus on risks and regulation has to be the right way forward, rather than naively racing into the future with hope as our guide. People underestimate not only how good AI might one day be, he thinks, but also how bad the risks could be. And there’s a natural asymmetry we need to respect.
“The basic development of AI technology and many (not all) of its benefits seems inevitable,” as he sees it, as the result of powerful market forces. “On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.”
As so often with cultures such as Wall Street or Silicon Valley, the essential tension is between forces seeking short-term profits – whatever the long-term outcome – and others who would rather balance opportunities and risks, and thereby pursue more sustainable benefits. When these opposing views clash, there’s a natural imbalance, as alluring and obvious potential profits now get weighed against harder-to-see and less-defined risks set in an unknown future. It’s not a fair comparison.
Especially when it is so easy to make catastrophically huge errors when thinking about the future, even the near future. In his techno-optimist manifesto, the entrepreneur Marc Andreessen casually voices his dream that we might ramp up clean-energy resources so quickly that everyone on earth could soon use 1,000 times more energy a day than is currently typical for people in developed nations. Just think what people could achieve! Sounds great. Except that a little physics thinking also shows that using that much energy would immediately cause planetary warming about 30 times faster than we’re experiencing today, and we’d all be dead in a few years. Not so great after all.
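The arithmetic behind that physics thinking is simple enough to sketch. The rough version below, in Python, compares the direct waste heat of 1,000-times-scaled-up energy use against present-day human-caused warming; the specific input values (per-capita power, population, current climate forcing) are my own round-number assumptions, not figures taken from the essay, so the exact multiple shifts with the numbers you choose:

```python
import math

# Assumed round numbers, not the author's exact figures:
PER_CAPITA_POWER_W = 5_000      # ~ developed-nation energy use per person, in watts
SCALE_UP = 1_000                # the proposed 1,000x increase
POPULATION = 8e9                # everyone on Earth at that level
EARTH_RADIUS_M = 6.371e6
CURRENT_FORCING_W_M2 = 2.7      # rough present-day human-caused climate forcing

# All energy used eventually ends up as heat, spread over the planet's surface.
earth_area_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2          # ~5.1e14 m^2
waste_heat_flux = SCALE_UP * PER_CAPITA_POWER_W * POPULATION / earth_area_m2

# Compare that direct heating to the forcing driving today's warming.
ratio = waste_heat_flux / CURRENT_FORCING_W_M2

print(f"waste heat: {waste_heat_flux:.0f} W/m^2, ~{ratio:.0f}x today's forcing")
```

With these inputs the waste heat alone comes to roughly 80 watts per square metre of the Earth's surface, around 30 times the forcing behind current warming – the same order of magnitude as the figure above, whatever precise assumptions one makes.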
Of course, anyone might make this kind of mistake, because in our complex world, cause and effect are complex. Technology is tricky, and what might happen is far from obvious. That’s just the way it is – and why we need to think more carefully about risks and follow a more cautious approach.
-
Mark Buchanan is a physicist and science writer and the author of Ubiquity and Nexus: Small Worlds and the New Science of Networks