The Coming Wave

Mustafa Suleyman is worried. He’s worried about the rapid development and proliferation of artificial intelligence and synthetic biology. He calls these two emerging technologies and the countless innovations they will spawn “the coming wave.”

The coming wave will bring enormous benefits to society. We cannot tackle some of humanity’s most urgent problems – climate change, global inequality, pandemics – without these technologies. At the same time, the coming wave will usher in a period of profound and destabilizing economic, political and social change, change we’re not ready for. Suleyman predicts that unless we navigate a very narrow path to containing these technologies, they could lead to catastrophe or dystopia or both.

Mustafa Suleyman is the CEO of Microsoft’s AI division. He co-founded DeepMind, the company that famously developed AlphaGo, an AI that learned to play the ancient Chinese board game Go and defeated one of the world’s top human Go players. DeepMind was acquired by Google in 2014. In his 2023 book, The Coming Wave: Technology, Power and the Twenty-First Century’s Greatest Dilemma, Suleyman explores the technologies of the coming wave and their likely impacts on humanity, and he attempts to lay out key steps along the narrow path to containing them.

“As we stand at this turning point, we are faced with a choice – a choice between a future of unparalleled possibility and a future of unimaginable peril. The fate of humanity hangs in the balance, and the decisions we make in the coming years and decades will determine whether we rise to the challenge of these technologies or fall victim to their dangers.” [p. 4]  

Illustrating both the opportunities and the challenges of the coming wave, the book’s prologue, including the above paragraph, was written by an AI.

Cover of The Coming Wave showing a starburst in rainbow colors.

The Coming Wave:
Technology, Power and the Twenty-First Century’s Greatest Dilemma
By Mustafa Suleyman
Crown, New York, 2023

A Different Kind of Wave

Suleyman says the coming wave of innovation based on AI and synthetic biology is different from previous waves such as those sparked by the internet, the steam engine, the printing press and other general purpose technologies. That’s because of their unique characteristics and incentives.

AI and synthetic biology are asymmetric: they transfer power away from a few large incumbents to many smaller, more nimble players. These technologies are cheaper, faster, and more efficient. Sometimes they’re described as democratizing. Suleyman gives the example of Ukraine using cheap drones to halt a massive column of invading Russian tanks.

They are capable of hyper-evolution, meaning that innovation happens through very fast iterations. Large language models (LLMs) are now training themselves, for example. Suleyman also says these technologies are omni-use. You might be familiar with the term dual-use, where a particular technology has both civilian and military applications. Omni-use means a technology has a wide variety of uses, many of which cannot be predicted. This is a particular risk of general purpose technologies.

Finally, these technologies have autonomy: they don’t need a “human in the loop” anymore. Think of self-driving cars, self-training models, self-targeting weapons. They’re not just assisting humans, they’re replacing us. These characteristics make technologies of the coming wave very difficult to contain.

That difficulty is compounded by four incentives spurring on the development of coming wave technologies. First, there’s nationalism, geopolitics and great power competition. The US and China are competing to “win” the AI race, and other countries want their own sovereign AI so they’re not dependent upon anyone else. Next, the structure of the global scientific community incentivizes openness and sharing of research, and it rewards publication of results. Capitalism, with its drive for profits, is of course another strong incentive. Suleyman notes that we have always needed financial incentives to get any new technology from the lab into people’s homes and businesses. But the pursuit of profit does make containment more difficult.

The fourth incentive is ego. Suleyman must have experienced this himself and so have I in my own career. It’s the personal desire to find purpose in one’s work. For scientists and engineers that often translates to solving hard problems, following curiosity wherever it leads, making a difference in the world. These incentives piled on top of each other make it very difficult to stop, slow down, or contain any technological developments.

What Is Containment?

Before we go any further, what exactly is containment? Suleyman defines it like this:

“In most cases, containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors. It means preserving the ability to steer waves to ensure their impact reflects our values, helps us flourish as a species and does not introduce significant harms that outweigh their benefits.” [p. 37]

Suleyman acknowledges that we have very few historical examples of successful containment. Nuclear non-proliferation is one. Phasing out ozone-destroying chlorofluorocarbons (CFCs) is another. But that’s about it. These cases had special characteristics that made containment feasible. Either the technology involved was extremely complex and expensive or there were readily available substitutes. Significantly, in both cases there were a very small number of players who had to agree and coordinate their actions.

In contrast, look at climate change. We know we must radically decarbonize the world’s energy system to prevent the worst effects of climate change. Every country, every industry, and every household in the world needs to be involved in the transition. How well is that going?

Why Do We Need Containment?

Why do we even need containment? What are the “unimaginable perils” we must guard against? Suleyman doesn’t provide an exhaustive list because there isn’t one. He’s less concerned about some technological “singularity” where a superintelligent AI takes over the world, although some people do worry about this. Still, catastrophic scenarios are certainly possible. Imagine an engineered pathogen escaping from a biolab and killing billions of people. Imagine an autonomous weapon system – a swarm of drones perhaps – killing hundreds or thousands of civilians. Or an intelligent bond trading AI that brings the world’s financial system to its knees.

Suleyman’s main concern seems to be that the coming wave threatens the stability, and maybe even the existence, of the nation state, which he believes is the most important entity for managing technology. He says, and I largely agree, that the nation state is essentially a “grand bargain” between governments and their people. We give governments a monopoly on the legitimate use of coercive force, tempered by appropriate checks and balances, and in return we expect governments to provide safety, order, infrastructure, essential services, and to promote our health and welfare.

But the coming wave threatens that bargain. On the one hand, through asymmetry, technology grants individuals and organizations unparalleled power outside the control of the nation state. On the other hand, technology makes possible a dystopian future of pervasive government surveillance and control. The internet illustrates this paradox of empowerment and control.

“The internet does precisely this: centralizes in a few key hubs while also empowering billions of people. It creates behemoths and yet gives everyone the opportunity to join in. Social media created a few giants and a million tribes. Everyone can build a website, but there is only one Google. Everyone can sell their own niche products, but there is only one Amazon. And on and on. The disruption of the internet era is largely explained by this tension, this potent, combustible brew of empowerment and control.” [p. 203]

The coming wave will be even more disruptive at a time when modern nation states are already dealing with incredibly difficult, fractious problems. Suleyman says the coming wave is a “fragility amplifier.”

“The coming wave will land in a combustible, incompetent, overwrought environment. This makes the challenge of containment – of controlling and directing technologies so they are of net benefit to humanity – even more daunting.” [p. 152]

We don’t want either failed states or repressive dictatorships, Suleyman says. Both are bad for their people, and we shouldn’t trust either of them with powerful new technologies. So will the coming wave bring chaos or dystopia? My guess is both.

Even though containment appears impossible, Suleyman urges that we find ways to make it possible. He concludes the book with ten steps towards containment. Some of these are about developing safeguards within the technologies themselves, for example responsible AI. Other steps concern actions that governments should take and that citizens should take. All of them seem perfectly reasonable and all of them seem utterly inadequate for the challenges of the coming wave.

Unsolicited Feedback

I’m glad I read Yuval Noah Harari’s book Nexus before tackling The Coming Wave. Harari is a historian rather than a technologist, but he is equally concerned about the dangers of AI. His book helped me put The Coming Wave into a broader context. In fact, I think the challenges Suleyman describes fit neatly into Harari’s truth-vs-order framework.

I must say it’s a little galling to read statements from AI pioneers – Suleyman isn’t the only one – who have made, and continue to make, fortunes from AI technology and are now warning the rest of us that it needs to be contained and restricted. If you folks are so worried, why don’t you stop working on this stuff? That said, I think Suleyman is right to call out the possible dangers of the coming wave and to highlight the need for action. Humanity has enough experience now with rapid technological development that we have no excuse for blindly marching off a cliff.

In some sense, containment gets applied to any new technology as its effects ripple through society. Take automobiles. They had a profound effect on society, but regulations evolved over time around vehicle safety, traffic management, driver licensing, pollution control, etc. Suleyman argues, correctly I think, that we need to be much more proactive when it comes to AI and biotechnology because they are evolving so quickly and because their impact could be so catastrophic.

Still, I’m deeply skeptical that we can successfully contain AI or synthetic biology. Here in the US, Trump’s AI Action Plan is moving in the opposite direction, rolling back regulations (except for banning “woke” AI) in order to accelerate AI development and “win the AI race.”

Frankly, it’s not even clear to me how much containment is necessary. Suleyman assumes the need for containment, but other than providing a “parade of horribles” he doesn’t really make a clear-cut case for its necessity. We seem to have survived previous technology waves. Why not this one? The printing press caused all kinds of social upheaval. Would restrictions on the printing press have limited that upheaval? Would they have done so without seriously reducing the benefits? Impossible to answer, of course.

The point is we don’t know whether containing AI or synthetic biology would be worth reducing their benefits either. For example, in How We Got to Now, Steven Johnson tells how the invention of the printing press helped Galileo develop the telescope 150 years later. More people reading books meant more people discovering they couldn’t focus properly which led to more people needing eyeglasses which drove improvements in lens grinding technology which helped Galileo. Who knows what chains of invention AI and biotechnology will kick off?

I suspect for most of us it’s hard to see a problem because our interactions with AI have been mostly positive until now. We get interesting recommendations when we shop or listen to music. We get help writing our essays or developing business plans, and we get new vaccines and medicines more quickly. There hasn’t been an AI equivalent of an airplane crash or an extreme weather event. The risks on the biotech side may be clearer: there is ongoing debate about whether the COVID-19 pandemic originated from a Chinese biolab.

One area where we are starting to see negative impact is the labor market. There are fears, and increasing evidence, that AI is causing job losses, particularly among white-collar workers, even among software engineers. Suleyman thinks that even if these new technologies eventually create more jobs than they displace – and that’s what typically happens – there will be a lag, and new jobs won’t get created fast enough to help displaced workers.

High unemployment is a recipe for civil unrest and violence. And if AI displaces too many jobs, who will have the money to buy anything? How will companies make a profit? I think this will force governments to respond, maybe with something like universal basic income. But again, it’s hard to predict if and when this will be necessary.

A set of higher-level principles might help guide us, something like Isaac Asimov’s Three Laws of Robotics. But there’s nothing like that in The Coming Wave. Similarly, Suleyman does not provide a risk taxonomy. What I mean by this is a categorization of risks by their severity, scope and likelihood. Different responses, different types of containment, might be needed for each risk category. The Coming Wave doesn’t get into this level of detail. Suleyman is more concerned with broader societal risks, and I admit that makes for a better, more compelling book.

I’m in a quandary on this. I think it’s wise to be thinking about these issues and building safeguards into such powerful technologies. To that end, The Coming Wave is both timely and worthwhile. Suleyman has done us all a great service by writing this book. There are some dangerous scenarios, akin to nuclear proliferation, that we should anticipate and hopefully head off. Yet I think full containment is neither possible nor desirable. The potential benefits are enormous and the knock-on effects are largely unpredictable.

Besides, the genie is out of the bottle.

This has been a long review. Thanks for reading all the way through.


If you enjoyed this review, please follow Unsolicited Feedback.




5 Responses to The Coming Wave

  1. rockymich says:

    Thanks Harry for another helpful review. I’m also going to follow your lead and read Nexus first, before reading The Coming Wave. One thing I can say is that I’m always somewhat comforted to see cautions and/or warnings given by people who are in roles where they can influence how AI gets developed and implemented…to some extent. True, it can be a bit like the wolf guarding the hen house, but most ‘wolves’ I’ve read about seem genuinely invested in safeguards.

    P.S. Yeah, wouldn’t a risk taxonomy be essential?!


    • Harry Katz says:

      Thanks, Michele. I agree it’s good that industry insiders are highlighting the need for safeguards. I just hope political & financial pressures don’t end up overriding their concerns.


  2. I just checked my library and they carry both the hardcopy and the ebook version of this, so I added it to my list. This sounds like another important read in my quest to learn more about different facets of AI this year. I’m barely scratching the surface in my comprehension of it, I realize, but a little is better than none, right? The more I learn, the more my confusion rises in my love/hate relationship with AI. I feel like I’ve benefited from it so far on a personal level. But the scary predictions make me wonder what’s coming down the road for the world in general….more things I won’t fully understand, I’m sure! ha.

