The Noise That Keeps Me Awake: Futures Thinking at the Oxford Science and Ideas Festival

Content warning: this page explores the things people worry about, and looks in some detail at existential threats.

Thank you for visiting this page. A particular thanks if you have come here after taking part in our stall at Oxford’s Science and Ideas Festival.

As a society we give relatively little attention, and relatively few resources, to tackling the kind of problems that pose the greatest threat to our existence. Yet the organisation 80,000 Hours, which tries to identify the careers with the greatest positive impact on the world, states in relation to existential threats:

“we think that safeguarding the future comes out as the world’s biggest priority.”

Just how big a priority can be seen from their research into the likelihood of human extinction, which points to an estimated 19% chance that the human race will not survive past the year 2100.

So, what are the most important types of existential threat we could be working on, and why are so few people working on them?

In this piece, we want to look at the four areas identified in the heatmap we used for the Science and Ideas Festival – climate disaster, war, bioengineered pandemic, and superintelligent AI – examining the nature of the threat posed by each, and what you can do if you think working to solve one of these key challenges could be for you.

But first, we want to explain the hypothesis we were interested in exploring by producing our interactive “heatmap of nightmares”.

We presented people with a blank(ish) poster featuring 8 things we thought people might worry about, and asked them to put a red sticker (seriously worried) or a yellow sticker (somewhat worried) on the areas they spent most time worrying about. 4 of these (climate disaster, war, bioengineered pandemic, and superintelligent AI) were existential threats to humanity’s future – threats which, to the degree they affect our present, mostly do so for people living outside Oxford, where our participants are likely to be based. The other 4 (money, politics, health, identity) were things people are already likely to be directly and personally affected by. We also asked people to write their age on their sticker if they were willing.

We wanted to explore whether people are more likely to be worried on a day-to-day basis by things that affect them personally than by things which might affect the whole of humanity at some point but do not yet touch their own lives. To help us understand the nuance of what the heatmap was showing us, we also held informal conversations and asked people to use post-its to record their concerns and possible solutions (we will use thematic analysis to make sense of these qualitative data).
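If you are curious what that comparison might look like in practice, here is a minimal sketch in Python. The sticker records and counts are entirely hypothetical, invented for illustration rather than taken from our festival data, and it assumes SciPy is available for the chi-squared test.

```python
# A minimal sketch of how the sticker data from the heatmap might be analysed.
# The sticker records below are hypothetical, invented purely for illustration.
from collections import Counter

from scipy.stats import chi2_contingency

EXISTENTIAL = {"climate disaster", "war", "bioengineered pandemic", "superintelligent AI"}
PERSONAL = {"money", "politics", "health", "identity"}

# Each record is (worry category, sticker colour); red = seriously worried,
# yellow = somewhat worried.
stickers = [
    ("money", "red"), ("health", "red"), ("climate disaster", "yellow"),
    ("climate disaster", "red"), ("identity", "yellow"), ("war", "yellow"),
    ("money", "yellow"), ("politics", "red"), ("health", "yellow"),
]

# Total stickers per group: the simplest test of whether personal worries
# attract more attention than existential ones.
totals = Counter("existential" if c in EXISTENTIAL else "personal" for c, _ in stickers)
print("Total stickers:", dict(totals))

# A 2x2 contingency table (group x colour) asks a second question: is the
# balance of serious vs mild worry different for the two groups?
cell = Counter(
    ("existential" if c in EXISTENTIAL else "personal", colour) for c, colour in stickers
)
table = [
    [cell[("existential", "red")], cell[("existential", "yellow")]],
    [cell[("personal", "red")], cell[("personal", "yellow")]],
]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```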

Our hypothesis is that a major reason we do not devote as much time as research tells us we “should” to existential threats is that the “bandwidth” available to us for thinking about such challenges is largely used up by pressing personal worries. This is based in part on the work of Eldar Shafir and Sendhil Mullainathan, published in their book “Scarcity”, which we discuss below.

Our hopes are:

  • That, if there is evidence to support this hypothesis, it may strengthen the case for tackling these more “personal” issues. The idea is that, in addition to being vital in their own right, tackling poverty or online abuse, for example, would have the additional benefit of freeing up bandwidth for tackling existential threats. This is the line of argument you can find in, for example, Rutger Bregman’s book Utopia for Realists, where, among other things, he discusses an unconditional basic income for all people, which he believes would act as “venture capital for the people” by removing pressing concerns that take up our brain space.
  • That we might encourage even one person to think about a career tackling existential threats. 80,000 Hours calculates that just 10,000 people working in these areas could reduce the likelihood of extinction this century by one percentage point, from 19% to 18%.
  • That we might better understand how people connect with issues that do not affect them directly at the present time, and that by doing so we might learn how to communicate these ideas more effectively.

Shafir and Mullainathan’s central thesis is very simple. We make worse choices when we are preoccupied, because we have fewer resources available for calculating the best course of action. To use an obvious analogy, we have less available bandwidth because our brains have too many other tabs open. If we want to make better decisions, they argue, the very simple solution is to close some of those tabs. Their work focuses particularly on poverty. It offers a tantalisingly simple answer to the question of why poor people find it harder to make what might be considered “wise” choices about their spending. It has nothing to do with not knowing what is good for them. It has nothing to do with lacking the desire to improve their lives, or the willpower to do so. Instead, they are so preoccupied by being poor that they lack bandwidth – they are too exhausted by their situation. And the answer is equally simple – not education or moral improvement, but giving them more money.

We are interested in seeing whether this applies to our consideration of existential threats. Is it true that we simply have no bandwidth to think about what we can do to reduce the risk of a bioengineered pandemic because we are too worried about how to make our food budget last the month?

Let’s take a look at each of those 4 areas of existential threat in turn. For detailed further reading, we highly recommend this article by 80,000 Hours, and all the links from it: https://80000hours.org/articles/extinction-risk/

Climate Disaster

This is the odd one out of our 4 existential threats because it is already very high profile. And with the growing Extinction Rebellion movement, it is clear that there is also a substantial public connection to it. This makes it in many ways a fascinating learning case for those wanting to increase engagement with other threats. This is particularly true in relation to superintelligent AI, with which it shares the characteristic of emerging as a threat over time and moving towards a possibly inevitable conclusion (as opposed to war and pandemic, which are more likely to erupt suddenly).

We know that the consequences of the earth warming by more than 1.5°C above pre-industrial levels could be devastating. But it would not take an unimaginable rise in temperature for the earth to become uninhabitable. What makes the potential scope and speed of devastation difficult to calculate is the climate’s incredible complexity, and the fact that it contains many potential feedback loops, such as the mass release of carbon from thawing permafrost, rising ocean temperatures rendering the habitats of carbon-capturing species untenable, and the loss of reflective polar ice, which leaves the darker ocean to absorb more sunlight. The sheer scale and diversity of the problem does, however, offer a number of possibilities for people who want to work on solving it.

Bioengineered Pandemic

The flu pandemic which began in 1918 killed between 50 and 100 million people, and the plague pandemic of the 14th century killed between 30% and 60% of Europe’s population. The advance of bioengineering raises the possibility of something even more severe than these natural pandemics – a bioengineered virus that would combine the quick-spreading potential of the world’s most contagious viruses with the near-total lethality made possible by genetic engineering.

A particular danger associated with a bioengineered pandemic is physical size – or rather the lack of it. Whilst nuclear weapons are large, involve complex supply chains, and are very hard to move without trace, pathogens are small, easily transportable, and potentially ever cheaper and easier to produce. All of this makes containment one of the key areas for anyone working to reduce the threat.

Superintelligent AI

The rise of killer robots, or the rebellion of rogue artificial intelligence, is the stuff of science fiction. But many leading figures in the field, such as Elon Musk and Google DeepMind’s Demis Hassabis, are sufficiently worried about the threats it poses to human existence that they are pouring millions of pounds into research to ensure it remains safe.

At present, artificial intelligence is limited to very specific tasks – learning how to recognise faces, say, or how to play games like Go. The concern is what would happen should artificial intelligence become general – that is, acquire the kind of transferable problem-solving and decision-making skills that humans possess. Once we reach the point known as the singularity, when machine intelligence matches and then surpasses that of humans, many fear we could lose control of a machine more intelligent than we are, whose decision-making processes we do not understand. As the Future of Humanity Institute’s Nick Bostrom put it, “We’re like children playing with a bomb.” (https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine)

One way of working to keep artificial intelligence safe is, of course, to work directly as a coder. But the growth of AI ethics as a field opens up many more possibilities. The ethics of AI is already embedded in Oxford’s Philosophy Faculty and the Oxford Internet Institute. And in 2024 the new £150m Stephen A. Schwarzman Centre for the Humanities will have at its core an Institute for Ethics in AI.

War

Conventional wars killed well over 100 million people over the course of the 20th century. But nuclear weapons have created the prospect of a war that could make us extinct, and not only because of the devastating power of the weapons themselves: the nuclear winter that would follow a large-scale exchange could make the planet uninhabitable for humans. 80,000 Hours puts the possibility of extinction as a result of nuclear war before 2100 at 0.3%, noting that “the risks from nuclear weapons are greater than all the natural risks put together.”

Whilst there is a very real risk of nuclear war resulting from a deliberate act of aggression, history’s warning – and also the area where human intervention could make the biggest possible impact – is that perhaps the greatest danger comes from error. In particular, system error has the potential to lead to nuclear war by accident. This is what nearly happened in 1983, when catastrophe was averted by Stanislav Petrov. The Soviet early-warning system he was monitoring reported incoming missiles, which he was required to pass up the chain of command – a report that could well have triggered a retaliatory strike. Fortunately he was quick-witted enough to realise something was amiss about the pattern of the supposed launch, and he treated it as a false alarm. He was later vindicated: the system had malfunctioned, and there were no incoming missiles.

As we have seen, there are specific areas you can work on to reduce each potential risk. But there are also more general ways to make a high impact: for example, you can work on improving decision-making, seek to influence funding policy, or raise the profile of research and funding in areas of existential risk. Rogue Interrobang (https://rogueinterrobang.com/), the spinout company from Oxford’s Humanities Division founded by one of Futures Thinking’s co-convenors, Dan Holloway, was set up to provide precisely the skills needed to make better decisions in these circumstances.
