Artificial Intelligence: The Future of Mankind

Scientists reckon there have been at least five mass extinction events in the history of our planet, when a catastrophically high number of species were wiped out in a relatively short period of time. We are possibly now living through a sixth — caused by human activity. But could humans themselves be next? This is the sort of question that preoccupies the staff at the Future of Humanity Institute in Oxford.

The centre is an offshoot of the Oxford Martin School, founded in 2005 by the late James Martin, a technology author and entrepreneur who was among the university’s biggest donors. As history has hit the fast-forward button, it seems to have become the fashion among philanthropists to endow research institutes that focus on the existential challenges of our age, and this is one of the most remarkable.

Tucked away behind Oxford’s modern art museum, the institute occupies a bland modern office block, the kind of place you might expect to find a provincial law firm. The dozen or so mathematicians, philosophers, computer scientists and engineers who congregate here spend their days thinking about how to avert catastrophes: meteor strikes, nuclear winter, environmental destruction, extraterrestrial threats. On the afternoon I visit there is a fascinating and (to me) largely unfathomable seminar on the mathematical probability of alien life.

Presiding over this extraordinary institute since its foundation has been Professor Nick Bostrom, who, in his tortoiseshell glasses and grey herringbone jacket, appears a rather ordinary academic, even if his purple socks betray a streak of flamboyance. His office resembles an Ikea showroom, brilliantly lit by an array of floor lamps, somewhat redundant on a glorious sunny day. He talks in a kind of verbal origami, folding down the edges of his arguments with precision before revealing his final, startling conclusions. The slightest monotonal accent betrays his Swedish origins.

Bostrom makes it clear that he and his staff are not interested in everyday disasters; they deal only with the big stuff: “There are a lot of things that can go and have gone wrong throughout history — earthquakes and wars and plagues and whatnot. But there is one kind of thing that has not ever gone wrong; we have never, so far, permanently destroyed the entire future.” Anticipating the obvious next question, Bostrom argues that it is fully justified to devote resources to studying such threats because, even if they are remote, the downside is so terrible.

Staving off future catastrophes (assuming that is possible) would bring far more benefit to far greater numbers of people than solving present-day problems such as cancer or extreme poverty. The number of lives saved in the future would be many times greater, particularly if “Earth civilisation”, as he calls it, spreads to other stars and galaxies. “We have a particular interest in future technologies that might potentially transform the human condition in some fundamental way,” he says.

About the author

AEPC
