The grim fate that could be 'worse than extinction' - BBC News

What would it take for a global totalitarian government to rise to power indefinitely? This nightmare scenario may be closer than first appears.

What would totalitarian governments of the past have looked like if they were never defeated? The Nazis operated with 20th-Century technology and it still took a world war to stop them. How much more powerful – and permanent – could the Nazis have been if they had beaten the US to the atomic bomb? Controlling the most advanced technology of the time could have solidified Nazi power and changed the course of history.

When we think of existential risks, events like nuclear war or asteroid impacts often come to mind. Yet there’s one future threat that is less well known – and while it doesn’t involve the extinction of our species, it could be just as bad.

It’s called the “world in chains” scenario, where, as in the thought experiment above, a global totalitarian government uses a novel technology to lock most of the world into perpetual suffering. If that sounds grim, it is. But is it likely? Researchers and philosophers are beginning to ponder how it might come about – and, more importantly, what we can do to avoid it.

Existential risks (x-risks) are disastrous because they lock humanity into a single fate, like the permanent collapse of civilisation or the extinction of our species. These catastrophes can have natural causes, like an asteroid impact or a supervolcano, or be human-made from sources like nuclear war or climate change. Allowing one to happen would be “an abject end to the human story” and would let down the hundreds of generations that came before us, says Haydn Belfield, academic project manager at the Centre for the Study of Existential Risk at the University of Cambridge.

Hitler inspects advanced German engineering of the time - what if it had given the Nazis an unbeatable advantage? (Credit: Getty Images)

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as “the precipice” because the risk of losing our future has never been so high.
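
The reasoning behind that bound is, roughly, a survival argument (a back-of-the-envelope sketch for illustration, not Ord’s full calculation): humanity has already made it through about 2,000 centuries, so if natural catastrophes capable of ending our story struck much more often than once every 2,000 centuries, it would be surprising that we are still here. In rough symbols:

P(natural existential catastrophe per century) ≲ 1/2,000 = 0.05%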

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon x-risks with the even-more-chilling prospect of suffering risks. These “s-risks” are defined as “suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.” In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the “world in chains” scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that “a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction”.

Singleton hypothesis

Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his “singleton hypothesis”, Nick Bostrom, director of Oxford’s FHI, has explained how a global government could form with AI or other powerful technologies – and why it might be impossible to overthrow. He writes that a world with “a single decision-making agency at the highest level” could occur if that agency “obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology”. Once in charge, it would control the technologies that prevent internal challenges, such as surveillance and autonomous weapons, and, with this monopoly, remain perpetually stable.

A nuclear missile on display in China (Credit: Getty Images)

If the singleton is totalitarian, life would be bleak. Even in the countries with the strictest regimes today, news leaks in and out and people can escape. A global totalitarian rule would eliminate even these small seeds of hope. To be worse than extinction, “that would mean we feel absolutely no freedom, no privacy, no hope of escaping, no agency to control our lives at all”, says Tucker Davey, a writer at the Future of Life Institute in Massachusetts, which focuses on existential risk research.

“In totalitarian regimes of the past, [there was] so much paranoia and psychological suffering because you just have no idea if you're going to get killed for saying the wrong thing,” he continues. “And now imagine that there's not even a question, every single thing you say is being reported and being analysed.”

“We may not yet have the technologies to do this,” Ord said in a recent interview, “but it looks like the kinds of technologies we’re developing make that easier and easier. And it seems plausible that this may become possible at some time in the next 100 years.”

AI and authoritarianism

Though life under a global totalitarian government is still an unlikely and far-future scenario, AI is already enabling authoritarianism in some countries and strengthening infrastructure that could be seized by an opportunistic despot in others.

“We've seen sort of a reckoning with the shift from very utopian visions of what technology might bring to much more sobering realities that are, in some respects, already quite dystopian,” says Elsa Kania, an adjunct senior fellow at the Center for New American Security, a bipartisan non-profit that develops national security and defence policies.

A benevolent government that installs surveillance cameras everywhere could make it easier for a totalitarian one to rule in the future (Credit: Steffi Loos/Getty Images)

In the past, surveillance required hundreds of thousands of people – in East Germany, one in every 100 citizens was an informant – but now it can be done by technology. In the United States, the National Security Agency (NSA) collected hundreds of millions of American call and text records before it stopped this form of domestic surveillance in 2019, and there are an estimated four to six million CCTV cameras across the United Kingdom. Eighteen of the 20 most surveilled cities in the world are in China, but London ranks third. The difference between these countries lies less in the technology they employ and more in how they use it.

What if the definition of what is illegal in the US and the UK expanded to include criticising the government or practising certain religions? The infrastructure is already in place to enforce it, and AI – which the NSA has already begun experimenting with – would enable agencies to search through our data faster than ever before.

In addition to enhancing surveillance, AI also underpins the growth of online misinformation, another tool of the authoritarian. AI-powered deep fakes, which can spread fabricated political messages, and algorithmic micro-targeting on social media are making propaganda more persuasive. This undermines our epistemic security – the ability to determine what is true and act on it – on which democracies depend.

“Over the last few years, we've seen the rise of filter bubbles and people getting shunted by various algorithms into believing various conspiracy theories, or even if they’re not conspiracy theories, into believing only parts of the truth,” says Belfield. “You can imagine things getting much worse, especially with deep fakes and things like that, until it's increasingly harder for us to, as a society, decide these are the facts of the matter, this is what we have to do about it, and then take collective action.”

Preemptive measures

The Malicious Use of Artificial Intelligence report, written by Belfield and 25 authors from 14 institutions, forecasts that trends like these will expand existing threats to our political security and introduce new ones in the coming years. Still, Belfield says his work leaves him hopeful: positive trends, such as more democratic discussions around AI and action by policy-makers (the EU’s consideration of a pause on facial recognition in public places, for example), make him optimistic that we can avoid catastrophic fates.

Davey agrees. “We need to decide now what are acceptable and unacceptable uses of AI,” he says. “And we need to be careful about letting it control so much of our infrastructure. If we're arming police with facial recognition and the federal government is collecting all of our data, that's a bad start.”

If you remain sceptical that AI could confer such power, consider the world before nuclear weapons. Three years before the first nuclear chain reaction, even the scientists trying to achieve it believed it was unlikely. Humanity, too, was unprepared for the nuclear breakthrough, and teetered on the brink of “mutually assured destruction” before treaties and agreements reined in the spread of these deadly weapons without an existential catastrophe occurring.

We can do the same with AI, but only if we combine the lessons of history with the foresight to prepare for this powerful technology. The world may not be able to stop totalitarian regimes like the Nazis rising again in the future – but we can avoid handing them the tools to extend their power indefinitely.
