It's only Tuesday, but already this week's news is apocalyptic: the collapse of the West Antarctic Ice Sheet is unstoppable, the UN is debating whether to ban killer robots and the U.S. and Russia are going 1980s-retro with nuclear war games. What should worry us the most? Here's what two experts think.

The online academic publication The Conversation organized a public question-and-answer session on Reddit, in which Anders Sandberg and Andrew Snyder-Beattie, researchers at the Future of Humanity Institute at Oxford University, explored the existential risks humanity faces and how we could reduce them. Here are some highlights:

What do you think poses the greatest threat to humanity?

Sandberg: Natural risks are far smaller than human-caused risks. The typical mammalian species lasts for a few million years, which means that extinction risk is on the order of one in a million per year. Just looking at nuclear war, where we have had at least one close call in 69 years (the Cuban Missile Crisis), gives a risk many times higher. Of course, nuclear war might not be 100% extinction-causing, but even if we agree it has just a 10% or 1% chance of that, it is still way above the natural extinction rate. [A rough version of this calculation is sketched after this answer.]

Nuclear war is still the biggest direct threat, but I expect biotechnology-related threats to increase in the near future (cheap DNA synthesis, big databases of pathogens, at least some crazies and misanthropes). Further down the line, nanotechnology (not grey goo, but "smart poisons" and superfast arms races) and artificial intelligence might be really risky.

The core problem is a lot of overconfidence. When people are overconfident they make more stupid decisions, ignore countervailing evidence and set up policies that increase risk. So in a sense the greatest threat is human stupidity.
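To make Sandberg's comparison concrete, here is a rough back-of-envelope sketch in Python. The one-in-a-million natural rate, the one-close-call-in-69-years figure and the 1% extinction-given-war number come from his answer; the 10% chance that a close call escalates into a full exchange is purely a placeholder assumption for illustration.

```python
# Back-of-envelope comparison of natural vs. nuclear extinction risk,
# following the arithmetic in Sandberg's answer. Values marked ASSUMPTION
# are illustrative placeholders, not figures he gives.

natural_risk_per_year = 1e-6        # "on the order of one in a million per year"

close_calls_per_year = 1 / 69       # at least one close call (Cuban Missile Crisis) in 69 years
p_escalation = 0.10                 # ASSUMPTION: chance a close call turns into a full exchange
p_extinction_given_war = 0.01       # Sandberg's low-end 1% figure

nuclear_risk_per_year = close_calls_per_year * p_escalation * p_extinction_given_war

print(f"natural extinction risk: {natural_risk_per_year:.1e} per year")
print(f"nuclear extinction risk: {nuclear_risk_per_year:.1e} per year")
print(f"ratio: roughly {nuclear_risk_per_year / natural_risk_per_year:.0f}x the natural rate")
```

Even with the low-end 1% figure and a modest escalation assumption, the nuclear estimate lands an order of magnitude above the natural baseline, which is the point Sandberg is making.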

In the near future, what do you think the risk is that an influenza strain (with high infectivity and lethality) of animal origin will mutate and begin to pass from human to human (rather than only animal to human), causing a pandemic?

Snyder-Beattie: Low probability. Some models we have been discussing suggest that a flu that kills one-third of the population would occur once every 10,000 years or so.

Pathogens face the same tradeoffs any parasite does. If the disease has a high lethality, it typically kills its host too quickly to spread very far. Selection pressure for pathogens therefore creates an inverse relationship between infectivity and lethality.

This inverse relationship is a byproduct of evolution, though; there is no law of physics that prevents such a disease. That is why engineered pathogens are of particular concern.
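For context on what "once every 10,000 years" means as an annual figure, here is a small conversion sketch. It assumes such pandemics can be treated as a Poisson process, which is a simplification of this sketch, not something Snyder-Beattie states.

```python
import math

# Convert a "once every 10,000 years" return period into annual and
# per-century probabilities, assuming (as a modeling simplification)
# that such pandemics arrive as a Poisson process.

return_period_years = 10_000
rate_per_year = 1 / return_period_years            # ~1e-4 expected events per year

p_in_one_year = 1 - math.exp(-rate_per_year)       # ~0.01% chance in any given year
p_in_century = 1 - math.exp(-100 * rate_per_year)  # ~1% chance over a century

print(f"annual probability:      {p_in_one_year:.4%}")
print(f"probability per century: {p_in_century:.2%}")
```

That works out to roughly a 1% chance per century, small but not negligible given the stakes, which is consistent with Snyder-Beattie calling it low probability rather than impossible.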

Which existential risk do you think we are under-investing in and why?

Snyder-Beattie: All of them. The reason we under-invest in countering them is that reducing existential risk is an intergenerational public good. Humans are bad at accounting for the welfare of future generations.

In some cases, such as possible existential risks from artificial intelligence, the underinvestment problem is compounded by people failing to take the risks seriously at all. In other cases, like biotechnology, people confuse risk with likelihood. Extremely unlikely events are still worth studying and preventing, simply because the stakes are so high.
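Snyder-Beattie's point that likelihood alone is not risk is essentially an expected-value argument. The sketch below illustrates it with entirely hypothetical numbers; the probabilities and the lives-at-stake figures are invented for illustration, not taken from the interview.

```python
# Risk ~ probability x stakes. Both scenarios below use made-up numbers
# purely to show why a very unlikely event can still dominate in expectation.

scenarios = {
    # name: (annual probability, lives at stake)
    "regional disaster":         (1e-2, 1e5),  # fairly likely, limited stakes
    "engineered-pathogen event": (1e-5, 7e9),  # very unlikely, civilization-scale stakes
}

for name, (prob, stakes) in scenarios.items():
    expected_loss = prob * stakes
    print(f"{name:28s} expected annual loss ~ {expected_loss:,.0f} lives")
```

Even before counting future generations, the far less likely event carries the larger expected loss, which is why "extremely unlikely" does not mean "not worth preventing."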

What gives you the most hope for humanity?

Sandberg: The overall wealth of humanity...has grown exponentially over the past ~3,000 years, despite the fall of the Roman Empire, the Black Death and World War II. Just because we also mess things up doesn't mean we lack [the] ability to solve really tricky and nasty problems again and again.

Snyder-Beattie: Imagination. We're able to use symbols and language to create and envision things that our ancestors would have never dreamed possible.