Hi Rob,
Great to hear you are interviewing Anders! Some questions:
Should more resources be directed towards patient philanthropy at the margin? How much more/less?
How binary is longterm value? Relevant to the importance of the concept of existential risk.
Should the idea that more global warming might be good, as a way of mitigating the food shocks caused by abrupt sunlight reduction scenarios (ASRSs), be taken seriously? (Anders is on the board of ALLFED, and therefore knowledgeable about ASRSs.)
What fraction of the expected effects of neartermist interventions (e.g. global health and development, and animal welfare) flows through longtermist considerations (e.g. longterm effects of changing population size, or expansion of the moral circle)?
Under moral realism, are we confident that superintelligent artificial intelligence disempowering humans would be bad?
Should we be uncertain about whether saving lives is good/bad because of the meat-eater problem?
What is the chance that the time of perils hypothesis is true (e.g. how does the existential risk this century compare to that over the next 1 billion years)? How can we get more evidence for/against it? Relevant because, if existential risk is spread out over a long time, reducing existential risk this century has a negligible effect on total existential risk, as discussed by David Thorstad.
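To illustrate why this matters, here is a toy calculation in the spirit of Thorstad's argument. The 1 % per-century risk and the 1-billion-year horizon are illustrative numbers of my own, not claims about the actual risk level:

```python
import math

# Toy model: constant extinction risk r per century over N centuries
# (1 billion years is roughly 10^7 centuries). Both numbers are assumptions.
r = 0.01        # assumed 1 % extinction risk per century
N = 10_000_000  # assumed horizon in centuries

# If the risk persists, eliminating this century's risk raises the chance of
# surviving all N centuries by r*(1 - r)**(N - 1). Computed in log space,
# since the value underflows ordinary floats.
log10_gain = math.log10(r) + (N - 1) * math.log10(1 - r)
print(f"Persistent risk: gain in survival probability ~ 10^{log10_gain:.0f}")

# Under the time of perils hypothesis, risk after this century is ~0, so the
# same intervention buys the full r.
print(f"Time of perils: gain in survival probability ~ {r}")
```

Under persistent risk the gain is around 10^-43650, i.e. utterly negligible, whereas under the time of perils hypothesis it is the full 1 %.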
How high is the chance of AGI lock-in this century?
What can we do to ensure a bright future if there are advanced aliens on or around Earth (see Magnus Vinding’s thoughts)? More broadly, should humanity do anything differently due to the possibility of advanced civilisations which did not originate on Earth?
How much weight should one give to the XPT’s forecasts? The ones regarding nuclear extinction seem way too pessimistic to be accurate. Superforecasters and domain experts predicted a likelihood of nuclear extinction by 2100 of 0.074 % and 0.55 % respectively. My guess would be something like 10^-6 (10 % chance of a global nuclear war involving tens of detonations, 10 % chance of it escalating to thousands of detonations, and 0.01 % chance of that leading to extinction), in which case superforecasters would be off by 3 orders of magnitude.
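For transparency, here is the Fermi decomposition behind my guess. Every input is my own assumption, not data from the XPT:

```python
import math

# My assumed conditional probabilities for nuclear extinction by 2100.
p_war = 0.10         # global nuclear war involving tens of detonations
p_escalation = 0.10  # escalation to thousands of detonations, given a war
p_extinction = 1e-4  # extinction, given such an escalation (0.01 %)

p_guess = p_war * p_escalation * p_extinction
p_superforecasters = 7.4e-4  # XPT superforecasters' 0.074 %

print(f"My guess: {p_guess:.0e}")
ratio = p_superforecasters / p_guess
print(f"Superforecasters higher by a factor of {ratio:.0f} "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
```

This gives 10^-6, a factor of 740 (roughly 3 orders of magnitude) below the superforecasters’ estimate.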