If I could pick just one, it would be an assessment of existential risk conditional on some really major global catastrophe (e.g. something that kills 90% / 99% / 99.9% of the world’s population). I think this is crucial because: (i) for many of the proposed extinction risks (nuclear, asteroids, supervolcanoes, even bio), I find it very hard to see how they could directly kill literally everyone, but much easier to see how they could kill some very large proportion of the population; (ii) there’s been very little work done on evaluating how likely (or not) civilisation would be to rebound from a really major global catastrophe. (This is the main thing I know of.)
Ideally, I’d want the piece of research to be directed at a sceptic. Someone who said: “Even if 99.9% of the world’s population were killed, there would still be 7 million people left, approximately the number of hunter-gatherers prior to the Neolithic revolution. It didn’t take very long — given Earth-level timescales — for hunter-gatherers to develop agriculture and then industry. And the catastrophe survivors would have huge benefits compared to them: inherited knowledge, leftover technology, low-lying metals, open coal mines, etc. So we should think it’s very likely that, after a catastrophe of this scale, civilisation would still recover.”
And the best response to the sceptic, if one could be found, would be pointing to a challenge that post-catastrophe survivors would face that hunter-gatherers didn’t: e.g. a very different disease burden, or climate, or something.
I’m also just really in favour of Forum users trying to independently verify arguments made or endorsed by others in EA, or checking data that’s being widely used. For example, I thought Jeff Kaufman’s series on AI risk was excellent. And recently Ben Garfinkel has been trying to locate the sources of the global population numbers that underlie the ‘hyperbolic growth’ idea, and I’ve found that work important and helpful.
(In general, I think we can sometimes have a double standard where we will happily tear apart careful, widely-cited research done by people outside the community, but then place a lot of weight on ideas or arguments that have come from within the community, even if they haven’t gone through the equivalent of rigorous peer-review.)
If anyone decides to work on this, please feel free to contact me! There’s a small but non-negligible probability that I’ll work on this question myself, and if I don’t, I’d be happy to help out with some of the contacts I’ve made.