Nice to know, Rob! I have really liked the podcasts Carl did. You may want to link to Carl’s (great!) blog in your post too.
In general, I would be curious to know more about how Carl thinks about determining how many resources should go into each cause area, which I do not recall being discussed much in Carl’s 3 podcasts. Some potential segues:
Open Phil Should Allocate Most Neartermist Funding to Animal Welfare. Carl shared in the comments his thoughts on Rethink Priorities’ moral weight project.
Rethink Priorities’ CURVE sequence and cross-cause cost-effectiveness model.
How would Carl allocate Open Philanthropy’s funding? I am not sure how easy it would be to discuss this given Carl has been advising Open Phil, but I like your policy of letting guests decide which questions to answer.
Which areas are under- or overrated, and why?
Carl has knowledge about lots of topics, very much like Anders Sandberg, so I think the questions I suggested asking Anders are also good ones for Carl:
Should more resources be directed towards patient philanthropy at the margin? How much more/less?
What fraction of the expected effects of neartermist interventions (e.g. global health and development, and animal welfare) flows through longtermist considerations (e.g. long-term effects of changing population size, or expansion of the moral circle)? [Is it unclear whether the Against Malaria Foundation is better/worse than the Make-A-Wish Foundation, as argued in section 4.1 of Maximal cluelessness?]
What is the chance that the time of perils hypothesis is true (e.g. how does the existential risk this century compare to that over the next 1 billion years)? How can we get more evidence for/against it? This is relevant because, if existential risk is spread out over a long time, reducing existential risk this century has a negligible effect on total existential risk, as discussed by David Thorstad and illustrated in the first sketch after this list. [See also Rethink’s post on this question.]
How high is the chance of AGI lock-in this century?
What can we do to ensure a bright future if there are advanced aliens on or around Earth (Magnus Vinding’s thoughts)? More broadly, should humanity do anything differently due to the possibility of advanced civilisations which did not originate on Earth? [Another speculative question, which you covered a little in the podcast with Joe, is what we should do differently to improve the world if we are in a simulation.]
How much weight should one give to the XPT’s forecasts? The ones regarding nuclear extinction seem way too pessimistic to be accurate [the reasons I think this are in this thread]. Superforecasters and domain experts predicted a likelihood of nuclear extinction by 2100 of 0.074 % and 0.55 % respectively. My guess would be something like 10^-6 (10 % chance of a global nuclear war involving tens of detonations, 10 % chance of it escalating to thousands of detonations, and 0.01 % chance of that leading to extinction), in which case superforecasters would be off by around 3 orders of magnitude (see the second sketch below).
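On the Thorstad point, here is a minimal Python sketch of the argument as I understand it. All the numbers (1 % baseline risk per century, a 1 million year horizon, halving this century’s risk) are illustrative assumptions of mine, not Thorstad’s:

```python
# Minimal sketch (my own illustration): how much does halving this
# century's existential risk increase the chance of long-run survival?
def survival_prob(risk_now, risk_later, n_centuries):
    """Probability of surviving n_centuries, with a different risk
    in the first century than in all later ones."""
    return (1 - risk_now) * (1 - risk_later) ** (n_centuries - 1)

n = 10_000  # 1 million years = 10,000 centuries (assumption)
r = 0.01    # assumed baseline existential risk of 1 % per century

# If risk stays at r indefinitely ("spread out"), halving this
# century's risk buys almost nothing, because extinction at some
# point is near-certain either way.
gain_spread_out = survival_prob(r / 2, r, n) - survival_prob(r, r, n)

# Under the time of perils hypothesis, risk falls to ~0 after this
# century, so the same intervention buys ~0.5 percentage points.
gain_time_of_perils = survival_prob(r / 2, 0, n) - survival_prob(r, 0, n)

print(f"Gain if risk is spread out: {gain_spread_out:.1e}")     # ~1e-46
print(f"Gain under time of perils:  {gain_time_of_perils:.1e}")  # ~5e-3
```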
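And here is the arithmetic behind my nuclear extinction guess made explicit; the three conditional probabilities are just my guesses, not established estimates:

```python
import math

# My guesses for the chain of events needed for nuclear extinction by 2100.
p_war = 0.10         # global nuclear war with tens of detonations
p_escalation = 0.10  # escalation to thousands of detonations, given war
p_extinction = 1e-4  # extinction, given thousands of detonations

my_guess = p_war * p_escalation * p_extinction  # = 1e-6

superforecasters = 0.00074  # XPT superforecasters: 0.074 %
domain_experts = 0.0055     # XPT domain experts: 0.55 %

print(f"My guess: {my_guess:.0e}")
print(f"Superforecasters are {superforecasters / my_guess:.0f} times higher, "
      f"i.e. {math.log10(superforecasters / my_guess):.1f} orders of magnitude")
print(f"Domain experts are {domain_experts / my_guess:.0f} times higher, "
      f"i.e. {math.log10(domain_experts / my_guess):.1f} orders of magnitude")
```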