Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:
a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?
b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?
c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment “tried to happen” several times, and that these norms may be more fragile than we think.)
d. To what extent should existential risk be quietly managed by technocrats, vs treated as a popular issue that politicians talk about?
e. What is the relative priority of catastrophic vs existential risk reduction, and how much do these goals converge?
f. How tractable is reducing existential risk?
g. What is most needed: more innovation, or more theory/plans/coordination?
h. What do ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.
i. How to act when faced with small probabilities of extremely good or extremely bad outcomes.
j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can’t quickly find the strongest “you can’t put probabilities” argument, but here’s Anders Sandberg sub-Youtubing Deutsch)
k. Credence in moral realism.