Thanks for the post. Here are some comments (I am confident there is considerable overlap with the other comments, but I have not read them):
What was done well:
Willingness to challenge EA ideas in order to better understand them and improve them.
Points to possibly neglected topics in longtermism (e.g. mitigation of very bad outcomes).
Sections “What would convince me otherwise”.
Good arguments for why it is uncertain whether the long-term future will be good/bad.
What could be improved:
“While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks”.
What about x-risks which do not involve extinction? For example, decreasing s-risk would decrease the likelihood of a future with large “misery”.
Sections “I do not think humanity is inherently super awesome” and “I am unsure whether the future will be better than today”.
Longtermism only requires that most of the expected value of our actions is in the future. It does not rely on predictions about how good the future will be.
Section “The length of the long-term future”.
Similarly, given the uncertainty about the length of the long-term future (Toby Ord guesses in The Precipice there is a “one in two chance that humanity avoids every existential catastrophe and eventually fulfils its potential”), most of the expected value should concern the long term.
Explicit expected value calculations could overestimate the importance of the long term. However, a more accurate Bayesian approach could still favour the long term as long as the prior is not unreasonably narrow.
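As a rough illustration of these two points (the numbers are purely hypothetical assumptions of mine, not taken from the post, and the Bayesian step uses a simple normal-normal model):
If there is a 1/2 chance the future contains 10^6 times as much value as the next century and a 1/2 chance it contains about the same, the expected value is 0.5 × 10^6 + 0.5 × 1 ≈ 5 × 10^5, over 99.999 % of which comes from the long branch, so the long term dominates even under coin-flip survival odds.
For the Bayesian adjustment, shrinking a noisy estimate x with standard deviation σ_x towards a prior with mean μ_0 and standard deviation σ_0 gives a posterior mean of (μ_0/σ_0^2 + x/σ_x^2)/(1/σ_0^2 + 1/σ_x^2). With x = 1,000 and σ_x = 1,000 (the long term estimated, very noisily, to be 1,000 times as valuable as the near term), a broad prior (μ_0 = 0, σ_0 = 1,000) gives a posterior of 500, which still strongly favours the long term, whereas an unreasonably narrow prior (μ_0 = 1, σ_0 = 1) gives roughly 1, discounting the estimate almost entirely.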
Section “The ability to influence the long-term future”.
The concept of s-risk could be mentioned here.
https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates#The_table