I considered Evan Williams’ paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.
If I were to write this summary again, I think I’d be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but which never made it into my notes) is on the importance of the speed of moral progress vs. the sustainability of continued moral progress. In “implementation of improved values”, the paper focuses a lot on the flexibility of setting up society to be able to make moral progress quickly, but naively I feel at least as worried that society can make anti-progress and do horrifyingly dumb and new things in the name of good. So I’d be really worried about trajectory changes for the worse, especially longer-lasting ones (“lock-in” is a phrase that’s in vogue these days).
I’ve also updated significantly on both the moral cost and the empirical probability of near-term extinction risks, and of course extinction is the archetypal existential risk that will dramatically curtail the value of the far future.
It feels weird getting my outline into the EA decade review, instead of the original paper, though I wouldn’t be very surprised if at this point more EAs have read my outline than the paper itself.
I vaguely feel like Williams should get a lot more credit than he has received for this paper. EA should give him a prize or something, maybe help him identify more impactful research areas, etc.