Thanks for reading the post and for the summary. I find the summary quite accurate, but here are a few notes:
Point 1: I’m not trying to argue that WAW is a more pressing cause area, but rather that IF my estimates are somewhat correct, then WAW and x-risk prevention are roughly similar in priority. I also wanted to present a possible method (the “main formula”) that could be used universally to compare longtermist causes. (Sidenote: I actually considered framing the post as a comparison of “ensuring a future” to “making the future better”.)
Point 3: I agree, except for the use of the term “existential risk”. As I see it (for the most part), suffering-risk prevention is about reducing future suffering, while existential-risk prevention is about ensuring that the (hopefully net-positive) future happens at all, even though some areas (e.g. AI alignment) relate to both. So I think “existential risk” should be changed to “suffering risks”.
The prioritization of what’s most important might be a bit off (which I guess is to be expected with AI). E.g. I think “recommendations on how to use this information” should take up a larger share of a summary of that size, given its higher importance. But that’s debatable.
I’m not sure if this is relevant, but I don’t think the summary fits the target audience very well. E.g. “Both x-risk prevention and wild animal welfare are highly neglected areas compared to their importance” is probably not new information for most people reading this. This might be fixable by giving the AI more precise instructions.