Thanks for reading the post and for the summary. I find the summary quite accurate, but here are a few notes:
Point 1: I'm not arguing that WAW is a more pressing cause area, but rather that IF my estimates are roughly correct, then WAW and x-risk prevention are roughly similar in priority. I also wanted to present a possible method (the "main formula") that could be used universally to compare longtermist causes. (Sidenote: I actually considered framing the post as a comparison of "ensuring a future" with "making the future better".)
Point 3: I agree, except for the use of the term "existential risk". As I see it (for the most part), suffering-risk prevention aims to reduce future suffering, while existential risk prevention aims to ensure that the hopefully net-positive future happens at all, even though some areas may be relevant to both (e.g. AI alignment). So I think "existential risk" should be changed to "suffering-risks" here.
The prioritization of what's most important might be a bit off (which I guess is to be expected with AI). E.g. I think "recommendations on how to use this information" should take up a larger share of a summary of that size, given its higher importance. But I guess that's debatable.
IDK if this is relevant, but here we go: I don't think the summary fits the target audience very well. E.g. "Both x-risk prevention and wild animal welfare are highly neglected areas compared to their importance" is probably not new information for most people reading this. But this might be fixable by giving the AI more precise instructions.