I think one would have to have a lot clearer view of the field / problems / potential to be able to do across-cause prioritization and prioritization in the context of differential technological progress in a meaningful way.
Hmm, I'm not sure I agree.
Or at least, I think I'd somewhat confidently disagree that the ideal project aimed at doing "across-cause prioritisation" and "prioritisation in the context of differential (technological) progress" would look like more of the same sort of work done in this post.
I'm not saying you're necessarily claiming that, but your comment could be read as either making that claim or as side-stepping that question.
To be clear, this is not to say I think this post was useless or doesn't help at all with those objectives!
I think the post is quite useful for within-cause prioritisation (which is another probably-useful goal), and somewhat useful for across-cause prioritisation.
Though maybe it's not useful for prioritisation in the context of differential progress.
I also really liked the post's structure and clarity, and would be likely to at least skim further work you produce on this topic.
But I think for basically any cause area that hasn't yet received much "across-cause prioritisation" research, I'd be at least somewhat, and maybe much, more excited about more of that than about more within-cause prioritisation research.
I explain my reasoning for a similar view in "Should marginal longtermist donations support fundamental or intervention research?"
And this cause area seems unusually prone to within-cause successes accidentally doing major harm (by driving harmful types of progress, technological or otherwise), so this is perhaps especially true here.
And I think the ideal project to do that for meta science would incorporate some components like what's done in this post, but also other components more explicitly focused on across-cause prioritisation, possible accidental harms, and differential progress.
(This may sound harsher than my actual views; I do think this post was a useful contribution.)