Just my two cents, but here's how valuable I think these forum posts would be:
Shelters MVP – 9/10
I’d be interested to read about this since you say it could be what OpenPhil spends its last longtermist dollar on. It’s also just something personally interesting to me, and I think other longtermist EAs would be interested in it too.
What are Good Humanities Research Ideas for Longtermism? – 8/10
I think it’d be good to get people with humanities backgrounds to do more research work on longtermism.
After the Apocalypse – 7/10
I think this overlaps quite a bit with the Shelters MVP post, so I initially ranked it 8/10. But I'm more interested in Shelters MVP as a way to protect people than in helping people get better at surviving in the wild or after a catastrophe, for which resources may already exist outside of EA.
How to Get Good at Forecasting – 6/10
I think a lot of EAs would be interested in this, myself included, but I think the value of the three posts above is higher, and that they're more neglected/unique. I presume that someone who wanted to learn how to get good at forecasting could more easily interview forecasters themselves than compile a bunch of research on any of the three topics above.
Moral Circle Expansion – 5/10
I'm skeptical that this post would change many people's views on Moral Circle Expansion, so I don't think it would have a lot of value; it might not be concrete or applicable enough.
I think I roughly agree with your ranking Brian!
Speaking for myself here, I'd be very interested in reading a more in-depth critique of Moral Circle Expansion, and I'm open to changing my mind on that topic. That said, I'm perhaps most interested in predictions on specific questions, like whether our descendants will care about the welfare of invertebrates and other wild animals, and (relatedly) whether sentience is likely to be the main determinant of moral concern in the future.
(Thanks Linch for a great post!)