[Disclaimer: I used to be the Executive Director of the Foundational Research Institute, and currently work at the Future of Humanity Institute, both of which you mention in your post. Views are my own.]
Thank you so much for writing this! I wish I could triple-upvote this post. It seems to fit very well with some thoughts and unarticulated frustrations I’ve had for a while. This doesn’t mean I agree with everything in the OP, but I feel excited about conversations it might start. I might add some more specific comments over the next few days.
[FWIW, I’m coming roughly from a place of believing that (i) at least some of the central ‘ideological tenets’ of EA are conducive to the community causing good outcomes, and (ii) the overall ideological and social package of EA makes me more optimistic about the EA community causing good outcomes per member than about any other major existing social and ideological package. However, I think these are messy empirical questions we are ultimately clueless about. And I do share the sense that in at least some conversations within the community it isn’t acknowledged that these are debatable questions, and that the community’s trajectory is being and will be affected by these implicit “ideological” foundations. (Even though I probably wouldn’t have chosen the term “ideology”.)
I do think that an awareness of EA’s implicit ideological tenets sometimes points to marginal improvements I’d like the community to make. This is particularly true for investigating a broader range of potential long-termist cause areas, including ones that don’t have to do with emerging technologies. I also suspect that qualitative methodologies from the social sciences and humanities are currently being underused; e.g., I’d be very excited to see carefully conducted interviews with AI researchers and certain government staff on several topics.
Of course, all of this reflects the fact that I’m approaching these questions from a sufficiently outcome-oriented ethical framework.
My perception is also that, within the social networks most tightly clustered around the major EA organizations in Oxford and the Bay Area, awareness of the contingent “ideological” foundations you point to is more common than one might expect based on published texts. As a random example, I know of one person working at GPI who described themselves as a dualist, and I’ve definitely seen discussions around “What if certain religious views are true?”—in fact, I’ve seen many more discussions of that kind than in other mostly secular contexts and communities I’m familiar with.]
You can! :P Click and hold for “strong upvote.”
I already did this. I was implicitly treating the “strong upvote” as a “double upvote”, and was trying to say that I wish I could upvote this post even more strongly than a “strong upvote” allows. But thanks for letting me know, and sorry this wasn’t clear. :)