Thank you very much for the evidence about the funding. OpenPhil has caught up remarkably, and I expect many more donors to move towards longtermism in the future; GiveWell is excellent, but it remains a single source, and the risk that it decreases or doesn’t inject as much money as before remains, since it’s more difficult to secure funding when there is only one source of it.
I was indeed wrong to say that longtermism was the most funded area; however, I wouldn’t be surprised if this data changed very fast and the trend reversed next year, given the current push from the top and the halo effect around longtermism right now.
I don’t want to force myself, but as a community builder, I have to take the leap. Hence my need to understand better how I can get people on board with this.
I’m open to there being new evidence on funding, but I’d also want to make a distinction between existential risk and longtermism as reasons for funding. I could reject the ‘Astronomical Waste’ argument and still think that preventing the worst impacts of Nuclear War/Climate Change from affecting the current generation held massive moral value and deserved funding.
As for being a community builder, I don’t have experience there, but I guess I’d make some suggestions/distinctions:
If you have a co-director for the community in question who is more AI-focused, perhaps split responsibilities along cause area lines
Be open about your personal position (i.e. being unpersuaded about the value of AI risk), but separate that from your role as a community builder, where you introduce the various major cause areas (including AI) and present the arguments for and against them
I don’t think you should have to update or defer your own views in order to be a community builder at all, and I’d encourage you to hold on to that feeling of being unconvinced
Hope that helps! :)