I definitely agree that funding is a significant factor for some institutional actors.
For example, RP’s Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had the capacity / could afford to do so: our capacity is entirely bottlenecked on funding, and since we are almost entirely reliant on paid commissions (we don’t receive any grants for general support), time spent publishing reports is essentially pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
Follow-up posts about the survey reported here on how many people have heard of EA, further discussing people’s attitudes towards EA and where members of the general public hear about EA (this differs systematically)
Updated numbers on the growth of the EA community (2020-2022), extending this method and also looking specifically at numbers of highly engaged longtermists
Several studies we ran to develop reliable measures of how positively inclined towards longtermism people are, looking at different predictors of support for longtermism and how these vary across the population
Reports on differences between neartermists and longtermists within the EA community, and on how neartermist and longtermist efforts influence each other (e.g. to what extent does neartermist outreach, like GiveWell or Peter Singer’s articles about poverty, lead to increased numbers of longtermists)
Whether the age at which one first engaged with EA predicts lower / higher future engagement with EA
A significant dynamic here is that even where we are paid to complete research for particular orgs, we are not funded for the extra time it would take to write up and publish the results for the community. So doing so is usually unaffordable, even where we have staff capacity.
Of course, much of our privately commissioned research is private, such that we couldn’t post it. But there are also significant amounts of research that we would want to conduct independently, so that we could publish it, which we can’t do purely due to lack of funding. This includes:
More message testing research related to EA / longtermism (for an example, see Will MacAskill’s comment referencing our work here), including but not limited to:
Testing the effectiveness of specific arguments for these causes
Testing how “longtermist” or “existential risk” or “effective altruist” or “global priorities” framings/brandings compare in terms of how people respond to them (including comparing this to just advocating for specific concrete x-risks without any such framing)
Testing the effectiveness of different approaches to outreach for AI safety / particular policies across different populations
“We want to publish but can’t because the time isn’t paid for” seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.
To be totally honest, I have qualms about the specific projects you mention; they seem centered on social reality, not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
“We want to publish but can’t because the time isn’t paid for” seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes.
Thanks! I’m planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
A lot of people seem to assume that our projects are already fully funded, or that they should be centrally funded because they seem very much like core community infrastructure, which reduces their inclination to donate
they seem centered on social reality, not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
I’d be curious to understand this line of thinking better if you have time to elaborate. “Social” vs “objective” doesn’t seem like a natural and action-guiding distinction to me. For example:
Does everyone we want to influence hate EA post-FTX?
Are people more convinced by outreach based on “longtermism”, “existential risk”, principles-based effective altruism, or specific concrete causes?
Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) “social” questions. I could understand concerns about too much meta, but too much “social” seems harder to understand.[1]
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don’t think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like “does everyone on elite campuses hate EA?” are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.
- Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).
- And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held etc.).
- The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)), so some more empirical investigation of them seems warranted.
Yeah, “objective” wasn’t a great word choice there. I went back and forth between “objective”, “object”, and “object-level”, and probably made the wrong call. I agree there is an objective answer to “what percentage of people think positively of malaria nets?” but view it as importantly different than “what is the impact of nets on the spread of malaria?”
I agree the right amount of social meta-investigation is >0. I’m currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that’s true, professionalizing the investigation may be an improvement. My qualms here don’t rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more about what in particular made me uncomfortable. I wouldn’t be writing these if you hadn’t asked and if I hadn’t just called for money for the project of writing them up, and if I were I’d be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don’t think they’re your fault in particular:
several of these questions feel like they don’t cut reality at the joints, and would render important facets invisible. These were quick summaries so it’s not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
several of your questions revolve around growth; I think EA’s emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
I especially think CEA’s emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo’s here.
I don’t believe EA knows what to do with the people it recruits, and should stop worrying about recruiting until that problem is resolved.
Asking “do people introduced to EA younger stick around longer?” has an implicit frame that longer is better, and is missing follow-ups like “is it good for them? what’s the counterfactual for the world?”