“We want to publish but can’t because the time isn’t paid for” seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or mid-sized donors, because it’s a crisply defined chunk of work with clear outcomes.
Thanks! I’m planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
- I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source.
- A lot of people seem to assume that our projects already are fully funded, or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate.
They seem centered on social reality, not objective reality. But I value a lot of RP’s other work, I think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
I’d be curious to understand this line of thinking better if you have time to elaborate. “Social” vs “objective” doesn’t seem like a natural and action-guiding distinction to me. For example:
- Does everyone we want to influence hate EA post-FTX?
- Is outreach based on “longtermism”, “existential risk”, principles-based effective altruism, or specific concrete causes more effective?
- Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
- How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) “social” questions. I could understand concerns about too much meta, but too much “social” seems harder to understand.[1]
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don’t think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like “does everyone on elite campuses hate EA?” are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.
- Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).
- And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held, etc.).
- The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)) so some more empirical investigation of them seems warranted.
Yeah, “objective” wasn’t a great word choice there. I went back and forth between “objective”, “object”, and “object-level”, and probably made the wrong call. I agree there is an objective answer to “what percentage of people think positively of malaria nets?” but view it as importantly different from “what is the impact of nets on the spread of malaria?”
I agree the right amount of social meta-investigation is >0. I’m currently uncomfortable with how much EA thinks about itself and its presentation; but even if that concern is valid, professionalizing the investigation may be an improvement. My qualms here don’t rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more about what in particular made me uncomfortable. I wouldn’t be writing these if you hadn’t asked and if I hadn’t just called for money for the project of writing them up, and if I were, I’d be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don’t think they’re your fault in particular:
- Several of these questions feel like they don’t cut reality at the joints, and would render important facets invisible. These were quick summaries, so it’s not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
- Several of your questions revolve around growth; I think EA’s emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
- I especially think CEA’s emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo’s here.
- I don’t believe EA knows what to do with the people it recruits, and should stop worrying about recruiting until that problem is resolved.
- Asking “do people introduced to EA younger stick around longer?” has an implicit frame that longer is better, and is missing follow-ups like “is it good for them?” and “what’s the counterfactual for the world?”