Good posts generate a lot of positive externalities, which means they’re undersupplied, especially by people who are busy and don’t get many direct rewards from posting. How do we fix that? What are rewards relevant authors would find meaningful?
Here are some possibilities off the top of my head, with some commentary. My likes are not universal and I hope the comments include people with different utility functions.
Money. Always a classic, rarely disliked although not always prioritized. I’m pretty sure this is why LTFF and EAIF are writing more now.
Appreciation (broad). Some people love these. I definitely prefer getting them over not getting them, but they’re not that motivating for me. Their biggest impact on motivation is probably cushioning the blow of negative comments.
Appreciation (specific). Things like “this led to me getting my iron tested” or “I changed my mind based on X”. I love these, they’re far more impactful than generic appreciation.
High-quality criticism that changes my mind.
Other people arguing with bad commenters.
One of the hardest parts of writing for me is getting a shitty, hostile comment, and feeling like my choices are “let it stand” or “get sucked into a miserable argument that will accomplish nothing”. Commenters arguing with commenters gets me out of this dilemma, which is already great, but then sometimes the commenters display deep understanding of the thing I wrote and that’s maybe my favorite feeling.
Deliberately not included: longer-term rewards like reputation that can translate into jobs, employees, etc. I’m specifically looking for quick rewards for specific posts.
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP’s Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had the capacity / could afford to do so. Our capacity is entirely bottlenecked on funding, and since we are ~entirely reliant on paid commissions (we don’t receive any grants for general support), time spent publishing reports is basically just pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
Followup posts about the survey reported here about how many people have heard of EA, to further discuss people’s attitudes towards EA, and where members of the general public hear about EA (this differs systematically)
Updated numbers on the growth of the EA community (2020-2022) extending this method and also looking at numbers of highly engaged longtermists specifically
Several studies we ran to develop reliable measures of how positively inclined towards longtermism people are, looking at different predictors of support for longtermism and how these vary across the population
Reports on differences between neartermists and longtermists within the EA community and on how neartermist / longtermist efforts influence each other (e.g. to what extent does neartermist outreach, like GiveWell or Peter Singer’s articles about poverty, lead to increased numbers of longtermists)
Whether the age at which one first engaged with EA predicts lower / higher future engagement with EA
A significant dynamic here is that even where we are paid to complete research for particular orgs, we are not funded for the extra time it would take to write up and publish the results for the community. So doing so is usually unaffordable, even where we have staff capacity.
Of course, much of our privately commissioned research is private, such that we couldn’t post it. But there are also significant amounts of research that we would want to conduct independently, so that we could publish it, which we can’t do purely due to lack of funding. This includes:
More message testing research related to EA /longtermism (for an example see Will MacAskill’s comment referencing our work here), including but not limited to:
Testing the effectiveness of specific arguments for these causes
Testing how “longtermist” or “existential risk” or “effective altruist” or “global priorities” framings/brandings compare in terms of how people respond to them (including comparing these to just advocating for specific concrete x-risks without any broader framing)
Testing the effectiveness of different approaches to outreach in different populations for AI safety / particular policies
“We want to publish but can’t because the time isn’t paid for” seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.
To be totally honest, I have qualms about the specific projects you mention: they seem centered on social reality, not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
“We want to publish but can’t because the time isn’t paid for” seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it’s a crisply defined chunk of work with clear outcomes.
Thanks! I’m planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
A lot of people seem to assume that our projects already are fully funded or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate
they seem centered on social reality, not objective reality. But I value a lot of RP’s other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren’t enough to override the general principle.
I’d be curious to understand this line of thinking better if you have time to elaborate. “Social” vs “objective” doesn’t seem like a natural and action-guiding distinction to me. For example:
Does everyone we want to influence hate EA post-FTX?
Is outreach based on “longtermism” or “existential risk” or principles-based effective altruism or specific concrete causes more effective at convincing people?
Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) “social” questions. I could understand concerns about too much meta but too much “social” seems harder to understand.[1]
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don’t think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like “does everyone on elite campuses hate EA?” are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.
- Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).
- And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held etc.).
- The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)) so some more empirical investigation of them seems warranted.
Yeah, “objective” wasn’t a great word choice there. I went back and forth between “objective”, “object”, and “object-level”, and probably made the wrong call. I agree there is an objective answer to “what percentage of people think positively of malaria nets?” but view it as importantly different than “what is the impact of nets on the spread of malaria?”
I agree the right amount of social meta-investigation is >0. I’m currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that’s true, professionalizing the investigation may be an improvement. My qualms here don’t rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more on what in particular made me uncomfortable. I wouldn’t be writing these if you hadn’t asked and if I hadn’t just called for money for the project of writing them up, and if I were, I’d be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don’t think they’re your fault in particular:
several of these questions feel like they don’t cut reality at the joints, and would render important facets invisible. These were quick summaries so it’s not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
several of your questions revolve around growth; I think EA’s emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
I especially think CEA’s emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo’s here.
I don’t believe EA knows what to do with the people it recruits, and should stop worrying about recruiting until that problem is resolved.
Asking “do people introduced to EA younger stick around longer?” has an implicit frame that longer is better, and is missing follow-ups like “is it good for them? what’s the counterfactual for the world?”
I think we need to be a bit careful with this, as I saw many highly upvoted posts that in my opinion have been actively harmful. Some very clear examples:
Theses on Sleep, claiming that sleep is not that important. I know at least one person who tried to sleep 6 hours/day for a few weeks after reading this, with predictable results.
A Chemical Hunger, “a series by the authors of the blog Slime Mold Time Mold (SMTM) that has been received positively on LessWrong, argues that the obesity epidemic is entirely caused by environmental contaminants.” It wouldn’t surprise me if it caused several people to update their diets in worse ways, or in general have a worse model of obesity.
In general, I think we should promote more posts like “Veg*ns should take B12 supplements, according to nearly-unanimous expert consensus” while not promoting posts like “Veg*nism entails health tradeoffs”, when there is no scientific evidence of this and expert consensus to the contrary. (I understand that your intention was not to claim that a vegan diet was worse than an average non-vegan diet, but that’s how most readers I’ve spoken to updated in response to your posts.)
I would be very excited about encouraging posts that broadcast knowledge where there is expert consensus that is widely neglected (e.g. Veg*ns should take B12 supplements), but I think it can also be very easy to overvalue hard-to-measure benefits, and we should keep in mind that the vast majority of posts get forgotten after a few days.
I think you are incorrectly conflating being mistaken and being “actively harmful” (what does actively mean here?) I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
Truth-seeking is a long game that is mostly about people exploring ideas, not about people trying to minimize false beliefs at each individual moment.
I think you are incorrectly conflating being mistaken and being “actively harmful”
That’s a fair point. I listed posts that were clearly not only mistaken but also harmful, to highlight that the cost-benefit analysis of “good posts” as a category is very non-obvious.
(what does actively mean here?)
I shouldn’t have used the term “actively”, I edited the comment.
I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
I fear there’s a very real risk of building castles in the sky, where interesting true information gets mixed with interesting not-so-true information and woven into a misleading narrative that causes bad consequences. I think this happens often, and we should be mindful of that.
I should have explicitly mentioned it, but I mostly agree with Elizabeth’s quick take. I just want to highlight that while some “good posts” “generate a lot of positive externalities”, many other “good posts” are wrong and harmful (and many many more get forgotten after a few days). I’m also probably more skeptical of hard-to-measure diffuse benefits without a clear theory of change or observable measures and feedback loops.