1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
I’m not sure that’s an entirely bad thing, because frugality seems like a mixed virtue. For example, it can lead to:
Not spending money on clearly worthwhile things (e.g. not paying for a larger table at a student fair even when it would result in more sign-ups; not getting a cleaner when you earn over $50/hour), which in turn can also make us seem not serious about maximising impact (e.g. this comment).
Even worse, getting distracted from the top priority by worrying about efforts to save relatively small amounts of money. Or not considering high-upside projects that require a lot of resources but have a good chance of failure, due to a fear of not being able to justify the spending.
Feelings of guilt around spending and not being perfectly altruistic, which can lead to burnout.
Filtering out people who want a normal middle-class lifestyle and family but could have had a big impact (and who go work at FAANG instead). Filtering out people from low-income backgrounds or with dependents.
However, we need new hard-to-fake signals of seriousness to replace frugality. I’m not sure what these should be, but here are some alternative things we could try to signal, which seem closer to what we most care about:
That we nerd out hard about doing good.
Intense focus on the top priority.
Doing high upside things even if there’s a good chance they might not work out and seem unconventional.
Giving 10% (or more), which is compatible with non-frugality.
The difficulty is thinking of hard-to-fake and easy-to-explain ways to show we’re into these.
2) Another way to see the problem is that in the past we’ve used the following idea to get people into EA: “you can save a life for a few thousand dollars and should maximise your donations to that cause”. But this idea is obviously in tension with the activities that many see as the top priorities these days (e.g. wanting to convince top computer scientists to work on the AI alignment problem).
My view is that we should try to move past this way of introducing effective altruism, and instead focus more on ideas like:
Let’s do the most we can to tackle big, neglected global problems. (I’d probably start by introducing climate change and/or pandemics rather than global health.)
Find high-upside projects that help tackle the biggest bottlenecks in those problems.
If you want to do good, do it effectively, and focus on the highest-leverage ways you can help (but ~no-one is perfectly altruistic and it’s fine to have a nice life too).
One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism
Agree.
Fully agree we need new hard-to-fake signals. Ben’s list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increasing the signals. Other suggestions of things to do are:
Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to expect, is a bad person or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case for being heavily risk-averse here and avoiding hiring or funding such people.
Accountability mechanisms. Top example: external impact reviews of organisations. These could provide a way to check for and discourage corruption, excess, or un-cooperativeness. Maybe an EA whistleblowing system (but maybe not needed). Maybe more accountability checking and feedback for individuals in senior roles in EA orgs (not so sure about this, as it can backfire).
So far the community seems to be doing well. Yet EA is gaining resources and power, and power has been known to corrupt. So let’s make sure we build in mechanisms so that doesn’t happen to our wonderful community.
(Thanks to others in discussion for these ideas)
Random, but in the early days of YC they said they used to have a “no assholes” rule, which meant they’d try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.
Seems like a great rule. Do you know why they don’t have this rule anymore? (One plausible reason: the larger your community gets, the harder such a rule is to implement, which would mean this wouldn’t be feasible for the EA community anymore.)
Hey, do you happen to know me in real life and would be willing to talk about these issues offline?
I’m asking because it seems unlikely you will be able to be more specific publicly (though it would be good if you were and just wrote it here), so it would be good to talk about the specific examples or perceptions in a private setting.
I know someone who went to EAG who is sort of skeptical and looks for these things, but they didn’t see a lot of bad things at all.
(Now, a caveat is that selection is a big thing. Maybe a person might miss these people for various idiosyncratic factors).
But I’m really skeptical about major issues, and in the absence of substantive issues (which, by the way, don’t need hard data to establish), it seems negative EV to generate a lot of concern or use strong language.
One issue is that problems are self-fulfilling: if you start pointing at bad actors in a vague way, you’ll find that you start losing the benefits of the community. As long as these people don’t enter senior levels or community-building roles, you’re pretty good.
Another issue is that trust networks are how these issues are normally solved, and yet there’s pressure to open these networks, which runs into the teeth of these issues.
To be clear, I’m saying that this funding and trust problem is probably being worked on. Having a lot of noise about this issue, or people poking the elephant, or just having bad vibes without substantiation, can be net negative.
Thank you for the comment. I edited out the bit you were concerned about as that seemed to be the quickest/easiest solution here. Let me know if you want more changes. (Feel free to edit / remove your post too.)
Hi, this is really thoughtful. In the spirit of being consistent with the actions in your reply, and following your lead, I edited my post.
However, I didn’t intend to create an edit to this thread and I especially did not intend to undo discussion.
It seems more communication is good.
It seems like raising the issue is good, as long as that is balanced with good judgement and proportionate actions and beliefs. It seems like a good action is to understand and substantiate or explore the issues.
Part of me is a bit sad that community building is now a comfortable and status-y option. The previous generation of community builders had a really high proportion of people who cared deeply about these ideas, were willing to take weird ideas seriously, and often took a substantial financial/career-security hit.
I don’t think this applies to most of the current generation of community builders to the same degree, and it just seems like much more of a mixed bag people-wise. To be clear, I still think this is good on the margin; I just trust the median new community builder a lot less (by default).
Interesting! I work in CB full-time (Director of EA Germany), and my impression is still that it’s challenging work, pays less than what I and my peers would earn elsewhere, and that most CB roles still have a lot less status than e.g. being a researcher who gets invited to give talks etc.
Do you think some CBs are motivated by money or status? What makes you think so? I’m genuinely curious (though no worries if you don’t feel like elaborating).
I think I am mostly comparing my impression of the landscape a few years ago with today’s landscape.
I am mostly talking about uni groups (I know less about how status-y city groups are), but there were certainly a few people putting in a lot of hours for zero money and not much recognition from the community for just how valuable their work was. I don’t want to name the specific people I have in mind, but some of them now work at top EA orgs or are doing other interesting things and have status now. I just think it was hard for them to know that this is how it would pan out, so I’m pretty confident they are not particularly status-motivated.
I’m also pretty confident that most community builders I know wouldn’t be doing their job on minimum wage even if they thought it was the most impactful thing they could do. That’s probably fine; I just think they are less ‘hardcore’ than I would like.
Also, being status-motivated is not necessarily a bad thing. I’m confused about this, but it’s plausibly good for the movement to have lots of status-motivated people, to the degree that we can make status track the right stuff. I am sure that part of why I am less excited about these people is a vibes thing that isn’t tracking impact.
Something I like about “Doing high upside things even if there’s a good chance they might not work out and seem unconventional” as a mark of seriousness is that it’s its own form of sacrifice: being willing to look weird and fail and give up on full security and job comfort and do something hard because it’s positive EV.
In your list of new hard-to-fake signals of seriousness, I like:
Doing high upside things even if there’s a good chance they might not work out and seem unconventional.
I think this is underrated. As a community, we overemphasise actually achieving things in the real world, meaning that if you want to get ahead within EA it often pays to do the middling but reasonable thing over the super-high-EV thing, as the weird super-high-EV thing probably won’t work.
I’m much more excited when I meet young people who keep trying a bunch of things that seem plausibly very high value and give them lots of information, relative to people who did some ok-ish things that let them build a track record/status. FWIW, I think that some senior EAs do track these high-EV, high-risk things really well, but maybe the general perception of what people ought to do is too close to that of the non-EA world.
I would expect detrimental effects if nerding out became even more of a paid-attention-to signal. It’s something you can do endlessly without ever helping a person. But maybe you just mean “successfully making valuable intellectual contributions”, in which case I agree.
Agreed. There seems to be what I can best call an intellectual aesthetic that drives about half of the instances of “nerding out” that I observe in the [East] Bay Area. The contrast between the Bay Area attitude and the Oxford attitude, the latter of which I guess applies to Ben Todd, has continually surprised me, and this variable of location may be dispositive as to whether “nerding out” is evidence of desirable character.