One way to see the problem is that, in the past, we used frugality as a hard-to-fake signal of altruism.
Agree.
Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other signals I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more than just add signals. Other suggestions of things to do:
Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on ways to test for, or look for evidence of, altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should draw on feedback from peers and direct reports.
Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to suspect, is a bad person, or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, since we can pay them for impact. I think there is a case for being heavily risk-averse here and avoiding hiring or funding such people.
Accountability mechanisms. Top example: external impact reviews of organisations. These could provide a way to check for and discourage corruption, excess, or uncooperativeness. Maybe an EA whistleblowing system (though maybe not needed). Maybe more accountability checks and feedback for individuals in senior roles at EA orgs (I'm less sure about this, as it can backfire).
So far the community seems to be doing well. Yet EA is gaining resources and power, and power has been known to corrupt. So let's make sure we build in mechanisms so that doesn't happen to our wonderful community. (Thanks to others in discussion for these ideas.)
Random, but in the early days of YC they said they had a "no assholes" rule, which meant they'd try not to accept founders who seemed like assholes, even if they thought those founders might succeed, because of the negative externalities on the community.
Seems like a great rule. Do you know why they don't have this rule anymore? (One plausible reason: the larger your community gets, the harder such a rule is to implement, which would mean it is no longer feasible for the EA community.)
Hey, do you happen to know me in real life, and would you be willing to talk about these issues offline?
I'm asking because it seems unlikely you will be able to be more specific publicly (though it would be good if you could, and would just write it here), so it would be good to talk about the specific examples or perceptions in a private setting.
I know someone who went to EAG who is fairly skeptical and looks for these things, and they didn't see much bad behaviour at all.
(A caveat: selection effects are a big thing, and one person might miss these bad actors for various idiosyncratic reasons.)
But I'm really skeptical that there are major issues, and in the absence of substantive issues (which, by the way, don't need hard data to establish), it seems negative EV to generate a lot of concern or use alarming language.
One issue is that these problems are self-fulfilling: start pointing at bad actors in a vague way, and you'll find you start losing the benefits of the community. As long as these people don't enter senior levels or community-building roles, you're pretty good.
Another issue is that trust networks are how these issues are normally solved, and yet there's pressure to open up these networks, which runs into the teeth of these issues.
To be clear, I'm saying that this funding and trust problem is probably already being worked on. Making a lot of noise about the issue, poking the elephant, or spreading bad vibes without substantiation can be net negative.
Thank you for the comment. I edited out the bit you were concerned about as that seemed to be the quickest/easiest solution here. Let me know if you want more changes. (Feel free to edit / remove your post too.)
Hi, this is really thoughtful. In the spirit of being consistent with the actions in your reply, and following your lead, I edited my post.
However, I didn't intend to cause an edit to this thread, and I especially did not intend to undo discussion.
It seems more communication is good.
It seems like raising the issue is good, as long as that is balanced with good judgement and proportionate actions and beliefs. Trying to understand and substantiate, or explore, the issues seems like a good move.