I think this is a serious question.
One big question is whether this would be viewed more as a “community membership” program or as a “directly impactful” intervention. I could imagine the two looking quite different from one another.
Personally, I’m more excited by the second, because it seems more scalable.
The way I see it, the “utilitarian intervention” version would be pretty intense and quite unlike almost all existing social programs, but I’d expect it to be effective:
1. “Fairness” is a tricky word. The main thing that matters is who’s expected to produce value.
2. Many of the most valuable people are not EAs. Identifying these people and giving them support would be part of the program. It could look like trying to find the highest-expected-value people globally, even those with minimal online presences.
3. There would be pretty strict/disciplined measures for evaluating which individuals would represent a “good deal”. This would mean ranking people, perhaps with explicit “predictions of impact” (see the toy sketch after this list).
4. Maybe there would be “insurance” options, to give people a feeling of stability (assuming this makes them more productive and risk-taking), even if the later help, taken in isolation, would be a net loss (for example, funding after retirement).
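To make point 3 a bit more concrete, here is a minimal toy sketch of what a “good deal” ranking could look like. Everything in it is a made-up illustration, not a proposed methodology: the names, the numbers, and the scoring formula (counterfactual expected impact per dollar of support) are all hypothetical assumptions of mine.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_impact: float   # subjective expected value created, in $M (made up)
    support_cost: float       # cost of the support package, in $M (made up)
    p_counterfactual: float   # chance the support actually changes their output (made up)

def good_deal_score(c: Candidate) -> float:
    """Toy score: counterfactual expected impact per dollar of support."""
    return (c.predicted_impact * c.p_counterfactual) / c.support_cost

# Hypothetical candidates; every number here is invented for illustration.
candidates = [
    Candidate("A", predicted_impact=10.0, support_cost=0.5, p_counterfactual=0.2),
    Candidate("B", predicted_impact=3.0, support_cost=0.1, p_counterfactual=0.5),
    Candidate("C", predicted_impact=50.0, support_cost=2.0, p_counterfactual=0.05),
]

# Rank candidates from best to worst deal under the toy score.
for c in sorted(candidates, key=good_deal_score, reverse=True):
    print(f"{c.name}: {good_deal_score(c):.1f}x expected impact per $ of support")
```

The point is just that any such program would need some explicit, if rough, scoring rule; a real version would also have to account for things like point 4’s insurance costs, which can be negative in isolation but positive overall.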
I guess in some ways, this would be a very elite social program, for a very specific definition of “elite”.
Back to the “community membership” variant: one great thing is that it could perhaps be mostly community-funded, without need of external funding. I imagine people in this camp would need to pay a lot of attention to identifying possible bad actors early and calling them out. It seems like a tough problem, but the solution space is large.
Another factor is that if people are willing to give up some privacy, then a lot of evaluation becomes easier, and gaming/abusing the system becomes harder.
Random comment: Do you or anyone else have any comments about the use of terminology with negative connotations, like “gatekeeping” or “elite”?
Background (unnecessary to read):
Basically I’ve been using the word “gatekeeping” a fair bit.
This word seems to be an accurate description of a principled, prosocial activity for creating functional teams or institutions. It includes activities that no one is surprised to see controlled, such as grantmaking.
To see this another way: someone somewhere (Party A) has given funding to achieve maximum impact for some cause or beneficiary (Party B), and we need people (Party C) to make this happen. We owe Parties A and B a lot, and that usually includes some sort of selection of, and control over, Party C.
Also, I think “gatekeeping” seems particularly important in the early stages of founding a cause area or set of initiatives, where such activity is necessary almost by definition. In these situations it seems less vulnerable to real or perceived abuse, or at least insularity. At the same time, it seems useful and virtuous to signpost and explain what the gatekeeping is and what its parameters and intentions are.
However, “gatekeeping” is basically a slur in common use.
Now, “elite” has the same problem (“elitism”). It too is an important, genuine, technical thing to consider and signpost, but it can also be associated with real or perceived misuse.
Maybe it’s tenable if I use just “gatekeeping”. But I worry that if I start passing around docs, posts, or comments filled with mentions of both “gatekeeping” and “elites”, plus terms of art from who knows where else (various disciplines, not just EA), it might offend or at least look insensitive.
I guess I could swap each word for another.
However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons; for example, it imposes cognitive/jargon costs on everyone.
I’m not sure if you have any thoughts, but I thought I’d write this up because it seems like one of those things that needs input from others.
I definitely think it’s important to pay attention to language when a simple substitution can avoid issues. Maybe it’d be better to use the word “evaluation” or “stewardship” rather than “gatekeeping”?
“High-impact” might also be a good substitute for “elite”.
I would suggest using contentious words only when substitutes would significantly impede communication or obscure the point being made, and otherwise being flexible.
Hi Ozzie,
This seems excellent and I learned a lot from this comment and your post.
I agree with the argument you’ve made about impact and its potential. Being much larger in scale seems important. It might even ease other types of giving into the community (because you might develop a competent, strong institution). It’s also impactful by design.
Also, as you suggest, finding very valuable, non-EA people to execute causes seems like a pure win [1].
Now, it happens that I have a grant from a major funder of EA longtermism projects. Relatedly, I am researching (or, really, just talking about) a financial aid project similar to what you described.
This isn’t approved or even requested by the grantmaker, but there seems to be some possibility it will happen (though not more than a 50% chance).
Your thoughts would be valuable and I might contact you.
I might copy and paste some content from the document into the above comment to get feedback and ideas.
[1] But finding and funding such people also seems difficult. My guess is that the people who do this well (e.g., Peter Thiel with the Thiel Fellowship) are established in related activities, or connected, to an extraordinary degree. My guess is that this activity of finding and choosing people is structurally similar to grantmaking, as done by e.g. GiveWell. I think that successive grantmakers for alternative causes in EA have had a mixed track record compared to the original. Maybe this is because the inputs are deceptively hard and somewhat illegible from the outside.
I’d be very keen to hear what you’re planning and to provide feedback.