I recently had a conversation with a local EA community builder. Like many local community builders, they got their funding from CEA. They told me that their continued funding was conditioned on scoring high on the metric of how many people they directed towards longtermism career paths.
If this is in fact how CEA operates, then I think this is bad, for the reasons described in this post. Even though I'm in AI safety, I value EA being about more than X-risk prevention.
Hey Linda,
I'm head of CEA's groups team. It is true that we care about career changes, and it is true that our funders care about career changes. However, it is not true that this is the only thing we care about. There are lots of other things we value: for example, grant recipients have started effective institutions, set up valuable partnerships, and engaged with public sector and philanthropic bodies. This list is not exhaustive! We also care about the welcomingness of groups, and we care about groups not using "polarizing techniques".
In terms of longtermist pressure: I have recently written a post about why we believe in a principles-first approach, rather than an explicitly longtermist route.
I have heard sentiments similar to Linda's from multiple sources, including some community builders, so I am wondering if there might be some miscommunication going on. Could you give some concrete examples to help clarify this? For example, how does the CEA groups team value one person going to work for a GiveWell top charity vs. one person going to work for a top AI charity?
Hey Miri,
Typically, unless someone is donating large amounts of money, we would interpret direct work as more valuable. But all of these things exist on a scale, and there is a qualitative part to the interpretation. With donations this is especially obvious, since it is very measurably true that some people are able to donate much more than others. There is also an element of this with careers: some people are able to have a huge impact with their careers, and others have a smaller impact (yet still large in absolute terms). Because there are a lot of sensitive, qualitative judgement calls, we can't provide full reasoning transparency.
Hey Rob,
Thank you so much for your answer, it's really interesting to learn more about this. I understand that there are good reasons not to provide full reasoning transparency, but if these judgment calls are being made and underpin the work of CEA's groups team, that seems very relevant for the EA movement. Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly? So different values could be assigned to the same resources, like a theoretical comparison of two people taking the same job for two different organisations?
If these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement.
I agree. We're working on increasing transparency—expect to see more posts on this in the future.
Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly?
I'm not 100% clear what you mean here, so I've taken a few guesses and answered all of them.
Do we have a qualitative ranking of the grants we've made: No. We are interested in making the "fund/don't fund" decision, and as such a qualitative ranking of everyone we've funded doesn't help us. We do have a list of the funding decisions we've made, and notes on the reasons why these decisions were made. These often involve qualitative judgements. We will sometimes look back at past decisions to help us calibrate.
Do we have a qualitative ranking of common actions community members might take: No. We don't have an X such that we could say "<job category A> is worth X% of <job category B>, holding 'ability' constant" for common EA jobs. Plausibly we should have something like this, but even then it would need to be applied carefully, as different organisations are bottlenecked by different things.
Do you have heuristics that help you compare different community building outcomes: Yes. These differ between our programs, as it depends on how a program is attempting to help. E.g., in virtual program admissions, we aren't able to assess applicants on outcomes, as for many participants it is one of their first interactions with EA. As I mentioned above, I want us to increase transparency on this.
I also want to emphasise that an important component in our grantmaking is creating healthy intellectual scenes.
Hey Rob,
I wonder if filling out something like the template I laid out in this post could allow transparency for the CEA groups team without disclosing confidential details.
In addition, if I were getting career-related information from a community builder whose future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best).
By analogy, let’s say I went to a military recruiter and talked to them at length about opportunities in various branches of the military. Even though they identified themselves as a generic military recruiter, they secretly only got credit for promotion if I decided to join the Navy. I would feel entitled to proactive disclosure of that information, and would feel misled if I got a pro-Navy pitch without such disclosure.
(I am not saying I would feel misled if the community builder were evaluated on getting people to make EA career choices more broadly. I think it’s pretty obvious that recruiting is part of the mission and that community builders may be evaluated on that. Likewise, I wouldn’t feel misled if the military recruiter didn’t tell me they were evaluated on how many people they recruited for the military as a whole.)
In addition, if I were getting career-related information from a community builder whose future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best).
As far as I know, this is exactly what is happening.