I’m head of CEA’s groups team. It is true that we care about career changes, and it is true that our funders care about career changes. However, it is not true that this is the only thing we care about. There are lots of other things we value: for example, grant recipients have started effective institutions, set up valuable partnerships, and engaged with public-sector and philanthropic bodies. This list is not exhaustive! We also care about the welcomingness of groups, and we care about groups not using “polarizing techniques”.
In terms of longtermist pressure: I have recently written a post about why we believe in a principles-first approach, rather than an explicitly longtermist route.
I have heard similar sentiments to Linda’s from multiple sources, including some community builders, so I am wondering if there might be some miscommunication going on. Could you give some concrete examples to help clarify this? For example, how does the CEA groups team value one person going to work for a GiveWell top charity vs. one person going to work for a top AI charity?
Typically, unless someone is donating large amounts of money, we would interpret direct work as more valuable. But all of these things are on a scale, and there is a qualitative part to the interpretation. With donations this is especially obvious: it is very measurably true that some people are able to donate much more than others. There is also an element of this with careers, where some people are able to have a huge impact and others a smaller one (yet still large in absolute terms). Because there are a lot of sensitive, qualitative judgement calls, we can’t provide full reasoning transparency.
Hey Rob, thank you so much for your answer; it’s really interesting to learn more about this. I understand that there are good reasons not to provide full reasoning transparency, but if these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement. Do I interpret your comment correctly that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly? So different values could be assigned to the same resources, like a theoretical comparison of two people taking the same job at two different organisations?
If these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement.
I agree. We’re working on increasing transparency, so expect to see more posts on this in the future.
Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly?
I’m not 100% clear what you mean here, so I’ve taken a few guesses and answered all of them:
Do we have a qualitative ranking of the grants we’ve made: No. We are interested in making the “fund/don’t fund” decision, so a qualitative ranking among everyone we’ve funded doesn’t help us. We do have a list of the funding decisions we’ve made, along with notes on the reasons for those decisions, which often involve qualitative judgements. We will sometimes look back at past decisions to help us calibrate.
Do we have a qualitative ranking of common actions community members might take: No. We don’t have an X such that we could say “<job category> is worth X% of <another job category>, holding ‘ability’ constant” for common EA jobs. Plausibly we should have something like this, but even this would need to be applied carefully, as different organisations are bottlenecked by different things.
Do you have heuristics that help you compare different community building outcomes: Yes. These differ between our programs, as it depends on how a program is attempting to help. E.g., in virtual programs admissions, we aren’t able to assess applicants on outcomes, as for many participants it is one of their first interactions with EA. As I mentioned above, I want us to increase transparency on this.
I also want to emphasise that an important component in our grantmaking is creating healthy intellectual scenes.
I wonder if filling out something like the template I laid out in this post could allow transparency without disclosing confidential details for the CEA groups team.