Hey Rob, Thank you so much for your answer, it's really interesting to learn more about this. I understand that there are good reasons not to provide full reasoning transparency, but if these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement. Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly? So different values could be assigned to the same resources, like a theoretical comparison of two people taking the same job for two different organisations?
If these judgment calls are being made and underpin the work of CEA’s groups team, that seems very relevant for the EA movement.
I agree. We’re working on increasing transparency—expect to see more posts on this in the future.
Do I interpret your comment correctly, that the CEA groups team does have an internal qualitative ranking, but you are not able to share it publicly?
I’m not 100% clear what you mean here, so I’ve taken a few guesses and answered all of them.
Do we have a qualitative ranking of the grants we’ve made: No. We are interested in making the “fund/don’t fund” decision—and as such, a qualitative ranking of everyone we’ve funded doesn’t help us. We do have a list of the funding decisions we’ve made, and notes on the reasons why these decisions were made. These often involve qualitative judgements. We will sometimes look back at past decisions to help us calibrate.
Do we have a qualitative ranking of common actions community members might take: No. We don’t have an X such that we could say “<job category> is worth X% of <job category>, holding ‘ability’ constant” for common EA jobs. Plausibly we should have something like this, but even this would need to be applied carefully—as different organisations are bottlenecked by different things.
Do you have heuristics that help you compare different community building outcomes: Yes. These differ between our programs, as it depends on how a program is attempting to help. E.g., in virtual programs admissions, we aren’t able to evaluate applicants on outcomes, as for many participants, it is one of their first interactions with EA. As I mentioned above, I want us to increase transparency on this.
I also want to emphasise that an important component in our grantmaking is creating healthy intellectual scenes.