A hypothetical example that I would view as asking for trust would be someone telling me not to join an organization, but not telling me why. Or claiming that another person shouldn’t be trusted, without giving details. I personally very rarely see folks do this. An organization doing something different and explaining its reasoning (e.g., giving feedback was not viewed as a good ROI) is not asking for trust.
Regarding why giving feedback at scale is hard: most of these positions have at best vague evaluation metrics, which usually bottom out in “help the organization achieve its goals.” Any specific criterion is very prone to being Goodharted. And the people who most need feedback are, in my experience, disproportionately likely to argue with you about it and make a stink to management. No need to trust me on this, just try out giving feedback at scale and see if it’s hard.
My admittedly limited understanding of the UK Civil Service suggests that it’s more amenable to quantification compared to GiveWell research analysts and Google software engineers. For example, if your job is working at the UK equivalent of a DMV, we could grade you based on number of customers served and some notion of error rate. That would seem pretty fair and somewhat hard to game. For a programmer, we could grade you based on tickets closed and bugs introduced, but in contrast to the DMV case, that is absolute trash as the sole metric (although it does carry some useful info).
Any specific criterion is very prone to being Goodharted.
I don’t think CEA should share specific criteria. I think they should give rejects brief, tentative suggestions of how to develop as an EA in ways that will strengthen their application next time. Growth mindset over fixed mindset. Even a completely generic “maybe you should get 80K advising” message for every reject would go a long way.
Earlier in this thread, I claimed that senior EAs put very little trust in junior EAs. The Goodharting discussion illustrates that well. The assumption is that if feedback is given, junior EAs will cynically game the system instead of using the feedback to grow in good faith. I’m sure a few junior EAs will cynically game the system, but if the “cynical system-gaming” people outweigh the “good faith career growth” people, we have much bigger problems than feedback. (And such an imbalance seems implausible in a movement focused on altruism.)
I’d argue that lack of feedback actually invites cynical system-gaming, because you’re not giving people anywhere productive to direct their energies. And operating in a low-trust regime invites cynicism in general.
And the people who most need feedback are in my experience disproportionately likely to argue with you about it and make a stink to management.
Make it clear you won’t go back and forth this way.
This post explains why giving feedback is so important. If 5 minutes of feedback prevents a reject from getting bummed out and leaving the EA movement, it could be well worthwhile. My intuition is that this happens quite a bit, and CEA just isn’t tracking it.
Re: making a stink—the person who’s made the biggest stink in EA history is probably Émile P. Torres. If you read the linked post, he seems to be in a cycle of: getting rejected, developing mental health issues from that, misbehaving due to mental health issues, then experiencing further rejections. (Again I refer you to the “Cost of Rejection” post—mental health issues from rejection seem common, and lack of feedback is a big factor. As you might’ve guessed by this point, I was rejected for some EA stuff, and the mental health impact was much larger and longer than I would’ve predicted in advance.)
I think we would prefer that rejects make a stink to management vs making a stink on social media. And 5 minutes of feedback to prevent someone from entering the same cycle Torres is in seems well worthwhile.
No need to trust me on this, just try out giving feedback at scale and see if it’s hard.
Again, I do have significant knowledge related to giving feedback at scale. It isn’t nearly as hard as people say if you do it the right way.
My admittedly limited understanding of the UK Civil Service suggests that it’s more amenable to quantification compared to GiveWell research analysts and Google software engineers. For example, if your job is working at the UK equivalent of a DMV, we could grade you based on number of customers served and some notion of error rate. That would seem pretty fair and somewhat hard to game. For a programmer, we could grade you based on tickets closed and bugs introduced, but in contrast to the DMV case, that is absolute trash as the sole metric (although it does carry some useful info).
This seems like a red herring? I assume anyone applying for an analyst position at GiveWell would be applying for a similar type of position at the Civil Service. White-collar work may be hard to quantify, but that doesn’t mean job performance can’t be evaluated. And I don’t see what evaluation of on-the-job performance has to do with our discussion.
I assume anyone applying for an analyst position at GiveWell would be applying for a similar type of position at the Civil Service.
My experience with government positions is that they are legally required to have relatively formulaic hiring criteria. A benefit of this is that it’s easy to give feedback: you just screenshot your rubric and say “here are the columns where you didn’t get enough points”.
So my guess is that even if there were literally the same position at GiveWell and the UK Civil Service, it would be substantially easier to give feedback for the Civil Service one (which of course doesn’t necessarily mean that GW shouldn’t give feedback, just that they are meaningfully different reference classes).
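As a minimal sketch of the rubric-based feedback described above: score each application against fixed criteria, then tell rejected applicants which criteria they fell short on. The criterion names and thresholds here are entirely hypothetical, not anything a real hiring process uses.

```python
# Hypothetical hiring rubric: minimum points needed per criterion.
RUBRIC = {
    "relevant experience": 3,
    "written communication": 2,
    "role-specific knowledge": 3,
}

def feedback_for(scores: dict[str, int]) -> str:
    """Return a short feedback message listing criteria below threshold."""
    shortfalls = [
        f"- {criterion}: scored {scores.get(criterion, 0)}, needed {minimum}"
        for criterion, minimum in RUBRIC.items()
        if scores.get(criterion, 0) < minimum
    ]
    if not shortfalls:
        return "You met every criterion on our rubric."
    return "Criteria where you didn't get enough points:\n" + "\n".join(shortfalls)

print(feedback_for({"relevant experience": 4,
                    "written communication": 1,
                    "role-specific knowledge": 2}))
```

Because the criteria and thresholds are fixed up front, the same function can generate individualized feedback for thousands of applicants at essentially zero marginal cost, which is the point being made about formulaic government hiring.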
I don’t think CEA should share specific criteria. I think they should give rejects brief, tentative suggestions of how to develop as an EA in ways that will strengthen their application next time. Growth mindset over fixed mindset. Even a completely generic “maybe you should get 80K advising” message for every reject would go a long way.
The post you linked says:

When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it.
I guess you would rather they say “always” instead of “often,” but otherwise it seems like what you want? And my recollection is that even the generic rejection emails do contain generic advice, like linking to 80K?
I guess this is kind of a tangent on the thread, but for what it’s worth I’m not sure that EAG is actually doing something different than what you are suggesting.
(Note: I work for CEA, but not on the events team.)
This thread is the kind of tiring back and forth I’m talking about. Please, try organizing feedback for 5k+ rejected applicants for something every year and then come back to tell me why I’m wrong and it really is easy. I promise to humbly eat crow at that time.
For what it’s worth, I’m also feeling quite frustrated. I’ve been repeatedly giving you details of how an organization I’m very familiar with (can’t say more without compromising anonymity) did exactly what you claim is so difficult, and nothing seems to get through.
I won’t trouble you with further replies in this thread :-)
You can see how the lack of details is basically asking me to… trust you without evidence?
Edit: to use less “gotcha” phrasing, anonymously claiming that another organization is doing better on feedback, but not telling me how, is asking me to blindly trust you for very little reason.
I don’t think feedback practices are widely considered secrets that have to be protected, and if your familiarity is with the UK Civil Service, that’s a massive organization where you can easily give a description without unduly narrowing yourself down.