A model that I heard TripleByte used sounds interesting to me.
I wrote a comment about TripleByte’s feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)
Something I didn’t mention in my comment is that much of TripleByte’s feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offensive. While interviewing a candidate, I would check boxes for things like “this candidate used their debugger poorly”, and then their feedback email would automatically include a prewritten spiel with links on how to use a debugger well (or whatever); there’s a rough sketch of what this could look like after the list below. I think this model could make a lot of sense for the fund:
It makes giving feedback way more scalable. There’s a one-time setup cost of prewriting some text blocks, and probably a minor ongoing cost of gradually improving your blocks over time, but the marginal cost of giving a candidate feedback is just 30 seconds of checking some boxes. (IIRC our approach was to tell candidates “here are some things we think it might be helpful for you to read” and, when in doubt, to err on the side of checking more boxes. For funding, I’d probably take it a step further and rank or score the text blocks according to their importance to your decision. At TripleByte, we would score the candidate on different facets of their interview performance and send them their scores—if you’re already scoring applications according to different facets, this could be a cheap way to provide feedback.)
It minimizes lawsuit risk. It’s not that costly to have a lawyer vet a few pages of prewritten text that will get reused over and over. (We didn’t have a lawyer look over our feedback emails, and it turned out fine, so this is a conservative recommendation.)
It minimizes PR risk. Someone who posts their feedback email to Twitter can expect bored replies like “yeah, they wrote the exact same thing in my email.” (Again, PR risk didn’t seem to be an issue in practice despite giving lots of freeform feedback along with the prewritten blocks, so this seems like a conservative approach to me.)
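To make the mechanics concrete, here’s a minimal sketch in Python of how I’d imagine wiring up the checkbox-to-text-block approach. The flag names, block text, and scoring facets are all made up for illustration; they’re not TripleByte’s actual content or system.

```python
# Hypothetical sketch: map reviewer checkboxes to prewritten, pre-vetted
# feedback blocks and assemble them into a single feedback email.

# Prewritten blocks, each written/vetted once and reused for every applicant.
FEEDBACK_BLOCKS = {
    "theory_of_change": (
        "The application didn't clearly connect the proposed activities to "
        "the outcome you care about; see <link> for examples of strong writeups."
    ),
    "budget_detail": (
        "The budget was hard to evaluate because line items weren't broken out."
    ),
    "track_record": (
        "We couldn't find much evidence of prior work in this area; see <link> "
        "for ways to demonstrate relevant experience."
    ),
}


def compose_feedback(checked_flags, scores=None):
    """Build a feedback email from checked boxes plus optional facet scores.

    checked_flags: iterable of keys into FEEDBACK_BLOCKS.
    scores: optional dict mapping facet name -> numeric score to include verbatim.
    """
    lines = ["Here are some things we think it might be helpful for you to read:", ""]
    for flag in checked_flags:
        block = FEEDBACK_BLOCKS.get(flag)
        if block:
            lines.append(f"- {block}")
    if scores:
        lines.append("")
        lines.append("Your scores on the facets we evaluate:")
        for facet, score in sorted(scores.items()):
            lines.append(f"- {facet}: {score}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(compose_feedback(
        checked_flags=["theory_of_change", "budget_detail"],
        scores={"clarity": 3, "track record": 4},
    ))
```

The nice property is that the per-applicant cost is just selecting flags (and optionally passing along scores you already generate), while the blocks themselves are the only text a lawyer ever needs to review.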
If I were you, I think I’d experiment with hiring one of the writers of the TripleByte feedback emails as a contractor or consultant. Happy to make an intro.
A few final thoughts:
Without feedback, a rejectee is likely to come up with their own theory of why they were rejected. You have no way to observe this theory or vet its quality. So I think it’s a mistake to hold your feedback to too high a bar: you just have to beat the rejectee’s own theory. (BTW, most of the EA rejectee theories I’ve heard have been very cynical.)
You might look into liability insurance if you don’t have it already; it probably makes sense to get it for other reasons anyway. I’d be curious how the cost of insurance changes depending on the feedback you’re giving.