I did a lot of structured BOTECs for a different grant-making organization, but decided against sharing them with applicants in the feedback. There were two main problems: one of the key inputs was a ‘how competent are the applicants at executing on this’ rating, which felt awkward to share if someone received a very low number; and the overall scores were approximately log-normally distributed, so almost everyone would have ended up looking pretty bad after normalization.
I think that part of the model could be left out (left as a variable, or factored out of the BOTEC if possible), or only published for successful applicants.
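To illustrate the normalization problem: here's a toy simulation (my own sketch, not the actual model) where a BOTEC score is the product of a few independent multiplicative factors. Products of factors like that tend toward a log-normal distribution, so when you normalize by the top score, the typical applicant ends up looking terrible even if they're perfectly reasonable.

```python
import math
import random

random.seed(0)

def botec_score(n_factors=4):
    # Hypothetical BOTEC: a product of independent multiplicative
    # factors (e.g. importance, tractability, team competence).
    # Products of positive random factors are approximately log-normal.
    return math.prod(random.lognormvariate(0, 1) for _ in range(n_factors))

scores = [botec_score() for _ in range(1000)]
top = max(scores)
normalized = [s / top for s in scores]

median_norm = sorted(normalized)[len(normalized) // 2]
share_low = sum(n < 0.1 for n in normalized) / len(normalized)
print(f"median normalized score: {median_norm:.4f}")
print(f"share of applicants below 10% of the top score: {share_low:.0%}")
```

The heavy right tail means the single best application sets the scale, and the median applicant's normalized score is a small fraction of it, which is exactly why sharing these numbers felt unkind.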