I think the main benefit of applying to jobs you don’t think you’ll get remains the possibility you might actually get them.
If you’re applying for feedback you’ll usually be disappointed though. Most organizations offer little feedback, due to a mix of caution, competing demands on their time, and the fact that the honest answer is often “honestly you were pretty OK, the other candidates just stood out as better in a mix of different ways”, which isn’t really actionable.
Think this is a good post. But the specific example below feels like such a mischaracterization of typical debate that it might actually illustrate part of the problem.
Whilst you may have experienced specific counterexamples, I think relatively few objectors to EA are questioning the principle that restoring the sight of 1000 people might plausibly be agreed to be a preferable outcome to moderately aiding one blind person.[1] What they object to is generalising from this to what EA actually is, which is a much bolder set of claims and priorities than “sometimes you can help more people achieve the same ends with the same amount of money”.
tbh even if they are explicitly asking an EA to justify something as fundamental as treating helping 1000 blind people > helping 1 blind person as established fact, if they’re someone who’s ever had a conversation with an EA before there’s a decent chance they’re doing so to ward off the EA promptly substituting 2000 chickens or some conveniently imputed probability claims into the equation! And, in general, I think people asking why EA is so convinced it’s so much better at prioritising, and insisting philanthropic choices are arbitrary, aren’t doing so because they don’t think cost-benefit analysis can be done in any circumstances;[2] they’re asking because of how many other assumptions are necessarily smuggled in to even attempt cause neutrality (or because they think the assumptions smuggled in are wrong,[3] or just not actually that neutral).

Obviously this isn’t because EAs refuse to debate their assumptions at all. Indeed they love nothing more than respectful debate with people who like to state their own priors and utility ranges, and even someone questioning the validity of the whole EA utilitarian-consequentialist framework might get entertained, so long as they establish themselves as suitably intellectual by contextualising it within their own ethical framework first. But there’s a strong tendency to assume that if people don’t come armed with the right jargon and specific alternatives, or at least have the decency to state that their question is axiological, they must just not understand anything, or must actually reject the whole edifice of science.[4] I think even when delivered without a hint of condescension, that sort of talking past people is more annoying than the bluntness.[5]
-
Similarly, I don’t think the problem with advising people midway through medical school to retrain as AI researchers is just that it’s a form of fanaticism unrealistic enough to be annoying (though it’s an excellent example of exactly that). It’s also a fundamental error based on evaluating outsider professions differently from the recommended ones: completely forgetting that if we’re dismissing the impact of becoming a doctor by comparing it with the counterfactual of the next best applicant, as 80k hours does, we should treat the impact of the alternative career the same way. Which means the impact of a couple of medical students switching to the even more competitive field of AI safety should be measured by how much more likely they are to solve AI alignment than the multiple CompSci graduates who’ve been obsessing over it for five years and are applying for the same positions. Even if they somehow find a job in that field, they’re unlikely to tangibly change it.[6] It’s not just the fanaticism producing recommendations that are annoying or too late; it’s also that cargo-culting the arguments produces recommendations that are plain bad.[7]
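To spell out the symmetry that argument relies on (notation mine, a rough sketch rather than anything from 80k hours’ actual write-ups): if replaceability is the standard, the counterfactual impact of person p taking career c is roughly

$$\Delta(p, c) = V(p, c) - V(r_c, c)$$

where V(p, c) is the value p produces in career c and r_c is the next-best applicant who would otherwise fill the slot. The inconsistency is applying this only when c = medicine; applied to c = AI safety as well, the displaced r_c is plausibly a stronger candidate than the career-changer, so the same discount cuts at least as deep.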
There are obviously also explicit arguments for preferring to help one person over thousands far away, based on axiological assumptions about duties to particular communities or individuals, but yeah, those arguments don’t tend to be made implicitly in the form of a question...
again, people (including EAs) sometimes have really good arguments against applying it to particular fields and evidence bases, but they tend to make them explicitly
there are lots of epistemological questions about recommendations that have a better answer than “don’t you know the difference between opinion and fact?” too: the better EA analyses are explicit about their evidence base, the extent to which it represents a knowledge claim, and their level of confidence that X actually does deliver Z more cost-effectively than Y
I’m not saying bad questions don’t exist, I’m saying that reasonable ones often get framed as unreasonable ones, because lecturing people about how cost-benefit analysis is just maths and they surely aren’t arguing with maths is easier than justifying the use of “moral weights”
tbh online at least I think EA’s style of debate is more likely to be accused—fairly and otherwise—of annoyingly polite persistence
nor will the modal AI researcher...
tbf the other default EA recommendation of doing medical research instead probably isn’t a bad suggestion in this instance. Or maybe they should study how to use AI to do medical research to make everyone happy ;)