Well, such low pay creates additional mental pressure: you have to resist the temptation to earn 5-10x as much in a normal job. I'd rather select people carefully, but then provide them with at least a ~middle-class wage.
The problem is that if you select people cautiously, you miss out on hiring people significantly more competent than you. People of much higher competence will behave in ways you don't recognise as more competent. If you could tell what the right things to do are, you would just do them and be at their level. Innovation on the frontier is anti-inductive.
If good research is heavy-tailed & in a positive selection-regime, then cautiousness actively selects against features with the highest expected value.[1]
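A minimal Monte Carlo sketch of this point (the distribution, threshold, and parameters are illustrative assumptions of mine, not anything from the comment): if research value is heavy-tailed and an evaluator can only verify quality up to some ceiling, a "cautious" rule that rejects candidates it cannot verify forfeits most of the expected value, because the tail above the ceiling dominates.

```python
import random

random.seed(0)

# Assumed heavy-tailed distribution of research value (lognormal);
# the parameters are illustrative, not calibrated to anything real.
N = 100_000
values = [random.lognormvariate(0.0, 2.0) for _ in range(N)]

DEFERENCE_LIMIT = 5.0  # assumption: evaluator can only verify quality up to here

# Cautious rule: accept only candidates the evaluator can verify as good.
cautious = sum(v for v in values if 1.0 < v <= DEFERENCE_LIMIT) / N
# Permissive rule: also accept the illegible candidates above the limit.
permissive = sum(v for v in values if v > 1.0) / N

print(f"value per candidate considered: cautious={cautious:.2f}, permissive={permissive:.2f}")
```

With these assumed parameters the permissive rule captures several times the value of the cautious one, almost all of it from the illegible tail.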
That said, "30k/year" was just an arbitrary example, not something I've calculated or thought deeply about. I think that sum works for a lot of people, but I wouldn't set it as a hard limit.
Based on data sampled from looking at stuff. :P It's only supposed to demonstrate the conceptual point.
Your "deference limit" is the level of competence above your own at which you stop being able to distinguish between the competence levels above that point. For games with legible performance metrics, like chess, you get a very high deference limit merely by looking at Elo ratings. In altruistic research, however...
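The chess case can be made concrete with the standard Elo expected-score formula (a sketch; the specific ratings below are my own illustrative numbers): because the expected score is a known function of the rating gap, even a gap far above your own level stays legible from the numbers alone.

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Even a 400-point gap is perfectly legible from the ratings alone:
print(round(elo_expected_score(2800, 2400), 3))  # 0.909
```

Nothing analogous exists for altruistic research, which is why the deference limit bites there.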
I'm sorry I didn't express myself clearly. By "select people carefully", I meant selecting for correct motivations, which you have tried to filter for using the subsistence salary. I would prefer using some other selection mechanism (like references), and then providing a solid paycheck (like MIRI does).
It's certainly noble to give away everything beyond 30k like Singer and MacAskill do, but I think it should be a choice rather than a requirement.