Yeah this sounds right.
One thing is just that discouraging people is culturally quite hard and there are strong disincentives to do it; e.g. I think I definitely get more flak for telling people they shouldn’t do X than for telling them they should (including a recent incident which was rather personally costly). And I think I’m much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.
I also know at least 2 different people who were told (probably wrongly) many years ago that they can’t be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it’s easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.
Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I’m talking to appreciate it, but sometimes people around them (or around me) get mad at me.
(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).
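(Scoring them wouldn’t take much, for what it’s worth. Here’s a minimal sketch of the kind of Brier-score check I have in mind; the numbers are made up for illustration, not real guesses I’ve given.)

```python
# A minimal sketch of scoring off-the-cuff probability guesses with a
# Brier score. All numbers below are made up for illustration.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is a perfect score; always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair: (probability given, what actually happened: 1 = yes, 0 = no).
guesses = [(0.7, 1), (0.2, 0), (0.9, 1), (0.4, 1), (0.1, 0)]

print(f"Brier score: {brier_score(guesses):.3f}")  # 0.092
print(f"Always-50% baseline: {brier_score([(0.5, o) for _, o in guesses]):.3f}")  # 0.250
```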
One way my perspective has changed on this during the last few years: I now advise others not to give much weight to a single point of feedback. Especially when someone tells me that only one or two people have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that, even when the person giving the discouraging feedback is in a position of relative power or prestige.
The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are, in the wake of the FTX collapse, doubting how they’ve conceived of EA for years, I expect most individual effective altruists confident enough to judge another’s entire career trajectory are themselves likely overconfident.
Another example is AI safety. I’ve talked to dozens of aspiring AI safety researchers who’ve felt very discouraged by an illusory consensus thrust upon them: that their work was essentially worthless because it didn’t superficially resemble the work being done by the Machine Intelligence Research Institute, or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.
Some of the brightest effective altruists I’ve met were being inundated with personal criticism harsher than even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore the dozens of jerks who concluded that the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of ‘follow the leader’ that not even the “leaders” would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn’t.
Meanwhile, over the last year or two, more and more of the AIS field, including some of its most reputed luminaries, have come out of the woodwork to say, essentially, “lol, turns out we didn’t know what we were doing with alignment the whole time; we’re definitely probably all gonna die soon, unless we can convince Sam Altman to hit the off switch at OpenAI.” I feel vindicated in my skepticism of the quality of the judgment of many of our peers.