Log-normal lamentations
[Morose. Also very roughly drafted. Cross]
Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.
There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most members (and the median) sit below the group average, which is pulled up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
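To see the shape of the problem, here is a minimal simulation (not part of the original argument; the normality of ‘talent’ and the +2 SD admission cutoff are assumptions chosen purely for illustration) of what a selective threshold does to the group it admits:

```python
import numpy as np

# Illustrative sketch: draw 'talent' from a standard normal population,
# then keep only those above an arbitrary admission cutoff.
rng = np.random.default_rng(0)
population = rng.standard_normal(1_000_000)
cutoff = 2.0                      # hypothetical "straight A's / IQ > X" bar
selected = population[population > cutoff]

mean, median = selected.mean(), np.median(selected)
share_below_mean = (selected < mean).mean()

print(f"group mean:   {mean:.3f}")
print(f"group median: {median:.3f}  (below the mean)")
print(f"fraction of the group below its own mean: {share_below_mean:.2%}")
```

With these made-up numbers the selected group is positively skewed: its mean exceeds its median, and roughly three-fifths of its members sit below the group’s own average.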
Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. That needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I am (more knowledgeable, kinder, more skilled at communication, etc.), it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you will still be extraordinarily rare, even if you had a good yardstick by which to compare them to yourself.
Look at our thick-tailed works, ye average, and despair! 2
One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.
Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
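As a rough illustration of how much this matters (the parameters below are arbitrary, chosen only to contrast a thin-tailed with a heavy-tailed shape), compare how much of the total ‘impact’ the top 1% account for under a normal versus a log-normal distribution:

```python
import numpy as np

# Illustrative sketch with arbitrary parameters: how much of the total
# 'impact' do the top 1% account for under each distribution?
rng = np.random.default_rng(0)
n = 1_000_000

normal_impact = np.clip(rng.normal(loc=50, scale=10, size=n), 0, None)
lognormal_impact = rng.lognormal(mean=0.0, sigma=2.0, size=n)

def top_share(x, frac=0.01):
    """Fraction of the total held by the top `frac` of the sample."""
    threshold = np.quantile(x, 1 - frac)
    return x[x >= threshold].sum() / x.sum()

print(f"normal:     top 1% hold {top_share(normal_impact):.1%} of the total")
print(f"log-normal: top 1% hold {top_share(lognormal_impact):.1%} of the total")
```

Under the normal distribution the top 1% hold only slightly more than 1% of the total; under the log-normal (with this made-up sigma) they hold around a third of it.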
Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In total, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader): those who are really good (or lucky, or both) can leave university with higher starting salaries than my peak expected salary, and so can give away more than ten times what I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich from starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error next to their work.
A shattered visage
Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity: even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own – that their donations dewormed a thousand times as many children does not make the children I dewormed any less valuable. It is unclear whether this applies to other ‘fields’. Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super-scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.
So there are a few ways an Effective Altruist mindset can depress our egos:
It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) at something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.
(Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high, even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
What remains besides
I haven’t found a ready ‘solution’ to these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it than to defend the implausibly defensible.
In the same way I could console myself on confronting a generally better doctor – “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!” – one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify, as (combinatorially) it becomes less common to find someone who strictly dominates you; and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4
Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle enough to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this should be broken with care. 5
‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6
Notes:
As further bad news, there may be a progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around the median in a positively skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around the median in the population of Fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).
I wonder how much this post is a monument to the grasping vaingloriousness of my character…
Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.
Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases.
Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.
Cf. Max Ehrmann, who put it well:
… If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.
Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…
If anyone is ever at a point where they are significantly discouraged by thoughts along these lines (as I’ve been at times), there’s an Effective Altruist self-help group where you can find other EAs to talk to about how you’re feeling (and it really does help!). The group is hidden, but if you message me, I can point you in the right direction (or you can find information about it on the sidebar of the Effective Altruist facebook group).
I wonder if there’s a large amount of impact to be had in people outside of the tail trying to enhance the effectiveness of people in the tail (this might look like being someone’s personal assistant or sidekick, introducing someone in the tail to someone cool outside of the EA movement, being a solid employee for someone who founds an EA startup, etc.). Being able to improve the impact of someone in the tail (even if you can’t quantify what you accomplished) might avert the social comparison aspect, as one would feel able to at least take partial credit for the accomplishments of the EA superstars.
If this was a solution to some of the issues which the OP raised, it would only be a solution for a small number of us. All the same, I think you’re right that having ‘sidekick’ roles could be very valuable!
It would be worth checking out this discussion: http://effective-altruism.com/ea/dl/i_am_samwise_link/
Thanks for writing this. I often feel quite similar—I can find being in contact with so many amazing people either inspiring or oddly demotivating!
That’s the old status-conscious monkey brain talking (everyone’s grasping vaingloriousness), and we shouldn’t feed it, but it’s good to acknowledge that it’s there from time to time.
Overall, I think the EA movement is pretty good at being positive. I’ve found that such criticism as there is tends to be self-criticism – if anything, I find people to be unusually generous with praise, which is lovely. I think you hit the nail on the head with your four sources of ego-damage. And yeah, I think the right thing to do is to try and not be bothered. For bonus points, remember to praise people when they do good things!
What Ryan said. There’s a lot of academic produce in the movement that I’m in awe of, but since we’re working toward a common goal, it doesn’t irk me ego-wise in the least. Rather it motivates me to learn to produce such output myself. It also makes me want to hug the authors a lot.
What I do find worrying is the prospect of having one’s own work displaced by superior work that didn’t build upon the first. That’s a pity, and I don’t know of any perfect way to avoid it. Publishing quickly and incrementally could help, and in some areas it may be possible to coordinate such things with all the others involved. But surely that doesn’t solve the problem in general.
Life’s not all a competition, Greg! ;)
I can think of a few key dystopias (Brave New World, The Rise of the Meritocracy 1870–2033, We) that have utilitarian reasoning taken to a drastic conclusion in some way. Many of them point to this dynamic, and the implicit critique is that to think about human value in this way is base. But we have this kind of comparative social destruction permeating our society anyway at the moment; it’s just more general than the points EA makes. Nice article and great to explore – another risk of the movement we should register! :)
Perhaps it’s the virtue of decision-theoretically consistent behaviour we should be praising and celebrating, as well as the big-splash results?
I’ve often found the EAs around me to be
(i) very supportive of taking on things that are ex ante good ideas, but carry significant risk of failing altogether, and
(ii) good at praising these decisions after they have turned out to fail.
It doesn’t totally remove the sting to have those around you say “Great job taking that risk, it was the right decision and the EV was good!” and really mean it, but I do find that it helps, and it’s a habit I’m trying to build to praise these kinds of things after the fact as much as I praise big successes.
Of course there is some tension; often, if a thing fails to produce value, it’s useful to figure out how we could have anticipated that failure, and why it might not have been the right decision ex ante. Balance, I guess.
I’ve had an epiphany about Altruism itself, and I hope you’ll bear with me through this explanation.
Do you know of Maslow’s Hierarchy of Motivation? I learned about this only a couple years ago, but it changed everything in how I view people.
Basically, there are 4 levels of “deficiency” needs that motivate people: 1) Physiological needs (food, shelter, etc); 2) Safety needs (physical safety, economic safety, mental safety, etc); 3) Love/acceptance (loved and cared for, and not being rejected by others); 4) Esteem (being respected, which is different from love. This is tied to self confidence.)
These needs are hierarchical in nature—generally speaking, but not exclusively.
When these needs are met—and not just met, but mastered according to Maslow himself—then the person moves into Self-Actualization. This is the point where the person strives to reach their full potential. This can be an athlete, or a programmer, or a teacher, or someone really pulling all the stops out and being the best they can be at whatever they love to do.
Self-Actualization is the place of passion.
Maslow believed there’s a place beyond even that. He called it Self Transcendence. This is the more spiritual place of altruism—when you want to give back to your society.
Self-Transcendence is the place of altruism.
This is probably where many members here are.
There is nothing special about us, according to this hierarchy. It just means we have gotten all of our needs met early, and we have had time to really devote ourselves to reaching our potential. This doesn’t mean we’re gifted. It means we have had some success in reaching that potential.
That’s important: we are not special. We have merely had our needs met.
(Note: this is the core of my own secret project: how to have everyone in the world get their needs met. Imagine that: what if all people reached Self Actualization and most people reached Self Transcendence? How powerful the human race would be then!)
(I believe the Human Potential is the greatest untapped resource on the Earth right now. But I digress. :) )
ESTEEM:
I want to point out in particular the Esteem need—which, to me, reads as the whole point of your post.
The difference between esteem and love is subtle, but important. I’m realizing this because I’ve realized I’ve been deficient in this need.
I’ve needed people to praise me and say I’m awesome. Sounds silly, but it’s a real need. It drives me to work hard and do amazing things. Without that need fulfilled, I’ve sunk into a life of mediocrity. (Currently I’m a housewife when I used to work at Pixar.)
Recently, I’ve regained my esteem, and now I’m plunging forward on some Big Ideas that I want to see through. I feel like I’ve regained my “old self” that strived hard for impossible things.
So the answer to your problem is to make sure you’re getting that need met.
The answer is for all of us to make sure we’re encouraging each other. Because it is hard being in a crowd of amazing people. (I had some depressing days at Pixar...until a friend said, “don’t see the single great thing about each person here, and think each person does all of those things. You are making it. You’re just as good as everyone else here.” That helped me keep perspective, and realize I had my own single great thing. :) )
If Esteem is, indeed, important to Self Actualization and Transcendence, then feeding each other esteem as we go along could be key to keeping the energy high in this group.
One approach to this could be tying your self-esteem to something other than your personal impact. You might try setting your goal to “be an effective altruist” or “be a member of the effective altruist tribe”. There are reasonable and achievable criteria (e.g. the GWWC pledge) for this, and the performance of people in the tail in no way affects your ability to meet them. And, while trying to improve one’s own impact is a thing that effective altruists do, it’s not necessary to do so, or to achieve any specific criterion of success, in order to meet the self-esteem criteria. A useful supplement to this attitude is a feeling of excitement about where effective altruism is going, which is a feeling that is actually enhanced by the achievements of the long tail. (“I can’t wait to see what these amazing people are going to accomplish!”)
Maybe the status issues in the “lottery ticket” fields could be partially alleviated by having a formal mechanism for redistributing credit for success according to the ex-ante probabilities – for the malaria vaccine example, you could create something like impact certificates covering the output of all EAs working in the area, and distribute them according to an ex-ante estimate of each researcher’s usefulness, or some other agreed-on distribution. In that case, you would end up with a certificate saying you own x% of the discovery of the malaria vaccine, which would be pretty cool to have (and valuable to have, if the impact certificate market takes off).
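A minimal sketch of what that allocation might look like (the group names and ex-ante weights below are invented for the example; they don’t reflect any actual impact-certificate scheme):

```python
# Illustrative sketch: split a hypothetical impact certificate for a discovery
# among contributors in proportion to agreed ex-ante estimates of usefulness.
# All names and weights are made up for the example.
ex_ante_usefulness = {
    "Sally's group": 5.0,
    "my lab": 1.0,
    "other vaccine researchers": 4.0,
}

total = sum(ex_ante_usefulness.values())
certificate_shares = {name: weight / total for name, weight in ex_ante_usefulness.items()}

for name, share in certificate_shares.items():
    print(f"{name}: {share:.1%} of the discovery")
```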