I manage a team of data people and do projects and operations stuff for Greenpeace. https://www.linkedin.com/in/rj-mitchell/
Long-time giver to GiveWell charities, looking to get more directly involved.
One potential solution could involve explicitly funding such public goods. For example, funders could give an organisation additional funding to allow their staff to contribute more to effective altruism public goods, despite competing priorities.
I was thinking something similar reading some comments around funds giving (or not giving) feedback. There does seem to be a missed equilibrium:
It’s in everyone’s interests if there is more feedback, support, coordination etc.
It’s not in the interests or capability of any one organisation to take this on themselves.
I might not jump to assuming it would all be coming off existing staff’s plates though.
Anyway, great post.
This should recognise that more reliable motivation comes from norm-following rather than from individual willpower
I think this is right and is more true and important when the positive impacts you might have are distant in time, space or both. If you’re doing something to help your local community then you should be able to see the impact yourself fairly quickly and willpower could well be the best thing to get you out picking litter or whatever. This falls down a bit if your beneficiaries are halfway round the world, in the future, or both.
It seems like there are certain principles that have a ‘soft’ and a ‘hard’ version—you list a few here. The soft ones are slightly fuzzy concepts that aren’t objectionable, and the hard ones are some of the tricky outcomes you come to if you push them. Taking a couple of your examples:
Soft: We should try to do as much good with donations as possible
Hard: We will sometimes guide time and money away from things that are really quite important, because they’re not the most important
Soft: Long-term impacts are more important than short-term impacts
Hard: We may pass up interventions with known and high visible short-term benefits in favour of those with long-term impacts that may not be immediately obvious
This may seem obvious, but for people who aren’t familiar, leading with the soft versions (on the basis that the hard ones will come up soon enough if someone is interested or does their research) will give a more positive first impression than jumping straight to the hard stuff. Yet I see a lot more jumping than seems justified. I can see why, but if you were trying to persuade someone to join or have a good opinion of your political party, would you lead with ‘we should invest in public services’ or ‘you should pay more taxes’?
Yes, in practice interview questions should vary a lot between different roles, even if on paper the roles are fairly similar, so I’m not sure they could be coordinated, beyond possibly some entry level roles.
In a situation where someone is good but doesn’t quite fit a role, the referral element might be useful. I’ve often interviewed someone thinking ‘they’re great, but not the best fit for this particular role’, even if they match on paper, and being able to refer that person on to another organisation would be a mutual benefit.
I’d heard of Peter Singer in an animal rights context years before I knew anything about his association with EA or his wider philosophy. I wonder if a lot of people who have heard of him are in the same place I was.
I don’t think approaching this as ‘why not to pursue a path’ is helpful. I think it’s more about helping people be aware of things they may not know so they can make an educated decision. That decision may then be ‘it’s not for me’. Think of the numbers showing how few people become professional athletes. The framing isn’t ‘don’t do it because you won’t make it’. It’s ‘few people make it, decide in full knowledge.’
Celebrate all the good actions that people are taking (not diminish people when they don’t go from 0 to 100 in under 10 seconds flat).
I’m uncomfortable doing too much celebrating of actions that are much lower impact than other actions
I think the following things can both be true:
The best actions are much higher impact than others and should be heavily encouraged.
Most people will come in via easier but lower-impact actions. If there isn’t an obvious, stepped progression towards higher-impact actions, and support to facilitate it, many will drop out unnecessarily. Or they may be put off entirely if ‘entry level’ actions either aren’t available or receive very low reward or status.
I didn’t read the OP as saying that we should settle with lower impact actions if there’s the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level—either helping them to reach higher impact over time if for whatever reason they’re unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that’s what’s available.
Some of this will involve shouting out and rewarding less impactful actions beyond their absolute value, not for its own sake but because this may be the best way of supporting that progression. I’ve definitely noticed the ‘0-100’ thing, and if I were younger and less experienced it might have bothered me more.
[Policymakers] They said that computers would never beat our best chess player; suddenly they did. They said they would never beat our best Go player; suddenly they did. Now they say AI safety is a future problem that can be left to the labs. Would you sit down with Garry Kasparov and Lee Se-dol and take that bet?
Thanks Jordan. I wanted to pick up on the Turo element. You mention that this is something you only recently stumbled across, and it doesn’t sound like you have prior experience or training in this area, and that you aren’t especially passionate about it. You also say that you could make $200k a year on it working a 40 hour week. Where did you get these figures? There aren’t many opportunities you can go into without experience and start earning $200k a year.
It may be possible, but I’d suggest it’s a high bar to reach as such opportunities are rare, so I’d be interested to see more analysis here. You also mention risks, and it doesn’t look like these are gone into in great detail. So I would really look for some thorough, rational analysis on this aspect first.
‘why seeing options other than the expected one would make me less likely to follow through’
I think the key is that ‘following through’ can mean several things that are similar from the perspective of GWWC but quite different from the perspective of the person pledging.
In my case I’d already been giving >10% for quite a while but thought it might be nice to formalise it. If I hadn’t filled in the pledge it wouldn’t have made any difference to my giving. So the value of the pledge to me was relatively low. If the website had been confusing or offputting I might have given up.
There are others who will already have decided to give 10% but haven’t yet started. The pledge then would have a bit more value if there’s a chance it could prevent backsliding but assuming the person had fully committed to giving at this level already, the GWWC pledge wouldn’t be crucial to the potential pledger.
Finally, there are people who for whatever reason come across the website without yet having decided to give 10% (or even 1%) and make a decision to sign up when they’re there. This is where the more standard marketing theory comes into play.
For the first two groups, the non-conversion is something like ‘I can’t even see what I’m meant to be signing up for. Never mind, it’s not going to affect how I’ll actually give anyway.’ Friction in this case is anything that makes it harder to identify what the 10% pledge is and how to sign up to it. I spent a couple of seconds looking between the three options but it was ultimately pretty easy to work out which one was the one I wanted. This would be even easier if it was the one main option.
For the third, it could well be ‘There’s too much choice, maybe I don’t want to do it.’ At any rate, it will be very different from people who had already committed to giving 10%.
The ‘loss’ to GWWC for all three looks the same but there’s only a substantial loss to the wider world with the third group.
I know that people not accurately remembering what was in their minds can be an issue, but I doubt it would be a problem on something like ‘did you intend to give 10% when you arrived on the GWWC website?’, and certainly not on ‘have you already been giving 10%?’ There’s such a difference between the groups that it would be really helpful to at least get an indication of how they split out.
Well, it looks like I’m hijacking a thread about organisational scaling with some anxieties, which I’ve talked about elsewhere, around referring to people in overly utilitarian ways. Which is fair enough; interestingly, I’ve done the opposite, talking about org scaling on threads that were fairly tangentially related, and got quite a few upvotes for it. All very intriguing, and if you’re not occasionally getting blasted, you’re not learning as much as you might about where the limits are.
Every person in your company is a vector. Your progress is determined by the sum of all vectors.
‘Hey! I’m not a vector!’ I cried out to myself internally as I read this. I mean, I get it and there’s a nice tool / thought process in there, but this feels somewhat dehumanising without something to contextualise it. There are loads of tools you might employ to make good decisions that might involve placing someone in a matrix or similar, but hopefully it’s obvious that it’s a modelled exercise for a particular goal and you don’t literally say ‘people are maths’ while you do it.
Anyway, I was thinking of political parties as I read this. If your party does well, you get an influx of members who somewhat share the same goals but are different from the existing core, not chosen by you, probably less knowledgeable about your history and ideology, and less immediately aligned. You have essentially no ability to produce alignment via financial mechanisms or ‘hiring’ processes. How do you get people to pull together? There are some recent examples of UK parties absolutely mangling this, but probably some good examples too (Obama 2008? The German Greens?). Obviously organisations then have additional mechanisms available, but this seems an interesting case to study because the cultural elements can be more easily separated out.
Thanks everyone, this is very interesting and well worth having a look through the attached Gitbook.
Around the intuitive interpretation:
Perhaps giving people more options makes them indecisive. They may be particularly reluctant to choose a “relatively ambitious giving pledge” if a less ambitious option is highlighted.
It’s possible that this is the reason, but there’s an alternative interpretation based on the fact that GWWC is already quite well-known and referenced as ‘the place you go to donate 10% of your income’. If a lot of people are coming onto your page with that goal in mind, then it would make sense that the layouts which centre that option and make it as frictionless as possible will do better. Which is what we see here: the layout centring a different option does much worse, and the one that does best is the one that most highlights the 10% pledge, not the one that gives equal space to an even higher pledge alongside it.
My own experience of using the site was very similar—I came on, looked around a bit for the 10% option I’d already decided on (in the original setup), then signed up. Things like favouring the middle option and the effects of anchoring are more relevant in a situation where someone has decided to buy, say, a broadband package but hasn’t chosen which one; the lack of effect from them here might indicate that relatively fewer people are coming onto the page unsure how much to give.
You could try testing the 10% pledge next to the further pledge without the 1% pledge, but the really key thing feels like a post-pledge survey. ‘Did you already know what you would pledge when you went on our website?’ ‘If so, did you consider giving at a different level when you saw the options?’ etc. I’m sure you’d get a good response rate as people would be motivated to ensure others completed the pledge. Or if you already have this information, it would be really useful to see it!
This is good advice and can be expanded outside software developers as you say. It’s also great to see you offering CV help!
As someone who’s hired a decent number of people, the one caveat I would add is that the advice above will be really useful if you are applying for a job where decision-makers have a degree of discretion around what they’re assessing. It’s less immediately applicable, but still potentially valuable, if the initial selection is based solely on scoring against predefined criteria. Sometimes this will be explicit (‘applications will be assessed against the person specification’), sometimes implicit. This seems to happen less often for EA jobs, but I’m sure it does happen for at least some.
At any rate, if it’s just about criteria, your task is then to list out all the criteria from the job pack, tick them off when you’ve put them in the application, read them back, think ‘would I at least be scored as meeting expectations on this, and ideally exceeding them?’, and update accordingly. In which case, this sort of approach can help you move up to ‘exceeds expectations’ on a criterion if you can show you can hit it from multiple angles, e.g. in both work and personal life. It could also help you get a longlist that you could pick and choose from for those types of applications and help at interview...
For all that I’ve read and done with ToCs and critical path analysis, the first thing that comes to my mind is still ‘avoiding this’:
(I genuinely find thinking ‘make sure you don’t do this’ at all stages is more effective than any theory I’ve read.)
Also, anything that has 2-3 paths to a potential goal that are at least partially independent will usually leave you in a better place than one linear path. Then it’s not so much ‘backchaining’ as switching emphasis (‘lobbying seems to have stalled, so let’s try publicity/behaviour change… then who knows, lobbying might be back on again’).
Thanks for the detailed response and for linking to that other post. I’ve been dealing with chickenpox in the house so this is probably later and briefer than the analysis deserves.
+1 to ‘Command and Control’ and ‘Nuclear Folly’ as well worth reading—between them, enough to dispel any illusions that the destructive power of nuclear weapons was matched with processes to avoid going wrong, whether by accident or human folly. I’ll check out ‘The Bomb’.
The worrying aspect for me is the combination of leeway for particular commanding officers combined with environmental factors that reduce the ability of those officers to know what’s going on, and/or to exercise rational judgement. The sub is the most obvious example of this.
beyond the fact that the Soviet response to a US invasion of Cuba could be to attempt to take Berlin
That’s a pretty strong argument in favour of escalation to nuclear exchange! I think it’s also that other situations were taking up the bandwidth of intelligence services and politicians, introducing uncertainty and increasing the number of locations where normal accidents, or individuals doing something stupid, could increase tensions. The China-India war came to nothing in the end, but it was one more thing taking up attention, and if you’re already dealing with one nuclear-armed Communist country, it’s not ideal to have another one with an unpredictable leader invading a neighbour...
there were no American war plans for instance that escalated from the use of tactical nuclear weapons by the Soviets to firing nuclear missiles
What’s your source for this?
I’d also comment that this misses the wider global context. There were tensions over Berlin, and China and India briefly went to war alongside the Cuban missile crisis; potential overlaps between these conflicts raised the risk of nuclear exchange considerably, possibly not even beginning around Cuba, and at any rate expanding beyond it if it got going.
I haven’t come across this yet… is it what I think it is?
Hi Eli! I’m glad those orgs are using Salesforce. It’s powerful and scales very well. Annoyingly Salesforce themselves can be a massive sales and hype machine though, so it’s not always easy to get the best advice from them directly. So freelance can be doubly useful.
Very interesting. I haven’t come into contact with any student groups, so can’t comment on that. But here are my experiences of what’s worked well and less well, coming in as a longtime EA-ish giver in my late 30s looking for a more effective career:
(Free) books: I love books—articles and TED talks are fine for getting a quick and simple understanding of something, but nothing beats the full understanding from a good book. And some of the key ones are being given away free! Picking out a few, the Alignment Problem, The Precipice and Scout Mindset give a grounding in AI alignment, longtermism/existential risk and rational thinking techniques, and once you have a handful under your belt you’re in a solid place to understand and contribute to some discussions. They’re good writers too; it’s not just information transfer. The approach of ‘here’s a free book, go away and read it, here’s some resources if you want to research further’ sounds like the polar opposite of what’s described above. It worked well for me. Maybe a proper ‘EA book starter list’ would help it work even better (there’s a germ of this lurking halfway down the page here, but surely this could be standalone and more loved...)
Introductions culture: People seem happy to give their time up to talk to you after exchanging a couple of messages. After meeting people they’re eager to introduce you to others you might be a good ‘match’ with or at least give leads. Apart from its obvious benefits this is really good for keeping spirits up early on when it might be a bit daunting otherwise.
80k careers guides: Pretty obvious, but very well-written and a good starting point.

Jobs boards (e.g. 80k, Work on Climate, Facebook/LinkedIn groups): Well-curated, giving a clear view of what’s available in the sector, and particular roles are generally well-written. On boards where people post their own jobs they almost always follow community norms. Not entirely free from the usual problems (hype, jobs without posted salaries) but better than most. I’ve seen some jobs that are what I want but in other countries, which makes me hopeful I’m looking in the right place, especially if I can also start meeting some more people. Talking of which...

This forum: Smart discussion, some key people on here writing and listening to feedback, and it seemed welcoming and receptive when I just rocked up and started writing some comments.

Less good

Occasionally, apparent coldness to immediate suffering: I’ve only seen this a bit, but even one example could be enough to put someone off for good. I can see what motivates it, but if a person says ‘I think x is one of the most pressing current problems’, and the response is what seems like a dismissive ‘well, x isn’t a genuinely existential risk so it’s not a priority’, it can come across as a lack of empathy or, at worst, humanity. It’s not the argument itself, as I’ve no issue with ranking charities or interventions and producing recommendations, but more the apparent absolutism and lack of compassion involved (even if, ironically, it could be produced by compassion for an imagined future greater good).

Processes that don’t seem fit for the scale of EA: I’ve bigged up 80k above, so I’ll use them as an example here. I ordered a free book, it arrived, then I got an email saying ‘ah, looks like we have distribution problems, here’s a digital copy while you’re waiting’… then another one saying ‘oops, forgot to attach it, here it is’. I signed up to 1:1 careers advice, heard nothing for 3 weeks, then got ‘sorry, we can’t do you’, with no explanation. They did connect me with a local organiser, which was great, but didn’t pass on the responses I’d taken some time to think about, so we ended up covering some ground again.

Occasionally insular worldview: This comes from being concentrated in a small number of cities and often graduating from top universities. I linked this piece in another post, but it’s very good, so I’m linking it again.

Neutral but interesting

‘Eccentric’ billionaires: Media seem to like this angle but it doesn’t really hold up in practice. The presence of the narrative did lead me to investigate the funding of EA in ways that I might not otherwise have done.
I’m still here, so clearly the good outweighs the rest!