I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since Summer this year.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move onto other endeavours? Some shower thoughts:
I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it: EA.
I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
Seems to me that EA has a good track record of important insights on otherwise neglected topics. Existential risk, risks of astronomical suffering, AI safety, wild animal suffering; I attribute a lot of success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
Of course, there’s been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
In other roles or projects I’m considering, when I think of questions like “who will actually realistically consider acting on this idea I think is great? Giving up their time or money to make this happen?” the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo chamber-y and bias-related reasons that might feed into this, but I think there are some real and powerful ones too.
Written quickly (15-20 mins), not neatly/well (originally to post on LinkedIn rather than here). There are better takes on this topic (e.g.).
There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.
Further caveats I didn’t have space to make on LinkedIn: I wrote this quick take as an individual, not for EAIF or my other projects etc; I haven’t checked this with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. Seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what’s actually good and useful in the long run after accounting for indirect effects. I haven’t attempted any sort of quantitative analysis of the overall effects.
But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!
I agree with so much here.
Here are my responses to the question you raised: “So why do I feel inclined to double down on effective altruism rather than move onto other endeavours?”
I have doubled down a lot over the last ~1.5 years. I am not at all shy about being an EA; it is even on my LinkedIn!
This is partly because of integrity and honesty reasons. Yes, I care about animals and AI and like math and rationality and whatnot. All this is a part of who I am.
Funnily enough, a non-negligible reason why I have doubled down (and am more pro-EA than before) is the sheer quantity of not-so-good critiques. And they keep publishing them.
Another reason is that there are bizarre caricatures of EAs out there. No, we are not robotic utility maximizers. In my personal interactions, people hopefully realize: “okay, this is just another feel-y human with a bunch of interests who happens to be vegan and feels strongly about donations.”
“I have personally benefited massively in achieving my own goals.” — I hope this experience is more common!
I feel EA/adjacent community epistemics have enormously improved my mental health and decision-making; being in the larger EA-sphere has improved my view of life; I have more agency; I am much more open to newer ideas, even those I vehemently disagree with; I am much more sympathetic to value and normative pluralism than before!
I wish more everyday EAs were louder about their EA-ness.
Given that effective altruism is “a project that aims to find the best ways to help others, and put them into practice”[1] it seems surprisingly rare to me that people actually do the hard work of:
(Systematically) exploring cause areas
Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are by 80,000 Hours, Open Philanthropy, and CEARCH.
Related things I appreciate, but aren’t quite what I’m envisioning:
Tools and models like those by Rethink Priorities and Mercy For Animals, though they’re less focused on explanation of specific prioritisation decisions.
Longlists of causes by Nuno Sempere and CEARCH, though these don’t provide ratings, rankings, and reasoning.
Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation’s broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.
If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
I’m a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain… and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
- Had that hypothesis worn down by various information and arguments I encountered, and changed my views on the top causes,
- Didn’t ever go back and do a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a long-list that includes ‘not-core-EA™-cause-areas’ or based on criteria other than ITN).
I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
Rough and informal explanations welcome. I’d especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I’d like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.
Thanks Jamie, I think cause prioritisation is super important as you say, but I don’t think it’s as neglected as you think, at least not within the scope of global health and wellbeing. I agree that the substance of your 3-part list is important, but I wouldn’t consider the list the best measure of how much hard cause prioritisation work has been done. It seems a bit strawman-ish, as I think there are good reasons (see below) why those “exact” things aren’t being done.
First, I think precise ranking of “cause areas” is nearly impossible as it’s hard to meaningfully calculate the “cost-effectiveness” of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.
Secondly I think having public specific rankings has potential to be both meaningless and reputationally dangerous. I think clustering the best interventions we know of and sharing the estimated cost-effectiveness is fantastic (like GiveWell, 80,000 Hours, CEARCH and the Copenhagen Consensus do), but I don’t think adding ranked specificity is very helpful because:
Uncertainty is so high and confidence intervals so wide in these calculations that specific rankings can be fairly meaningless. When all confidence intervals for interventions overlap, I think providing a specific ranking can be almost dishonest (the toy simulation sketched below illustrates how easily such rankings can flip).
Specific public rankings for causes/interventions have the potential downside of being inflammatory and unhelpful for the effective altruism movement. We’ve already seen some obvious backlash and downsides from the big push for working towards AI safety being put forward as something like “the most important” intervention. Imagine if orgs were publicly pushing seemingly concrete rankings? Much of the public and intellectual world is likely to misunderstand the purpose of it and criticise, or even understand it well and still criticise...
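As a toy illustration of the overlap problem (all numbers are made up, nothing to do with real cost-effectiveness estimates): two interventions whose uncertainty intervals overlap heavily can trade places in a large fraction of plausible worlds, even though a point-estimate ranking puts one firmly “first”.

```python
import math
import random

random.seed(0)

def sample_cost_effectiveness(median, sigma_log):
    """Draw a cost-effectiveness value (e.g. DALYs averted per $1,000) from a
    lognormal distribution centred on `median`. Purely illustrative."""
    return median * math.exp(random.gauss(0.0, sigma_log))

n = 100_000
a_wins = 0
for _ in range(n):
    a = sample_cost_effectiveness(10.0, 1.0)  # point estimate 10, wide uncertainty
    b = sample_cost_effectiveness(7.0, 1.0)   # point estimate 7, similarly wide
    if a > b:
        a_wins += 1

print(f"A beats B in {a_wins / n:.0%} of simulated worlds")
# Roughly 60%: a ranking would put A first, but B is actually better in ~4 out of 10 worlds.
```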
I think that 80,000 Hours, Open Phil and CEARCH do the substance of what you are looking for pretty well and put a lot of money and hours into it—I don’t think hard work in this area is “surprisingly rare”. I’m not sure if adding a whole lot more organisations here would achieve much, but there might be more room for efforts there!
Also I personally think that GiveWell might do the most work which achieves the substance of what you are looking for within global health and wellbeing. They are devoted to finding the most cost-effective interventions in the world that exist right now. Their “top charities” page is, in some ways, a handful of what they think are the “no. 1” ranked interventions. Yes, they only consider interventions with a lot of evidence behind them and are fairly conservative, but I think it achieves much of the substance of your 3 steps.
Also like you mentioned, the Copenhagen Consensus also does a pretty good job of outlining what they think might be the 12 best interventions (best things first) with much reasoning and calculation behind each one. This is not far off a straight rank.
I’d be interested to hear what you think might be the upsides of “ranking” specifically vs clustering our best estimates at effective cause areas/interventions.
I’d be interested to get a comment from @Joel Tan here, as he and the CEARCH team have probably considered this question more than most of us.
Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!
First, I think precise ranking of “cause areas” is nearly impossible as it’s hard to meaningfully calculate the “cost-effectiveness” of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.
There’s a lot going on here. I suspect I’m more optimistic than you that sharing uncertain but specific rankings is helpful for clarifying views and making progress? I agree in principle that what we want to do is evaluate specific actions (“interventions”), but I still think you can rank expected cost-effectiveness at a slightly more zoomed-out level, as long as you are comparing across roughly similar levels of abstraction. (Implicitly, you’re evaluating the average intervention in that category, rather than a single intervention.) Given these things, I don’t think I endorse the view that “you at least need to have an intervention which has probably already been tried and researched to some degree at least.”
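To make that concrete, here’s a minimal sketch of what I mean by evaluating “the average intervention in the category” (the causes, interventions and numbers are all hypothetical):

```python
# Hypothetical cost-effectiveness guesses (say, DALYs averted per $100k) for
# candidate interventions, grouped by cause area. None of these are real estimates.
cause_to_interventions = {
    "Cause X": {"X1": 40.0, "X2": 25.0, "X3": 10.0},
    "Cause Y": {"Y1": 90.0, "Y2": 5.0},
    "Cause Z": {"Z1": 30.0},
}

def expected_cause_level_ce(interventions: dict) -> float:
    """Cost-effectiveness of the 'average' intervention in the cause.
    A fuller exercise would weight by how likely each intervention is to be funded."""
    return sum(interventions.values()) / len(interventions)

ranked = sorted(cause_to_interventions.items(),
                key=lambda item: expected_cause_level_ce(item[1]),
                reverse=True)

for cause, interventions in ranked:
    print(f"{cause}: ~{expected_cause_level_ce(interventions):.0f} DALYs per $100k in expectation")
```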
Secondly I think having public specific rankings has potential to be both meaningless and reputationally dangerous.
I agree with the reputational risks and the potential for people to misunderstand your claim or think that it’s more confident than it is, etc. I somewhat suspect that this will be mitigated by there just being more such rankings though, as well as having clear disclaimers. E.g. at the moment, people might look at 80k and Open Phil rankings and conclude that there must be strong evidence behind the ratings. But if they see that there are 5 different ranked lists with only some amount of overlap, it’s implicitly pretty clear that there’s a lot of subjectivity and difficult decision-making going into this. (I don’t agree with it being “meaningless” or “dishonest”—I think that relates to the points above.)
Also I personally think that GiveWell might do the most work which achieves the substance of what you are looking for within global health and wellbeing. Also like you mentioned the Copenhagen Consensus also does a pretty good job of outlining what they think might be the 12 best interventions (best things first) with much reasoning and calculation behind each one.
Thanks a lot for these pointers! I will look into them more carefully. This is exactly the sort of thing I was hoping to receive in response to this quick take, so thanks a lot for your help. Best Things First sounds great and I’ve added it to my Audible wishlist. Is this what you have in mind for GiveWell? (Context: I’m not very familiar with global health.)
I’d be interested to hear what you think might be the upsides of “ranking” specifically vs clustering our best estimates at effective cause areas/interventions.
Oh this might have just been me using unintentionally specific language. I would have included “tiered” lists as part of “ranked”. Indeed the Open Phil list is tiered rather than numerically ranked. Thank you for highlighting this though, I’ve edited the original post to add the word “tiered”. (Is that what you meant by “clustering our best estimates at effective cause areas/interventions”? Lmk if you meant something else.)
Thanks again!
Thanks for the thoughts, Jamie and Nick!
For what it’s worth, CEARCH’s list of evaluated causes (or more specifically, top interventions in various causes) and their estimated cost-effectiveness is here: https://docs.google.com/spreadsheets/d/14y9IGAyS6s4kbDLGQCI6_qOhqnbn2jhCfF1o2GfyjQg/edit#gid=0
I think Nick is fundamentally correct that because uncertainty is so high, sorting isn’t particularly useful. Most grantmaking organizations, to my understanding, prefer to use a cost-effectiveness threshold/funding bar to decide whether or not to recommend/support a particular cause/intervention/charity.
For ourselves, we use 10x GiveWell for GHD, as (a) most of the money we move is EA money and the counterfactual is GiveWell (so to have impact, the ideas we redirect funding/talent to need to be more cost-effective than GiveWell in expectation), and (b) GiveWell is very robust in their discounting relative to us (which takes a lot of time and effort), which is why we set such an aggressive bar. An aggressive bar helps ensure that even if your cost-effectiveness estimate is too optimistic relative to GiveWell, it can eat a lot of implicit discounts while still ensuring that the true cost-effectiveness is >GiveWell. (So when we say something is >=10x GiveWell, it’s not literally so; it’s more of a reasonably high-confidence claim that it’s probably more cost-effective in expectation.)
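As a rough illustration of what the aggressive bar buys you (with made-up numbers, not our actual discounts):

```python
# "1.0" = GiveWell-level cost-effectiveness. All numbers are made up for illustration.
funding_bar = 10.0         # only recommend ideas estimated at >= 10x GiveWell
our_estimate = 12.0        # our own (less heavily discounted) estimate for some idea
implicit_discount = 0.15   # suppose GiveWell-style rigour would cut that estimate to 15%

true_cost_effectiveness = our_estimate * implicit_discount  # 1.8x GiveWell

print(f"Passes the 10x bar: {our_estimate >= funding_bar}")
print(f"After an 85% implicit discount, still ~{true_cost_effectiveness:.1f}x GiveWell")
# Even after a large discount, the idea stays above the 1x GiveWell counterfactual;
# a 1x bar would leave no such safety margin.
```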
Thank you!
I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list, so I appreciate that.
Btw, if you had 5-10 mins spare I think it’d be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don’t know what “MEV” stands for, or what the “cost-effectiveness” or “cause no.” columns are referring to. (Currently these things mean that I probably won’t share the spreadsheet with people because I’d need to do a lot of explaining or caveating to them, whereas I’d be more likely to share it if it was more self-explanatory.)
Hi Jamie, I’ve updated to clarify that the “MEV” column is just “DALYs per USD 100,000”. I’ve hidden some of the other columns (they’re just for internal administrative/labelling purposes).
Just dumping this here in case it’s helpful for someone stumbling back across this: here’s a “Worksheet for choosing the most pressing problem” I made.
I think people/orgs do some amount of this, but it’s kind of a pain to share them publicly. I prefer to share this kind of stuff with specific people in Google Docs, in in-person conversations, or on Slack.
I also worry somewhat about people deferring to random cause prio posts, and I’d guess that on the current margin, more cause prio posts that are around the current median in quality make the situation worse rather than better (though I could see it going either way).
Thanks! When you say “median in quality” what’s the dataset/category that you’re referring to? Is it e.g. the 3 ranked lists I referred to, or something like “anyone who gives this a go privately”?
Sorry, it wasn’t clear. The reference class I had in mind was cause prio focussed resources on the EA Forum.
I had thought a public list that emphasized the potential impact of different interventions and the likely costs associated with discovering the actual impact would be great.
The value of graduate training for EA researchers: researchers seem to think it is worthwhile
Imagine the average “generalist” researcher employed by an effective altruist / longtermist nonprofit with a substantial research component (e.g. Open Philanthropy, Founders’ Pledge, Rethink Priorities, Center on Long-Term Risk). Let’s say that, if they start their research career with an undergraduate/bachelor’s degree in a relevant field but no graduate training, each year of full-time work, they produce one “unit” of impact.
In a short Google Form, posted on the Effective Altruism Researchers and EA Academia Facebook groups, I provided the above paragraph and then asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a master’s degree in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response, from the 8 respondents, was 1.7.
I also asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a PhD in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response was 3.9.
I also asked people whether they were a researcher at a nonprofit, in academia, or neither, and whether they had graduate training themselves or not.** Unsurprisingly, researchers in academia rated the value of graduate training more highly than researchers in nonprofits (2.0 and 4.3 for each year with a master’s and a PhD, respectively, compared to 1.2 and 1.7), as did respondents with graduate training themselves, relative to respondents without graduate training (2.0 and 5.2 compared to 1.2 and 1.7).
I asked a free-text response question: “Do you think that the value of graduate training would increase/compound, or decrease/discount, as they got further into their career?” 4 respondents wrote that the value of graduate training would decrease/discount as they got further into their career, but didn’t provide any explanations for this reasoning. This was also my expectation; my reasoning was that one or more years of graduate training, which would likely only be partly relevant to the nonprofit work that you would be doing, would become relatively less important later on, since your knowledge, skills, and connections would have increased through your work in nonprofits.
However, two respondents argued that the value of graduate training would increase/compound. One added: “People without PhDs are sadly often overlooked for good research positions and also under-respected relative to their skill. If they don’t have a PhD they will almost never end up in a senior research position.” The other noted that it would “increase/compound, particularly if they do things other than anonymous research, e.g. they build an impressive CV, get invited to conferences because of their track record. If one doesn’t have a PhD, the extent of this is limited, mostly unless one fits a high-credibility non-academic profile, e.g. founded an organization.”
I did some simple modelling / back of the envelope calculations to estimate the value of different pathways, accounting for 1) the multipliers on the value of your output as discussed in the questions on the form and 2) the time lost on graduate education.*** Tldr; with the multiplier values suggested by the form respondents, graduate education clearly looks worthwhile for early career researchers working in EA nonprofits, assuming they will work in an EA research nonprofit for the rest of their career. It gets a little more complex if you try to work it out in financial terms, e.g. accounting for tuition fees.
For my own situation (with a couple of years of experience in an EA research role, no graduate training), I had guessed multipliers of 1.08 and 1.12 on the value of my research in the ~10 years after completing graduate training, for a master’s and PhD, respectively. For the remaining years of a research career after that, I had estimated 1.01 and 1.02. Under these assumptions, the total output of a nonprofit research career with or without a master’s looks nearly identical for me; the output after completing a PhD looks somewhat worse. However, with the average values from the Google Form, the output looks much better with a master’s than without, and with a PhD than with just a master’s. Using the more pessimistic values from other EA nonprofit researchers, or respondents without graduate training, the order is still undergrad only < master’s < PhD, though the differences are smaller. In my case, tuition fees seem unlikely to affect these calculations much (see the notes on the rough models sheet).
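For anyone who wants to poke at the logic, here’s roughly the shape of the back-of-the-envelope model (simplified per the assumptions in the footnotes; the career length and years of study below are illustrative assumptions, not figures from my spreadsheet):

```python
CAREER_YEARS = 40  # assumed total years from starting work to retirement

def career_output(training_years: int, multiplier_first_10: float, multiplier_after: float) -> float:
    """Total 'units' of impact over a career, where 1 unit/year = bachelor's-only output.
    Assumes (as in the footnotes) no research output during the degree itself and
    that a year of output near retirement counts the same as a year now."""
    working_years = CAREER_YEARS - training_years
    first_phase = min(10, working_years)           # ~10 years at the post-training multiplier
    later_phase = max(0, working_years - 10)
    return first_phase * multiplier_first_10 + later_phase * multiplier_after

scenarios = {
    "bachelor's only":           career_output(0, 1.00, 1.00),
    "master's (my guesses)":     career_output(2, 1.08, 1.01),
    "PhD (my guesses)":          career_output(6, 1.12, 1.02),  # 6 years of study is an assumption
    "master's (survey average)": career_output(2, 1.70, 1.70),  # assumes the multiplier persists
    "PhD (survey average)":      career_output(6, 3.90, 3.90),  # assumes the multiplier persists
}

for label, units in scenarios.items():
    print(f"{label:27s} ~{units:.0f} units")
```

With these placeholder inputs, the master’s and bachelor’s-only paths come out nearly identical under my own multipliers, while the survey-average multipliers make both degrees look clearly worthwhile, matching the qualitative conclusions above.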
Of course, which option is best for any individual also depends on numerous other career strategy considerations. For example, let’s think about “option value.” Which options are you likely to pursue if research in EA nonprofits doesn’t work out or you decide to try something else? Pursuing graduate training might enable you to test your fit with academia and pivot towards that path if it seems promising, but if your next best option is some role in a nonprofit that is unrelated to research (e.g. fundraising), then graduate education might not be as valuable.
I decided to post here partly in case others would benefit, and partly because I’m interested in feedback on/critiques of my reasoning, so please feel free to be critical in the comments!
*For both questions, I noted: “(There are many complications and moderating factors for the questions below, but answering assuming the “average” for all other unspecified variables could still be helpful.)” and “1 = the same as if they just had a bachelor’s; numbers below 1 represent reduced impact, numbers above 1 represent increased impact.”
**These questions were pretty simplified, not permitting people to select multiple options.
***Here, for simplicity, I assumed that:
- You would produce no value while doing your graduate training, which seems likely to be false, especially during (the later years of) a PhD.
- The value of 1 year after your graduate education was the same as 1 year before retirement, which seems likely to be false.
In a short Google Form, posted on the Effective Altruism Researchers and EA Academia Facebook groups, I provided the above paragraph and then asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a master’s degree in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response, from the 8 respondents, was 1.7.
The mechanism may not be causal. If you’re conditioning on type of person who can get accepted into graduate programs + get funding + manage to stick with a PhD program, you are implicitly drawing on a very different pool of people than if you don’t condition on this.
That’s a good point—my intention was that it would be the same individual in each instance, just with or without the training, but I didn’t word the survey question clearly to reflect that.
It’s an interesting analysis. Just a thought—since the value of 1 unit is up to the responder if I’ve understood correctly, it might be more meaningful to calculate ratios of the responses for each person and average these rather than average the responses to each part—for the latter, if any responder picked small “unit” sizes and correspondingly gave large numerical values, they would make an outsized contribution. Calculating ratios first cancels out whatever “unit” people have decided on. Though it should only matter much if people’s “units” differ considerably in size.
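If I’ve understood the suggestion correctly, here’s a toy example of the difference this can make (made-up responses):

```python
# Two hypothetical respondents. Respondent 2 happens to answer with much larger
# raw numbers (an implicitly smaller "unit"), so they dominate any simple average.
responses = [
    {"masters": 1.5, "phd": 3.0},    # thinks a PhD is ~2x a master's
    {"masters": 10.0, "phd": 30.0},  # thinks a PhD is ~3x a master's, but uses bigger numbers
]

# Averaging each question separately, then comparing the averages:
avg_masters = sum(r["masters"] for r in responses) / len(responses)   # 5.75
avg_phd = sum(r["phd"] for r in responses) / len(responses)           # 16.5
print(f"ratio of averages:            {avg_phd / avg_masters:.2f}")   # ~2.87, pulled toward respondent 2

# Averaging each person's own ratio first, which cancels out their choice of unit:
avg_of_ratios = sum(r["phd"] / r["masters"] for r in responses) / len(responses)
print(f"average of per-person ratios: {avg_of_ratios:.2f}")           # 2.50
```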
Hey Jamie, thanks for doing this, I find the results interesting. Just want to point out what I think are two small typos that made it harder to understand what you wrote:
I asked a free-text response question: “Do you think that the value of graduate training would increase/compound, or decrease/discount, the got further into their career?” 4 respondents wrote that the value of graduate training would decrease/discount the got further into their career
Could you correct what you put above?
Also, I’m curious on
1. What Master’s or Ph.D degrees are you considering to take?
2. What do you think would be a good Master’s or Ph.D degree to take for the average “generalist” researcher at an EA / longtermist non-profit (if this is different from what you personally would take)?
Psychology, sociology, (history), (political science). I imagine that that’s an unusually broad range to be considering, but I didn’t want to rule anything out prematurely. My undergraduate was in history but my research in nonprofits has been much more social science-y, and a bit more quantitative.
I imagine that there’s a very broad range that could be on the table. I haven’t thought about this question in general that much for “EA / longtermist” research orgs. For effective animal advocacy research organisations, my main guesses would be the same as the list above, plus economics. But there could be others that I haven’t thought about, related to those options, or an unusually good fit for some individuals etc.
Buying a house will probably save you lots of money, which you can later donate, but it might not make much difference (and may work out as negative) in terms of your ability to do good.
It seems like common sense that buying a house saves you from wasting money on rent and works out better, financially, in the long term. But earlier this year, John Halstead wrote a blogpost providing a bunch of reasons not to buy a house.
I had another look at John’s calculations. I kept the basic calculations the same, but added a few considerations and re-checked the appropriate numbers for London (where I live). I also added various tabs to the spreadsheet to compare things like variations in interest rates, property prices, timeframes for buying and selling, and other costs. In every scenario, unless there’s a housing crash shortly after you buy, it looks like buying comes out as far, far better, from a financial perspective. In the best-guess, realistic scenario, buying came out as about £550,000 better after 10 years. John has also had another look at his calculations since his post and seems more optimistic about buying. I haven’t looked at figures and costs for countries other than the UK, but the differences are so large that I’d be quite surprised if investing and renting came out as more favourable in (m)any countries.
This doesn’t address the concerns about buying in John’s blog post (e.g. that you will only be able to access the money when you’re older). But if you’re interested in patient philanthropy, and are happy to donate more accumulated wealth in several decades’ time (when you downsize or die) rather than having a strong preference for donating less sooner, then buying a house looks better. (For discussion, see “Giving now vs giving later” and “How becoming a ‘patient philanthropist’ could allow you to do far more good”.)
Despite the large raw difference between buying vs. renting and investing, these differences might mean surprisingly little, in terms of ability to do good in the world, if you apply a discount to the value of future money to calculate its net present value. If you apply a high discount rate, then the gains are practically zero. Indeed, some EA orgs express a strong preference for money sooner rather than later. I haven’t worked this bit out properly, but if you take these numbers literally (and reject patient philanthropy) it might be better to just donate sooner rather than to save up for a deposit.
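For anyone who wants to sanity-check the shape of the comparison (and the effect of discounting), here’s a stripped-down sketch of the kind of calculation in the spreadsheet. Every figure below is a placeholder for illustration, not a number from my actual analysis:

```python
years = 10
house_price = 560_000        # purchase price (£)
deposit = 56_000             # 10% deposit
mortgage_rate = 0.02         # annual interest (treated as interest-only, for simplicity)
house_growth = 0.03          # assumed annual house price growth
annual_rent = 24_000         # rent for an equivalent home
rent_growth = 0.02           # annual rent increases
stock_return = 0.05          # return if the deposit were invested instead
buying_costs = 10_000        # stamp duty, legal fees, etc. (one-off)
selling_costs_pct = 0.02     # estate agent fees etc. when you sell

# Buying: pay interest each year, end up owning an appreciated asset.
loan = house_price - deposit
interest_paid = loan * mortgage_rate * years
sale_proceeds = house_price * (1 + house_growth) ** years * (1 - selling_costs_pct)
buy_change_in_wealth = sale_proceeds - loan - interest_paid - buying_costs - deposit

# Renting: pay rent each year, keep the deposit invested in stocks.
rent_paid = sum(annual_rent * (1 + rent_growth) ** t for t in range(years))
rent_change_in_wealth = deposit * (1 + stock_return) ** years - deposit - rent_paid

gap = buy_change_in_wealth - rent_change_in_wealth
print(f"Buying better by ~£{gap:,.0f} over {years} years (before discounting)")

# But the housing gain only arrives when you sell; discounting shrinks its value today.
for discount_rate in (0.0, 0.035, 0.10):
    print(f"  at a {discount_rate:.1%} discount rate: ~£{gap / (1 + discount_rate) ** years:,.0f}")
```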
I also live in London, and bought a house in April 2016. So I’ve thought about these calculations a fair bit, and happy to share some thoughts here:
One quick note on your calculations is that stamp duty has been massively, but temporarily, cut due to COVID. You note it’s currently £3k on a £560k flat. Normally it would be £18k. You can look at both sets of rates here.
When I looked at this, the calculation was heavily dependent on how often you expect to move. Every time you sell a home and buy a new one you incur large fixed costs; normally 2-4% of purchase price in stamp duty, 1-3% in estate agent fees, and a few other fixed costs which are minor in the context of the London property market but would be significant if you were looking at somewhere much cheaper (legal fees etc.). All of this seems well accounted for in your spreadsheet, but it means that if you expect to move every 1-3 years then the ongoing saving will be swamped by repeatedly incurring these costs.
There’s also a somewhat fixed time cost; when I bought a home I estimate I spent the equivalent of 1 week of full-time work on the process (not the moving itself), most of which was spent doing things I wouldn’t have needed to do for rented accommodation.
All told, for my personal situation in 2016 I thought I should only buy if I expected to stay in that flat for at least 5 years, and to make the calculation clear I would have wanted that to be more like 10 years. As a result, buying looks much better if you have outside factors already tying you down; a job that is very unlikely to be beaten, kids, a city you and/or your partner loves, etc.
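To put rough numbers on that break-even logic (purely illustrative figures, not my actual ones):

```python
# Fixed transaction costs recur every time you move, so they swamp the ongoing
# saving from owning if you move frequently. Illustrative numbers only.
house_price = 560_000
stamp_duty_pct = 0.03     # within the normal 2-4% range
agent_fee_pct = 0.02      # within the 1-3% range
other_fixed = 3_000       # legal fees, surveys, etc.
annual_saving = 8_000     # hypothetical yearly gain from owning vs renting-and-investing

cost_per_move = house_price * (stamp_duty_pct + agent_fee_pct) + other_fixed
print(f"Fixed cost per move: £{cost_per_move:,.0f}")
print(f"Years to break even: {cost_per_move / annual_saving:.1f}")
# ~4 years with these numbers; moving every 1-3 years means you never recoup the fixed costs.
```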
On expected returns: I think the calculation is much closer than it comes out with your numbers, because I don’t think a 7.5% housing return is a sensible average to use going forward. I had something like a 2% real (~4% nominal, but I generally prefer to think in terms of real) estimate pencilled in for housing, and more like a 5% real (7% nominal) rate pencilled in for stocks. There’s a longer discussion there, but the key point I would make is that interest rates have fallen dramatically in recent decades, boosting the value of assets which pay out streams of income, i.e. rent/dividends. It’s unclear to me that the recent trend towards ever lower rates can go much further, and markets don’t expect it to, so I didn’t want to tacitly assume that.
So far, that conservative estimate has been much closer: London house prices rose by roughly 1.5% annualised between April 2016 and March 2020. Then a pandemic hit, but I’m happy to exclude that from ‘things I could have reasonably expected’.
No, I didn’t list the “other” pros and cons, this is just the financial perspective.
I don’t have a good sense of how difficult it is to move houses. But my guess is that a decision to move for work or not wouldn’t be that dependent on selling a house. E.g. you either want to stay, come what may, because of reasons like friends, family, partners etc, or you’re personally happy to move, and wouldn’t mind selling then renting?
Thanks for this Jamie. Useful to know that the outcome can differ according to person/location. I reckon I’ll do this exercise for myself at some point. A few quick questions/comments (I haven’t looked at this in detail so apologies if I’ve missed anything):
Have you identified the key difference(s) between your calculation and John’s calculation that leads to the different result? It might be helpful to call this out
E.g. is it mainly driven by higher rental costs in London / the fact that you’ve assumed a smaller deposit for the house etc.
Pretty minor point, but the 3.5% discount rate should decline over time and it doesn’t seem you’ve factored this in (it shouldn’t really change much though as you’re not looking over a very long time scale)
I’m not really sure how useful the 3.5% discount rate is for philanthropists, in particular EA philanthropists. It includes a discount of future utility on account of the future being less morally valuable, which is something that philosophers have pretty much rejected and is quite counter to EA philosophy. There are good reasons for EA philanthropists to discount (more on that here and here) but I don’t think there’s a good reason for us to expect it to lead to a 3.5% rate. It could actually be higher or lower depending on an individual’s preferred cause area/underlying ethical views. The general point that you’re making that buying a house only provides access to money when older, and therefore that this becomes subject to discounting is a very useful one though.
Doesn’t John’s calculation also say buying is better? Or am I missing something?
Have you identified the key difference(s) between your calculation and John’s calculation that leads to the different result? It might be helpful to call this out
No, I haven’t gone through and done that. Actually, John’s calculations still come out in favour of buying from a financial perspective, albeit by a much smaller margin than in my calculations; I think he was put off for other reasons.
Pretty minor point, but the 3.5% discount rate should decline over time and it doesn’t seem you’ve factored this in (it shouldn’t really change much though as you’re not looking over a very long time scale)
I’m probably doing the maths completely wrong on that bit… suggestions for the correct formula to use are welcome. Commenting on the sheet is currently on if you want to comment directly.
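My current best guess at how it should work (corrections very welcome) is to apply a stepwise declining schedule, compounding each year at whichever rate applies to that year. The bands below are my understanding of the Green Book’s declining long-term rates, so treat them as an assumption to double-check:

```python
# Assumed declining discount-rate schedule (year ranges and rates to be double-checked).
DECLINING_SCHEDULE = [
    (30, 0.035),    # years 1-30
    (75, 0.030),    # years 31-75
    (125, 0.025),   # years 76-125
]

def discount_factor(year: int) -> float:
    """Present-value factor for a cash flow `year` years out, compounding each
    year at the rate applicable to that year."""
    factor = 1.0
    for t in range(1, year + 1):
        rate = next((r for limit, r in DECLINING_SCHEDULE if t <= limit),
                    DECLINING_SCHEDULE[-1][1])  # fall back to the last band
        factor /= (1 + rate)
    return factor

# E.g. £100,000 of housing wealth realised in 40 years' time:
print(f"£{100_000 * discount_factor(40):,.0f}")  # ~£26,500 in present-value terms
```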
It could actually be higher or lower depending on an individual’s preferred cause area/underlying ethical views. The general point that you’re making that buying a house only provides access to money when older, and therefore that this becomes subject to discounting is a very useful one though
Yeah I haven’t got my head very thoroughly round the various arguments on this, so thanks for sharing. My impression was also that using 3.5% didn’t make much sense and that it should probably either go lower than that (for “patient” reasons) or much higher (if you think opportunities for cost-effective giving will diminish rapidly for various reasons).
Some relevant context I probably should have added to the post was that I did this calculation because I was very surprised at John’s overall conclusion and wanted to check it, and, despite this not being very thorough or anywhere near my research “expertise”, I thought other people might benefit from these rough and ready efforts, so decided to share.
How did Nick Bostrom come up with the “Simulation argument”*?
Below is an answer Bostrom gave in 2008. (Though note, Pablo shares a comment below that Bostrom might be misremembering this, and he may have taken the idea from Hans Moravec.)
“In my doctoral work, I had studied so-called self-locating beliefs and developed the first mathematical theory of observation selection effects, which affects such beliefs. I had also for many years been thinking a lot about future technological capabilities and their possible impacts on humanity. If one combines these two areas – observation selection theory and the study of future technological capacities – then the simulation argument is only one small inferential step away.
Before the idea was developed in its final form, I had for a couple of years been running a rudimentary version of it past colleagues at coffee breaks during conferences. Typically, the response would be “yeah, that is kind of interesting” and then the conversation would drift to other topics without anything having been resolved.
I was on my way to the gym one evening and was again pondering the argument when it dawned on me that it was more than just coffee-break material and that it could be developed in a more rigorous form. By the time I had finished the physical workout, I had also worked out the essential structure of the argument (which is actually very simple). I went to my office and wrote it up.
(Are there any lessons in this? That new ideas often spring from the combining of two different areas or cognitive structures, which one has previously mastered at a sufficiently deep level, is a commonplace. But an additional possible moral, which may not be as widely appreciated, is that even when we do vaguely realize something, the breakthrough often eludes us because we fail to take the idea seriously enough.)”
Context for this post:
I’m doing some research on “A History of Robot Rights Research,” which includes digging into some early transhumanist / proto-EA type content. I stumbled across this.
I tend to think of researchers as contributing either more through being detail oriented—digging into sources or generating new empirical data—or being really inventive and creative. I definitely fall into the former camp, and am often amazed/confused by the process of how people in the latter camp do what they do. Having found this example, it seemed worth sharing quickly.
*Definition of the simulation argument: “The simulation argument was set forth in a paper published in 2003. A draft of that paper had previously been circulated for a couple of years. The argument shows that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. The argument has attracted a considerable amount of attention, among scientists and philosophers as well as in the media.”
Note that Hans Moravec, an Austrian-born roboticist, came up with essentially the same idea back in the 1990s. Bostrom was very familiar with Moravec’s work, so it’s likely he encountered it prior to 2003, but then forgot it by the time he made his rediscovery.
“Cryptomnesia occurs when a forgotten memory returns without its being recognized as such by the subject, who believes it is something new and original. It is a memory bias whereby a person may falsely recall generating a thought, an idea, a tune, a name, or a joke,[1] not deliberately engaging in plagiarism but rather experiencing a memory as if it were a new inspiration.”
I haven’t read Moravec’s book very thoroughly, but I ctrl+f’d for “simulation” and couldn’t see anything very explicitly discussing the idea that we might be living in a simulation. There are a number of instances where Moravec talks about running very detailed simulations (and implying that these would be functionally similar to humans). It’s possible (quite likely?) Bostrom didn’t ever see the 1995 article where Moravec “shrugs and waves his hand as if the idea is too obvious.”
Either way, it seems true that (1) the idea itself predates Bostrom’s discussion in his 2003 article, (2) Bostrom’s discussion of this specific idea is more detailed than Moravec’s.
Bostrom (2003) cited Moravec (1988), but not for this specific idea—it’s only for the idea that “One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain.”
But yeah, his answer to the question “How did you come up with this?” in the 2008 article I linked to in the original post seems misleading, because he doesn’t mention Moravec at all and implies that he came up with the idea himself.
Oh, nice, thanks very much for sharing that. I’ve cited Moravec in the same research report that led me to the Bostrom link I just shared, but hadn’t seen that article and didn’t read Mind Children fully enough to catch that particular idea.
I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since Summer this year.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move onto other endeavours? Some shower thoughts:
I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it; EA.
I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
Seems to me that EA has a good track record of important insights on otherwise neglected topics. Existential risk, risks of astronomical suffering, AI safety, wild animal suffering; I attribute a lot of success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
Of course, there’s been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
In other roles or projects I’m considering, when I think of questions like “who will actually realistically consider acting on this idea I think is great? Giving up their time or money to make this happen?” the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo chamber-y and bias-related reasons that might feed into this, but I think there are some real and powerful ones too.
Written quickly (15-20 mins), not neatly/well (originally to post on LinkedIn rather than here). There are better takes on this topic (e.g.).
There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.
Further caveats I didn’t have space to make on LinkedIn: I wrote this quick take as an individual, not for EAIF or my other projects etc; I haven’t checked this with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. Seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what’s actually good and useful in the long-run after accounting for indirect effects. I haven’t attempted any sort of overall quantitative analysis of the overall effects.
But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!
I agree with so much here.
I have my responses to the question you raised: “So why do I feel inclined to double down on effective altruism rather than move onto other endeavours?”
I have doubled down a lot over the last ~1.5 years. I am not at all shy about being an EA; it is even on my LinkedIn!
This is partly because of integrity and honesty reasons. Yes, I care about animals and AI and like math and rationality and whatnot. All this is a part of who I am.
Funnily enough, a non-negligible reason why I have doubled down (and am more pro-EA than before) is the sheer quantity of not-so-good critiques. And they keep publishing them.
Another reason is because there are bizarre caricatures of EAs out there. No, we are not robotic utility maximizers. In my personal interactions, when people hopefully realize that “okay this is a just another feel-y human with a bunch of interests who happens to be vegan and feels strongly about donations.”
“I have personally benefited massively in achieving my own goals.” — I hope this experience is more common!
I feel EA/adjacent community epistemics have enormously improved my mental health and decision-making; being in the larger EA-sphere has improved my view of life; I have more agency; I am much more open to newer ideas, even those I vehemently disagree with; I am much more sympathetic to value and normative pluralism than before!
I wish more ever day EAs were louder about their EA-ness.
Given that effective altruism is “a project that aims to find the best ways to help others, and put them into practice”[1] it seems surprisingly rare to me that people actually do the hard work of:
(Systematically) exploring cause areas
Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are by 80,000 Hours, Open Philanthropy’s, and CEARCH’s list.
Related things I appreciate, but aren’t quite what I’m envisioning:
Tools and models like those by Rethink Priorities and Mercy For Animals, though they’re less focused on explanation of specific prioritisation decisions.
Longlists of causes by Nuno Sempere and CEARCH, though these don’t provide ratings, rankings, and reasoning.
Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation’s broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.
If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
I’m a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain… and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
- Had that hypothesis worn down by various information and arguments I encountered and changed my views on the top causes
- Didn’t ever go back and do a systemic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a long-list that includes ‘not-core-EA™-cause-areas’ or based on criteria other than ITN).
I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
Rough and informal explanations welcome. I’d especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I’d like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.
Thank Jamie, I think cause prioritisation is super important as you say, but I don’t think its as neglected as you think, at least not within the scope of global health and wellbeing. I agree that the substance of your the 3 part list being important, but I wouldn’t consider the list the best measure of how much hard cause prioritisation work has been done. It seems a bit strawman-ish as I think there are good reasons (see below) why those “exact” things aren’t being done.
First I think precise ranking of “cause areas”is nearly impossible as its hard to meaningfully calculate the “cost-effectiveness” of a cause, you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.
Secondly I think having public specific rankings has potential to be both meaningless and reputationally dangerous. I think clustering the best interventions we know of and sharing the estimated cost effectiveness is fantastic (like Givewell, 80,000 hours, CEARCH and Copenhagen do), but I don’t think adding ranked specificity is very helpful because....
Uncertainty is so high and confidence intervals so wide in these calculations that specific rankings can be fairly meaningless. When all confidence intervals for interventions overlap, I think providing a specific ranking can be almost dishonest
Specific public rankings for causes/interventions has the potential downside of being inflammatory and unhelpful for the effective altruism movement. We’ve already seen some obvious backlash and downside of the big plug for working towards AI safety being put forward as something like “the most important” intervention. Imagine if orgs were publicly pushing seemingly concrete rankings? Much of the public and intellectual world is likely to misunderstand the purpose of it and criticise, or even understand well and criticizse...
I think that 80,000 hours, Open Phil and CEARCH do the substance what you are looking for pretty well and put a lot of money and hours into it—I don’t think hard work in this area is “surprisingly rare” I’m not sure if adding a whole lot more organisations here would achieve much, but there might be more room for efforts there!
Also I personally think that GiveWell might do the most work which achieves the substance of what you are looking for within global health and wellbeing. They are devoted to finding the most cost effective interventions in the world that exist right now. Their “top charities” page is in some ways a handful of what they think are the “no 1″ ranked interventions. Yes they only consider interventions with a lot of evidence behind them and are fairly conservative but I think it achieves much of the substance of your 3 steps.
Also like you mentioned the Copenhagen Consensus also does a pretty good job of outlining what they think might be the 12 best interventions (best things first) with much reasoning and calculation behind each one. This is not ft off a straight rank
I’d be interested to hear what you think might be the upsides of “ranking” specifically vs clustering our best estimates at effective cause areas/interventions.
I’d be interested to get comment from @Joel Tan here as he and the CEARCH team have probably considered this question more than most of us
Interesting point thanks!
Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!
There’s a lot going on here. I suspect I’m more optimistic than you that sharing uncertain but specific rankings is helpful for clarifying views and making progress? I agree in principle that what we want to do is evaluate specific actions (“interventions”), but I still think you can rank expected cost-effectiveness at a slightly more zoomed-out level, as long as you are comparing across roughly similar levels of abstraction. (Implicitly, you’re evaluating the average intervention in that category, rather than a single intervention.) Given these things, I don’t think I endorse the view that “you at least need to have an intervention which has probably already been tried and researched to some degree at least.”
I agree with the reputational risks and the potential for people to misunderstand your claim or think that it’s more confident than it is, etc. I somewhat suspect that this will be mitigated by there just being more such rankings though, as well as having clear disclaimers. E.g. at the moment, people might look at 80k and Open Phil rankings and conclude that there must be strong evidence behind the ratings. But if they see that there are 5 different ranked lists with only some amount of overlap, it’s implicitly pretty clear that there’s a lot of subjectivity and difficult decision-making going into this. (I don’t agree with it being “meaningless” or “dishonest”—I think that relates to the points above.)
Thanks a lot for these pointers! I will look into them more carefully. This is exactly the sort of thing I was hoping to receive in response to this quick take, so thanks a lot for your help. Best Things First sounds great and I’ve added it to my Audible wishlist. Is this what you have in mind for GiveWell? (Context: I’m not very familiar with global health.)
Oh this might have just been me using unintentionally specific language. I would have included “tiered” lists as part of “ranked”. Indeed the Open Phil list is tiered rather than numerically ranked. Thank you for highlighting this though, I’ve edited the original post to add the word “tiered”. (Is that what you meant by “clustering our best estimates at effective cause areas/interventions? Lmk if you meant something else.)
Thanks again!
Thanks for the thoughts, Jaime and Nick!
For what it’s worth, CEARCH’s list of evaluated causes (or more specifically, top interventions in various causes) and their estimated cost-effectiveness is here: https://docs.google.com/spreadsheets/d/14y9IGAyS6s4kbDLGQCI6_qOhqnbn2jhCfF1o2GfyjQg/edit#gid=0
I think Nick is fundamentally correct that because uncertainty is so high, sorting isn’t particularly useful. Most grantmaking organizations, to my understanding, prefer to use a cost-effectiveness threshold/funding bar, to decide whether or not to recommend/support a particular cause/intervention/charity.
For ourselves, we use 10x GiveWell for GHD, as (a) most of the money we move is EA and the counterfactual is GiveWell (so to have impact we the ideas we redirect funding/talent to be more cost-effective than GiveWell in expectation, and (b) we have such an aggressive bar because GiveWell is very robust in their discounting relative to us (which takes a lot of time and effort). An aggressive bar helps ensure that even if your estimated cost-effectiveness estimate is too optimistic relative to GiveWell, it can eat a lot of implicit discounts while still ensuring that the true cost-effectiveness is >GiveWell. (so when we say something is >=10x GiveWell it’s not literally so, more of a reasonably high confidence claim that it’s probably more cost-effective (in expectation).
Thank you!
I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list so I appreciate that.
Btw, if you had 5-10 mins spare I think it’d be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don’t know what “MEV” stands for, or what the “cost-effectiveness” or “cause no.” columns are referring to. (Currently these things mean that I probably won’t share the spreadsheet with people because I’d need to do a lot of explaining or caveating to them, whereas I’d be more likely to share it if it was more self-explanatory.)
Hi Jaime, I’ve updated to clarify that the “MEV” column is just “DALYs per USD 100,000″. Have hidden some of the other columns (they’re just for internal administrative/labelling purposes).
Just dumping this here in case it’s helpful for someone stumbling back across this: Here’s a “Worksheet for choosing the most pressing problem” I made.
I think people/orgs do some amount of this, but it’s kind of a pain to share them publicly. I prefer to share this kind of stuff with specific people in Google Docs, in in-person conversations, or on Slack.
I also worry somewhat about people deferring to random cause prio posts, and I’d guess that on the current margin, more cause prio posts that are around the current median in quality make the situation worse rather than better (though I could see it going either way).
Thanks! When you say “median in quality” what’s the dataset/category that you’re referring to? Is it e.g. the 3 ranked lists I referred to, or something like “anyone who gives this a go privately”?
Sorry, it wasn’t clear. The reference class I had in mind was cause prio focussed resources on the EA forum.
I had thought a public list that emphasized the potential impact of different interventions, and the likely costs of discovering their actual impact, would be great.
The value of graduate training for EA researchers: researchers seem to think it is worthwhile
Imagine the average “generalist” researcher employed by an effective altruist / longtermist nonprofit with a substantial research component (e.g. Open Philanthropy, Founders’ Pledge, Rethink Priorities, Center on Long-Term Risk). Let’s say that, if they start their research career with an undergraduate/bachelor’s degree in a relevant field but no graduate training, each year of full-time work, they produce one “unit” of impact.
In a short Google Form, posted on the Effective Altruism Researchers and EA Academia Facebook groups, I provided the above paragraph and then asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a master’s degree in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response, from the 8 respondents, was 1.7.
I also asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a PhD in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response was 3.9.
I also asked people whether they were a researcher at a nonprofit, in academia, or neither, and whether they had graduate training themselves or not.** Unsurprisingly, researchers in academia rated the value of graduate training more highly than researchers in nonprofits (2.0 and 4.3 for each year with a master’s and a PhD, respectively, compared to 1.2 and 1.7), as did respondents with graduate training themselves, relative to respondents without graduate training (2.0 and 5.2 compared to 1.2 and 1.7).
I asked a free-text response question: “Do you think that the value of graduate training would increase/compound, or decrease/discount, the got further into their career?” 4 respondents wrote that the value of graduate training would decrease/discount the further they got into their career, but didn’t explain their reasoning. This was also my expectation; my reasoning was that one or more years of graduate training, which would likely only be partly relevant to the nonprofit work that you would be doing, would become relatively less important later on, since your knowledge, skills, and connections would have increased through your work in nonprofits.
However, two respondents argued that the value of graduate training would increase/compound. One added: “People without PhDs are sadly often overlooked for good research positions and also under-respected relative to their skill. If they don’t have a PhD they will almost never end up in a senior research position.” The other noted that it would “increase/compound, particularly if they do things other than anonymous research, e.g. they build an impressive CV, get invited to conferences because of their track record. If one doesn’t have a PhD, the extent of this is limited, mostly unless one fits a high-credibility non-academic profile, e.g. founded an organization.”
I did some simple modelling / back of the envelope calculations to estimate the value of different pathways, accounting for 1) the multipliers on the value of your output as discussed in the questions on the form and 2) the time lost on graduate education.*** Tldr; with the multiplier values suggested by the form respondents, graduate education clearly looks worthwhile for early career researchers working in EA nonprofits, assuming they will work in an EA research nonprofit for the rest of their career. It gets a little more complex if you try to work it out in financial terms, e.g. accounting for tuition fees.
For my own situation (with a couple of years of experience in an EA research role, no graduate training), I had guessed multipliers of 1.08 and 1.12 on the value of my research in the ~10 years after completing graduate training, for a master’s and PhD, respectively. For the remaining years of a research career after that, I had estimated 1.01 and 1.02. Under these assumptions, the total output of a nonprofit research career with or without a master’s looks nearly identical for me; the output after completing a PhD looks somewhat worse. However, with the average values from the Google form then the output looks much better with a master’s than without and with a PhD than with just a master’s. Using the more pessimistic values from other EA nonprofit researchers, or respondents without graduate training, the order is still undergrad only < master’s < PhD, though the differences are smaller. In my case, tuition fees seem unlikely to affect these calculations much (see the notes on the rough models sheet).
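For anyone who wants to poke at the rough modelling, here’s a minimal sketch of the kind of back-of-the-envelope calculation described above. The 40-year career length, the years spent in each degree, and the post-10-year multipliers are my own assumed placeholders; only the 1.7 and 3.9 first-decade multipliers come from the survey averages above, and the model follows the simplifying assumptions listed in the footnotes (zero output during training, undiscounted years).

```python
# Rough sketch of the back-of-the-envelope model described in the post.
# Career length, degree lengths, and post-10-year multipliers are assumed
# placeholders, not the figures from the original spreadsheet.

CAREER_YEARS = 40  # assumed total research career length

def total_output(years_in_training: float,
                 multiplier_first_10: float,
                 multiplier_after: float) -> float:
    """Total 'units' of impact, assuming zero output during training and
    a baseline of 1 unit per year without graduate training."""
    research_years = CAREER_YEARS - years_in_training
    first_block = min(10, research_years)          # years covered by the ~10-year multiplier
    remaining = max(0, research_years - 10)        # later years, smaller multiplier
    return first_block * multiplier_first_10 + remaining * multiplier_after

# Undergrad only: every year produces 1 unit.
undergrad = total_output(0, 1.0, 1.0)
# Master's: ~2 years of training, survey-average 1.7x for the first ~10 years,
# then an assumed 1.1x afterwards (the survey didn't pin down the later multiplier).
masters = total_output(2, 1.7, 1.1)
# PhD: ~4 years of training, survey-average 3.9x, then an assumed 1.2x afterwards.
phd = total_output(4, 3.9, 1.2)

print(undergrad, masters, phd)  # 40.0, 47.8, 70.2 under these placeholder values
```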
Of course, which option is best for any individual also depends on numerous other career strategy considerations. For example, let’s think about “option value.” Which options are you likely to pursue if research in EA nonprofits doesn’t work out or you decide to try something else? Pursuing graduate training might enable you to test your fit with academia and pivot towards that path if it seems promising, but if your next best option is some role in a nonprofit that is unrelated to research (e.g. fundraising), then graduate education might not be as valuable.
I decided to post here partly in case others would benefit, and partly because I’m interested in feedback on/critiques of my reasoning, so please feel free to be critical in the comments!
*For both questions, I noted: “There are many complications and moderating factors for the questions below, but answering assuming the “average” for all other unspecified variables could still be helpful.” and “1 = the same as if they just had a bachelor’s; numbers below 1 represent reduced impact, numbers above 1 represent increased impact.”
**These questions were pretty simplified, not permitting people to select multiple options.
*** Here, for simplicity, I assumed that:
- You would produce no value while doing your graduate training, which seems likely to be false, especially during (the later years of) a PhD.
- The value of 1 year after your graduate education was the same as 1 year before retirement, which seems likely to be false.
The mechanism may not be causal. If you’re conditioning on the type of person who can get accepted into graduate programs + get funding + manage to stick with a PhD program, you are implicitly drawing on a very different pool of people than if you don’t condition on this.
That’s a good point—my intention was that it would be the same individual in each instance, just with or without the training, but I didn’t word the survey question clearly to reflect that.
It’s an interesting analysis. Just a thought—since the value of 1 unit is up to the responder if I’ve understood correctly, it might be more meaningful to calculate ratios of the responses for each person and average these rather than average the responses to each part—for the latter, if any responder picked small “unit” sizes and correspondingly gave large numerical values, they would make an outsized contribution. Calculating ratios first cancels out whatever “unit” people have decided on. Though it should only matter much if people’s “units” differ considerably in size.
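A quick illustration of this point with invented numbers (the two “respondents” and their answers below are made up): if one respondent answers on a much larger “unit” scale, they dominate the simple averages, whereas averaging each person’s own PhD-to-master’s ratio cancels the scale out.

```python
# Toy illustration: averaging raw responses vs. averaging each respondent's
# own ratio. The responses below are invented for the example.

# (masters_response, phd_response) pairs; respondent B answers on a scale
# ten times larger than respondent A, and also reports a different ratio.
responses = [(1.5, 3.0), (15.0, 60.0)]

mean_masters = sum(m for m, _ in responses) / len(responses)    # 8.25
mean_phd = sum(p for _, p in responses) / len(responses)        # 31.5

# Ratio of the averages is dominated by the respondent using the bigger scale:
ratio_of_means = mean_phd / mean_masters                         # ~3.8

# Averaging each person's own ratio first cancels whatever "unit" they used:
mean_of_ratios = sum(p / m for m, p in responses) / len(responses)  # 3.0

print(ratio_of_means, mean_of_ratios)
```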
Hey Jamie, thanks for doing this, I find the results interesting. Just want to point out what I think are two small typos that made it harder to understand what you wrote:
Could you correct what you put above?
Also, I’m curious on
1. Which master’s or PhD degrees are you considering taking?
2. What do you think would be a good master’s or PhD degree for the average “generalist” researcher at an EA / longtermist nonprofit (if this is different from what you personally would take)?
Thanks!
Oops, I meant “the further they got”
Psychology, sociology, (history), (political science). I imagine that that’s an unusually broad range to be considering, but I didn’t want to rule anything out prematurely. My undergraduate was in history but my research in nonprofits has been much more social science-y, and a bit more quantitative.
I imagine that there’s a very broad range that could be on the table. I haven’t thought about this question in general that much for “EA / longtermist” research orgs. For effective animal advocacy research organisations, my main guesses would be the same as the list above, plus economics. But there could be others that I haven’t thought about, related to those options, or an unusually good fit for some individuals etc.
Buying a house will probably save you lots of money, which you can later donate, but it might not make much difference (and may work out as negative) in terms of your ability to do good.
It seems like common sense that buying a house saves you from wasting money on rent and works out better, financially, in the long term. But earlier this year, John Halstead wrote a blogpost providing a bunch of reasons not to buy a house.
I had another look at John’s calculations. I kept the basic calculations the same, but added a few considerations and re-checked the appropriate numbers for London (where I live). I also added various tabs to the spreadsheet to compare things like variations in interest rates, property prices, timeframes for buying and selling, and other costs. In every scenario, unless there’s a housing crash shortly after you buy, buying comes out as far, far better from a financial perspective. In the best-guess, realistic scenario, buying came out as about £550,000 better after 10 years. John has also had another look at his calculations since his post and seems more optimistic about buying. I haven’t looked at figures and costs for countries other than the UK, but the differences are so large that I’d be quite surprised if investing and renting came out as more favourable in (m)any countries.
This doesn’t address the concerns about buying in John’s blog post (e.g. that you will only be able to access the money when you’re older). But if you’re interested in patient philanthropy, and are happy to donate more accumulated wealth in several decades’ time (when you downsize or die) rather than having a strong preference for donating less sooner, then buying a house looks better. (For discussion, see “Giving now vs giving later” and “How becoming a ‘patient philanthropist’ could allow you to do far more good”.)
Despite the large raw difference between buying vs. renting and investing, these differences might mean surprisingly little, in terms of ability to do good in the world, if you apply a discount to the value of future money to calculate its net present value. If you apply a high discount rate, then the gains are practically zero. Indeed, some EA orgs express a strong preference for money sooner rather than later. I haven’t worked this bit out properly, but if you take these numbers literally (and reject patient philanthropy) it might be better to just donate sooner rather than to save up for a deposit.
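To show the shape of the comparison and how discounting shrinks the headline gain, here’s a very rough sketch with placeholder numbers; the deposit size, rent, growth rates, and the interest-only mortgage are all my own assumptions for illustration, not the figures from the linked spreadsheet.

```python
# Very rough buy-vs-rent sketch with placeholder numbers, plus a discount
# rate applied to the gain realised after 10 years. Not the spreadsheet model.

YEARS = 10
PRICE = 560_000          # assumed flat price
DEPOSIT = 56_000         # assumed 10% deposit
MORTGAGE_RATE = 0.02     # assumed interest-only mortgage, for simplicity
RENT_PER_YEAR = 21_600   # assumed £1,800/month rent
HOUSE_GROWTH = 0.03      # assumed nominal house price growth
STOCK_GROWTH = 0.05      # assumed nominal return on the invested deposit

# Buying (interest-only, so the outstanding mortgage stays constant):
house_value = PRICE * (1 + HOUSE_GROWTH) ** YEARS
interest_paid = (PRICE - DEPOSIT) * MORTGAGE_RATE * YEARS
buy_wealth = house_value - (PRICE - DEPOSIT) - interest_paid

# Renting: invest the deposit in stocks and pay rent each year:
rent_wealth = DEPOSIT * (1 + STOCK_GROWTH) ** YEARS - RENT_PER_YEAR * YEARS

raw_gain = buy_wealth - rent_wealth

# Discounting the gain (only realised in 10 years) back to present value:
for discount_rate in (0.0, 0.035, 0.10):
    npv = raw_gain / (1 + discount_rate) ** YEARS
    print(f"discount {discount_rate:.1%}: gain worth ~£{npv:,.0f} today")
```

Even under these crude assumptions, the raw gain is large but the present value shrinks quickly as the discount rate rises, which is the point about future money being worth less to an impatient donor.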
I also live in London, and bought a house in April 2016. So I’ve thought about these calculations a fair bit, and happy to share some thoughts here:
One quick note on your calculations is that stamp duty has been massively, but temporarily, cut due to COVID. You note it’s currently £3k on a £560k flat. Normally it would be £18k. You can look at both sets of rates here.
When I looked at this, the calculation was heavily dependent on how often you expect to move. Every time you sell a home and buy a new one you incur large fixed costs; normally 2-4% of purchase price in stamp duty, 1-3% in estate agent fees, and a few other fixed costs which are minor in the context of the London property market but would be significant if you were looking at somewhere much cheaper (legal fees etc.). All of this seems well accounted for in your spreadsheet, but it means that if you expect to move every 1-3 years then the ongoing saving will be swamped by repeatedly incurring these costs.
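A toy break-even check of this point (the annual saving and the cost percentages below are assumed for illustration, not anyone’s actual figures):

```python
# How many years of ownership does an assumed annual saving need to outweigh
# the fixed costs incurred each time you move? All numbers are placeholders.

PRICE = 560_000
FIXED_COST_PER_MOVE = PRICE * 0.03 + PRICE * 0.02  # ~3% stamp duty + ~2% agent/legal fees
ANNUAL_SAVING = 10_000                              # assumed saving from owning vs renting

for years_between_moves in (2, 5, 10):
    net = ANNUAL_SAVING * years_between_moves - FIXED_COST_PER_MOVE
    print(f"move every {years_between_moves} years: net {net:+,.0f} per ownership stint")
```

With these placeholder numbers, moving every couple of years leaves you worse off, while staying put for five to ten years comfortably clears the fixed costs, which matches the 1-3 year vs 5-10 year intuition above.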
There’s also a somewhat fixed time cost; when I bought a home I estimate I spent the equivalent of 1 week of full-time work on the process (not the moving itself), most of which was spent doing things I wouldn’t have needed to do for rented accommodation.
All told, for my personal situation in 2016 I thought I should only buy if I expected to stay in that flat for at least 5 years, and for the calculation to be clear-cut I would have wanted that to be more like 10 years. As a result, buying looks much better if you have outside factors already tying you down; a job that is very unlikely to be beaten, kids, a city you and/or your partner loves, etc.
For me this is a much closer calculation than it comes out with your numbers, because I don’t think a 7.5% housing return is a sensible average to use going forward. I had something like a 2% real (~4% nominal, but I generally prefer to think in terms of real) estimate pencilled in for housing, and more like a 5% real (7% nominal) rate pencilled in for stocks. There’s a longer discussion there, but the key point I would make is that interest rates have fallen dramatically in recent decades, boosting the value of assets which pay out streams of income, i.e. rent/dividends. It’s unclear to me that the recent trend towards ever lower rates can go much further, and markets don’t expect it to, so I didn’t want to tacitly assume that.
So far, that conservative estimate has been much closer: London house prices rose by roughly 1.5% annualised between April 2016 and March 2020. Then a pandemic hit, but I’m happy to exclude that from ‘things I could have reasonably expected’.
Does this include how it might limit your ability to move for work, which might be the most important factor in salary/impact?
Good point, although I guess there’s always the possibility of moving and renting out your home (and then renting yourself in the place you move to).
No, I didn’t list the “other” pros and cons, this is just the financial perspective.
I don’t have a good sense of how difficult it is to move houses. But my guess is that a decision to move for work or not wouldn’t be that dependent on selling a house. E.g. you either want to stay, come what may, because of reasons like friends, family, partners etc, or you’re personally happy to move, and wouldn’t mind selling then renting?
Thanks for this Jamie. Useful to know that the outcome can differ according to person/location. I reckon I’ll do this exercise for myself at some point. A few quick questions/comments (I haven’t looked at this in detail so apologies if I’ve missed anything):
Have you identified the key difference(s) between your calculation and John’s calculation that leads to the different result? It might be helpful to call this out
E.g. is it mainly driven by higher rental costs in London / the fact that you’ve assumed a smaller deposit for the house etc.
Pretty minor point, but the 3.5% discount rate should decline over time and it doesn’t seem you’ve factored this in (it shouldn’t really change much though as you’re not looking over a very long time scale)
I’m not really sure how useful the 3.5% discount rate is for philanthropists, in particular EA philanthropists. It includes a discount of future utility on account of the future being less morally valuable, which is something that philosophers have pretty much rejected and is quite counter to EA philosophy. There are good reasons for EA philanthropists to discount (more on that here and here), but I don’t think there’s a good reason to expect it to lead to a 3.5% rate. It could actually be higher or lower depending on an individual’s preferred cause area/underlying ethical views. The general point you’re making, that buying a house only provides access to the money when you’re older and that this therefore becomes subject to discounting, is a very useful one though.
Doesn’t John’s calculation also say buying is better? Or am I missing something?
No, I haven’t gone through and done that. Actually, John’s calculations still come out in favour of buying from a financial perspective, albeit by a much smaller margin than in my calculations; I think he was put off for other reasons.
I’m probably doing the maths completely wrong on that bit… suggestions for the correct formula to use are welcome. Commenting on the sheet is currently enabled if you want to comment directly.
Yeah, I haven’t got my head very thoroughly round the various arguments on this, so thanks for sharing. My impression was also that using 3.5% didn’t make much sense, and that it should probably either be lower than that (for “patient” reasons) or much higher (if you think opportunities for cost-effective giving will diminish rapidly for various reasons).
Some relevant context I probably should have added to the post was that I did this calculation because I was very surprised at John’s overall conclusion and wanted to check it, and, despite this not being very thorough or anywhere near my research “expertise”, I thought other people might benefit from these rough and ready efforts, so decided to share.
How did Nick Bostrom come up with the “Simulation argument”*?
Below is an answer Bostrom gave in 2008. (Though note, Pablo shares a comment below that Bostrom might be misremembering this, and he may have taken the idea from Hans Moravec.)
“In my doctoral work, I had studied so-called self-locating beliefs and developed the first mathematical theory of observation selection effects, which affects such beliefs. I had also for many years been thinking a lot about future technological capabilities and their possible impacts on humanity. If one combines these two areas – observation selection theory and the study of future technological capacities – then the simulation argument is only one small inferential step away.
Before the idea was developed in its final form, I had for a couple of years been running a rudimentary version of it past colleagues at coffee breaks during conferences. Typically, the response would be “yeah, that is kind of interesting” and then the conversation would drift to other topics without anything having been resolved.
I was on my way to the gym one evening and was again pondering the argument when it dawned on me that it was more than just coffee-break material and that it could be developed in a more rigorous form. By the time I had finished the physical workout, I had also worked out the essential structure of the argument (which is actually very simple). I went to my office and wrote it up.
(Are there any lessons in this? That new ideas often spring from the combining of two different areas or cognitive structures, which one has previously mastered at sufficiently a deep level, is a commonplace. But an additional possible moral, which may not be as widely appreciated, is that even when we do vaguely realize something, the breakthrough often eludes us because we fail to take the idea seriously enough.)”
Context for this post:
I’m doing some research on “A History of Robot Rights Research,” which includes digging into some early transhumanist / proto-EA type content. I stumbled across this.
I tend to think of researchers as contributing either more through being detail oriented—digging into sources or generating new empirical data—or being really inventive and creative. I definitely fall into the former camp, and am often amazed/confused by the process of how people in the latter camp do what they do. Having found this example, it seemed worth sharing quickly.
*Definition of the simulation argument: “The simulation argument was set forth in a paper published in 2003. A draft of that paper had previously been circulated for a couple of years. The argument shows that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. The argument has attracted a considerable amount of attention, among scientists and philosophers as well as in the media.”
Note that Hans Moravec, an Austrian-born roboticist, came up with essentially the same idea back in the 1990s. Bostrom was very familiar with Moravec’s work, so it’s likely he encountered it prior to 2003, but then forgot it by the time he made his rediscovery.
It’s quite common:
“Cryptomnesia occurs when a forgotten memory returns without its being recognized as such by the subject, who believes it is something new and original. It is a memory bias whereby a person may falsely recall generating a thought, an idea, a tune, a name, or a joke,[1] not deliberately engaging in plagiarism but rather experiencing a memory as if it were a new inspiration.”
https://en.wikipedia.org/wiki/Cryptomnesia
I haven’t read Moravec’s book very thoroughly, but I ctrl+f’d for “simulation” and couldn’t see anything very explicitly discussing the idea that we might be living in a simulation. There are a number of instances where Moravec talks about running very detailed simulations (and implying that these would be functionally similar to humans). It’s possible (quite likely?) Bostrom didn’t ever see the 1995 article where Moravec “shrugs and waves his hand as if the idea is too obvious.”
Either way, it seems true that (1) the idea itself predates Bostrom’s discussion in his 2003 article, and (2) Bostrom’s discussion of this specific idea is more detailed than Moravec’s.
Bostrom (2003) cited Moravec (1988), but not for this specific idea—it’s only for the idea that “One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain.”
But yeah, his answer to the question “How did you come up with this?” in the 2008 article I linked to in the original post seems misleading, because he doesn’t mention Moravec at all and implies that he came up with the idea himself.
Oh, nice, thanks very much for sharing that. I’ve cited Moravec in the same research report that led me to the Bostrom link I just shared, but hadn’t seen that article and didn’t read Mind Children fully enough to catch that particular idea.