I also live in London, and bought a house in April 2016, so I’ve thought about these calculations a fair bit and am happy to share some thoughts here:
One quick note on your calculations is that stamp duty has been massively, but temporarily, cut due to COVID. You note it’s currently £3k on a £560k flat. Normally it would be £18k. You can look at both sets of rates here.
When I looked at this, the calculation was heavily dependent on how often you expect to move. Every time you sell a home and buy a new one you incur large fixed costs; normally 2-4% of purchase price in stamp duty, 1-3% in estate agent fees, and a few other fixed costs which are minor in the context of the London property market but would be significant if you were looking at somewhere much cheaper (legal fees etc.). All of this seems well accounted for in your spreadsheet, but it means that if you expect to move every 1-3 years then the ongoing saving will be swamped by repeatedly incurring these costs.
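To make that break-even intuition concrete, here is a rough sketch in Python. The percentages come from the ranges above; the £5k annual saving from owning is a made-up placeholder, not a figure from the comment:

```python
# Rough break-even sketch: fixed costs of one buy/sell cycle vs. the
# ongoing annual saving from owning. All numbers are illustrative.
price = 560_000            # purchase price (GBP)
stamp_duty = 0.03 * price  # ~3%, within the 2-4% range mentioned above
agent_fees = 0.02 * price  # ~2%, within the 1-3% range (paid on sale)
other_fixed = 3_000        # legal fees etc. (hypothetical)
fixed_costs = stamp_duty + agent_fees + other_fixed

annual_saving = 5_000      # hypothetical yearly saving of owning over renting

breakeven_years = fixed_costs / annual_saving
print(f"Fixed costs: £{fixed_costs:,.0f}, break even after ~{breakeven_years:.1f} years")
```

With these placeholder numbers the fixed costs are £31,000 and it takes about 6 years of ownership just to recoup them, which is why moving every 1-3 years swamps the ongoing saving.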
There’s also a somewhat fixed time cost; when I bought a home I estimate I spent the equivalent of 1 week of full-time work on the process (not the moving itself), most of which was spent doing things I wouldn’t have needed to do for rented accommodation.
All told, for my personal situation in 2016 I thought I should only buy if I expected to stay in that flat for at least 5 years, and to make the calculation clearly favour buying I would have wanted that to be more like 10 years. As a result, buying looks much better if you have outside factors already tying you down; a job that is very unlikely to be beaten, kids, a city you and/or your partner loves, etc.
This is a much closer calculation than will come out with your numbers, because I don’t think a 7.5% housing return is a sensible average to use going forward. I had something like a 2% real (~4% nominal, but I generally prefer to think in terms of real) estimate pencilled in for housing, and more like a 5% real (7% nominal) rate pencilled in for stocks. There’s a longer discussion there, but the key point I would make is that interest rates have fallen dramatically in recent decades, boosting the value of assets which pay out streams of income, i.e. rent/dividends. It’s unclear to me that the recent trend towards ever lower rates can go much further, and markets don’t expect it to, so I didn’t want to tacitly assume that.
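For anyone unsure how the real and nominal figures above relate, the exact conversion (assuming ~2% inflation, which roughly matches the gaps quoted) is:

```python
# Converting nominal returns to real, assuming ~2% inflation.
# Exact relation: (1 + nominal) = (1 + real) * (1 + inflation).
inflation = 0.02

def real_return(nominal: float) -> float:
    return (1 + nominal) / (1 + inflation) - 1

print(f"Housing: {real_return(0.04):.2%} real")  # ~4% nominal -> roughly 2% real
print(f"Stocks:  {real_return(0.07):.2%} real")  # ~7% nominal -> roughly 5% real
```

The common shortcut real ≈ nominal − inflation is close enough at these magnitudes, which is why the comment can quote 2%/4% and 5%/7% pairs interchangeably.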
So far, that conservative estimate has been much closer: London house prices rose by roughly 1.5% annualised between April 2016 and March 2020. Then a pandemic hit, but I’m happy to exclude that from ‘things I could have reasonably expected’.
Thanks for your response.
I didn’t actually interpret Larks’ post as trying to contribute to the “ongoing prosecution-and-defence of Robin’s character or work”, but instead think it is trying to add to the cancel culture conversation more generally, using Robin’s case as a useful example.
Sorry, this is on me. The original draft of that sentence read something like “I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin’s character or work, so I’m not going to weigh in again on those specific points and request others replying to this comment do the same, instead focusing on the question of what rules we do/don’t want in general”.
I then cut the sentence down, but missed that in doing so it could now be read as implying that this was Larks’ objective. That wasn’t intentional, and I don’t think this.
I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:
-The detailed block-by-block approach to making the case for both cancel culture’s prevalence and its potential harm to the movement.
-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.
-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.
But there’s still a piece I think is missing. I don’t fault Larks for this directly, since the post is already very long and covers a lot of ground, but it’s the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.
> Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.
> Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook.
Given that this post is titled ‘advice for CEA and local groups’, reading this made me hope that this post would end with some suggested ‘rules and standards’ for who we do and do not invite to speak at local events/EAG/etc. Where do we draw the line on ‘behaving immorally’? I strongly agree that whatever rules are being applied should be applied consistently, and think this is most likely to happen when discussed and laid down in a transparent and pre-agreed fashion.
While I have personal views on the Munich case which I have laid out elsewhere, I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin’s character or work. Moreover, my commitment to consistency and transparency is far stronger than my preference for any one set of rules over others. I also expect clear rules about what we will and won’t allow at various levels to naturally insulate against cancel culture. To the extent I agree that cancel culture is an increasing problem, the priority on getting this clear and relying less on ad hoc judgements of individuals has therefore risen, and will likely continue to rise.
So, what rules should we have? What are valid reasons to choose not to invite a speaker?
There was a string of writing on this topic or closely related topics early in the forum’s life, especially w.r.t. talking about cause prioritisation, so here are some links to those posts. AFAIK, the advice within largely still holds.
Robert Wiblin, Six Ways To Get Along With People Who Are Totally Wrong*
Jess Whittlestone, Supportive Scepticism
Michelle Hutchinson and Jess Whittlestone, Supportive Scepticism in Practice
Owen Cotton-Barratt, Keeping the Effective Altruism movement welcoming
I should also thank Owen for linking to most of these in his comment on the first link, which made collecting these quite a lot easier.
The data I gave is ultimately survey data; the table you post is based on marriage certificates issued. This has advantages, but one large disadvantage: it ignores marriages that take place overseas, while possibly counting marriages between two overseas residents that take place locally. It’s mentioned on the ‘Table 12 interpretation’ tab:
> These statistics are based on marriages registered in England and Wales. Because no adjustment has been made for marriages taking place abroad, the true proportion of men and women ever married could be higher.
I followed that link to get any context on how big a deal this might be.
> In 2017, an estimated 104,000 UK residents went abroad to get married and an estimated 8,000 overseas residents married in the UK.
To put that number in context, there are roughly 240k marriages per year in the UK, presumably involving around 480k people, so that’s a large chunk of the total.
I think survey data is just better for our current use case since we don’t much care about sample noise; apart from the ‘destination wedding’ issue, I definitely want to count two immigrants who arrived in the UK already married, and I think they’ll also appear in the survey but not the certificate-counting.
Source for the ‘UK: 22%’ figure? The ONS figures for 2019 (for married, not ever married) are:
Men 25-29: 15.7%
Women 25-29: 25.4%
Men 30-34: 42.4%
Women 30-34: 52.3%
These groups are all roughly the same size, so a combined 25-34 group would be around 34%. ‘Ever married’ should be 1-4 percentage points higher.
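As a sanity check on that arithmetic, assuming the four groups are equal in size as stated, the combined figure is a simple average:

```python
# Combining the four ONS 'married' percentages above, assuming the
# four age/sex groups are roughly equal in size.
rates = {
    ("men", "25-29"): 15.7,
    ("women", "25-29"): 25.4,
    ("men", "30-34"): 42.4,
    ("women", "30-34"): 52.3,
}
combined_25_34 = sum(rates.values()) / len(rates)
print(f"Combined 25-34 married: ~{combined_25_34:.1f}%")  # ~34.0%
```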
I take the observation to be that 60% of EAs over 45 have married, where we’d expect 85%.
FWIW, and without speaking for Jeff, for Denise and me the original observation was something like ‘percentage of people in nesting relationships around our age range (25-30) anecdotally seems sharply different in our EA versus similar-demographic non-EA circles’.
I consider religion a weak explanation for that, since we’re definitely counting cohabiting couples, but the observation is also less well-founded and I’m far from confident that it generalises across the community well.
How confident are you that the EALF survey respondents were using your relatively narrow definition of judgment, rather than the dictionary definitions which, as you put it, “seem overly broad, making judgment a central trait almost by definition”?
I ask because scanning the other traits in the survey, they all seem like things where if I use common definitions I consider them useful for some or even many but not all roles, whereas judgment as usually defined is useful ~everywhere, making it unsurprising that it comes out on top. At least, that’s why I’ve never paid attention to this particular part of the EALF survey results in the past.
But I appreciate you’ve probably spoken in person to a number of the EALF people and had a better chance to understand their views, so I’m mostly curious whether you feel those conversations support the idea that the other respondents were thinking of judgment in the narrower way you would use the term.
Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run; I had missed that. Your actual self-quote is an extremely weak version of this, since ‘this might possibly actually happen’ is not the same as explicitly saying ‘I think this will happen’. The latter certainly does not follow from the former ‘by necessity’.
Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.
That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.
[EDIT: As Oli’s next response notes, I’m misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]
Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under mine and Ben’s first option, since it’s just a non-numeric write-up of what Ben said:
> Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost
I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.
At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don’t want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I’m generally willing to distance from the latter in order to be open to the former. That’s from a purely cold-blooded ‘maximise community output’ perspective, never mind the human element.
Other than that, I have a number of disagreements with things you wrote, and for brevity I’m not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But the crux of the disagreement is here I think:
> it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you
I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward ‘leopards don’t change their spots’ way.
I can’t speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements I’m not likely to attend a small-group event that I know they will attend, because I put high probability that they will act in an overtly racist way towards me and I really can’t be bothered to deal with that. I’m definitely not bringing my children to that event. It’s not a matter of being ‘triggered’ per se, I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I’m very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.
There’s also a large second-order effect that communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven’t put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.
I think you’re unintentionally dodging both Aaron’s and Ben’s points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn’t the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn’t sound like what you’re talking about.
> A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don’t care about someone who would distance themselves from EA for this reason anyway (it’s not an actual ‘loss’). And I’m not sure which it is or (if the first) what percentages you would give.
For posterity, the only data I’ve seen on this question suggests that this has not played out the way the OP and many others (myself included) might have expected. The Economist ran an article* which links to this paper**. In short, cities with protests did not record discernible COVID case growth, at least as of a few weeks later. Moreover, quoting the paper (italics in original):
“Second, where there are social distancing effects, they only appear to materialize after the onset of the protests. Specifically, after the outbreak of an urban protest, we find, on average, an increase in stay-at-home behaviors in the primary county encompassing the city. That overall social distancing behavior increases after the mass protests is notable, as this finding contrasts with the general secular decline in sheltering-at-home taking place across the sample period (see Appendix Figure 6). Our findings suggest that any direct decrease in social distancing among the subset of the population participating in the protests is more than offset by increasing social distancing behavior among others who may choose to shelter-at-home and circumvent public places while the protests are underway. ”
In other words, it seems that protestors being outside was more than offset by other people avoiding the protests and staying home.
Pablo already replied, but FWIW I had the same irritation (and similarly had all posts pointed out to me by someone else after complaining to them about it). I think in my case the original assumption was that ‘latest posts’ meant what it sounds like, and on discovering that it wasn’t I (lazily) assumed there wasn’t a way to get what I wanted.
I don’t have a constructive suggestion for a better name though.
I agree with this. I would have assumed they would do (i), and other responses from people who actually read the paper make me think it might effectively be (iii). I don’t think it’s (ii).
> If a climate change intervention has a cost-effectiveness of $417 / X per tonne of CO2 averted, then it is X times as effective as cash-transfers.
Wait a second.
I’m very confused by this sentence. Suppose, for the sake of argument, that all the impacts of emitting a tonne of CO2 are on people about as rich as present-day Americans, i.e. emitting a tonne of CO2 now causes people of that level of wealth to lose $417 at some point in the future. There is then no income adjustment necessary (I assume everything is being converted to something like present-day USD for present-day Americans, but I’m not actually sure and following the links didn’t shed any light), so the post-income-adjustment number is still $417. Also suppose for the sake of argument that we can prevent this for $100.
This seems clearly worse than cash transfers to me under usual assumptions about log income being a reasonable approximation to wellbeing (as described in your first appendix), since we are effectively getting a 4.17x multiplier rather than a 50-100x multiplier. Yet the equation in the quote claims it is 4.17x more effective than cash transfers*.
What am I missing?
*Mathematically, I think the equation works iff the cash transfers in question are to people of comparable wealth to whatever baseline is being used to come up with the $417 figure. So if the baseline is modern-day Americans, that equation calculates how much better it is to avert CO2 emissions than to transfer cash to modern-day Americans.
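To make the worry concrete, here is a sketch with hypothetical numbers (the two income levels and the $100 cost are invented for illustration; the log-income assumption means a marginal dollar’s value scales as 1/income):

```python
# Sketch of the comment's worked example. All numbers hypothetical.
us_income = 60_000      # income of the people bearing the CO2 damage
poor_income = 600       # income of a cash-transfer recipient (~100x poorer)

damage_per_tonne = 417  # USD of damage, accruing to people at us_income
cost_to_avert = 100     # hypothetical cost of averting one tonne

# Benefit per dollar spent, measured in 'dollars to rich people':
multiplier_rich = damage_per_tonne / cost_to_avert        # 4.17x
# Under log income, a dollar is worth ~income_ratio times more to the poor:
utility_ratio = us_income / poor_income                   # 100x
# So in 'dollars to poor people' equivalents, averting delivers:
multiplier_vs_transfers = multiplier_rich / utility_ratio
print(multiplier_vs_transfers)  # far below 1, i.e. worse than direct transfers
```

Under these assumptions the intervention delivers about 0.04 poor-person-dollar-equivalents per dollar, well below the 1x of a direct transfer, which is the apparent contradiction with the quoted equation.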
Quick note on the ‘bunching’ hypothesis. While that particular post and suggestion is mostly an artefact of the US tax code and would lead to years that look like 20%/0%/20%/0%/etc., there’s a similar-looking thing that can happen for non-US GWWC members, namely that their tax year often won’t align with the calendar year (e.g. UK is 6th April − 5th April, Australia is 1st July − 30th June I believe).
In these cases I would expect compliant pledge takers to focus on hitting 10% in their local tax year, and when the EA survey asks about calendar years the effect will be that the average for that group is around 10% but the actual percentage given will range anywhere from 0–20% (if ~10% is being given), but often look like 13% one calendar year, 8% the next, 11% the year after that, etc. In other words, they will appear to be meeting the pledge around 50% of the time in your data. Yet the pledge is being kept by all such members continuously through that period. Eyeballing your 2017 graph of the actual distributions of percentages given, there are a lot of people in the 8-10% range, who are the main candidates for this.
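A toy illustration of the misalignment effect (the donor, income, and dates are all hypothetical): every UK tax year below receives exactly 10% of income, yet the calendar-year totals swing well away from 10%:

```python
from collections import defaultdict

# Hypothetical donor earning a flat £50k who gives exactly £5,000 (10%)
# in every UK tax year (6 Apr - 5 Apr), with payments on varying dates.
income = 50_000
donations = [
    ("2015-12-01", 3_000), ("2016-03-01", 2_000),  # tax year 2015/16: £5,000
    ("2016-06-01", 5_000),                         # tax year 2016/17: £5,000
    ("2018-01-15", 5_000),                         # tax year 2017/18: £5,000
]

# Re-bucket the same donations by calendar year, as the survey does.
by_calendar_year = defaultdict(int)
for date, amount in donations:
    by_calendar_year[date[:4]] += amount

for year in sorted(by_calendar_year):
    print(year, f"{100 * by_calendar_year[year] / income:.0f}%")
# Prints 6% (2015), 14% (2016), 10% (2018); 2017 sees nothing at all,
# even though every tax year hit exactly 10%.
```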
Since both most US members and most non-US members have good reasons to not hit 10% in every calendar year, the number I find most compelling is the one in the bunching section that averages 2015 and 2016 donations (and finds 69% compliance when doing so). But that number suffers from not knowing if those people were actually GWWC members in 2015. It just knows they were members when they took the survey in 2017. GWWC had large growth around that time, so that’s a thorny issue. Then the 2018 survey solves the ‘when did they join’ problem but can’t handle any level of donations not exactly aligning with the 2017 calendar year.
My best guess thinking over all this would be that 73% of the GWWC members in this EA survey sample are compliant with the pledge, with extremely wide error bars (90% confidence interval 45%–88%). I like Jeff’s suggestion below as a way to start to reduce those error bars.
Fair enough. I remain in almost-total agreement, so I guess I’ll just have to try and keep an eye out for what you describe. But based on what I’ve seen within EA, which is evidently very different to what you’ve seen, I’m more worried about little-to-zero quantification than excessive quantification.
I’m feeling confused.
I basically agree with this entire post. Over many years of conversations with Givewell staff or former staff, I can’t readily recall speaking to anyone affiliated with Givewell who I could identify as substantively disagreeing with the suggestions in this post. But you obviously feel that some (reasonably large?) group of people disagrees with some (reasonably large?) part of your post. I understand a reluctance to give names, but focusing on Givewell specifically, as much of their thoughts on these matters are public record here, can you identify what specifically in that post or the linked extra reading you disagree with? Or are you talking to EAs-not-at-Givewell? Or do you think Givewell’s blog posts are reasonable but their internal decision-making process nonetheless commits the errors they warn against? Or some possibility I’m not considering?
I particularly note that your first suggestion to ‘entertain multiple models’ sounds extremely similar to ‘cluster thinking’ as described and advocated-for here, and the other suggestions also don’t sound like things I would expect Givewell to disagree with. This leaves me at a bit of a loss as to what you would like to see change, and how you would like to see it change.
>Also, not to mention all the career paths that aren’t earning to give or “work in an EA org”
While I share your concern about the way earning to give is portrayed, I think this issue might be even more pressing.
> But I would argue if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value of mitigating nuclear war.
For clarity’s sake, I don’t disagree with this. This does mean that your argument for overwhelming value of mitigating nuclear war is still predicated on developing a safe AI (or some other way of massively reducing the base rate) at a future date, rather than being a self-contained argument based solely on nuclear war being an x-risk. Which is totally fine and reasonable, but a useful distinction to make in my experience. For example, it would now make sense to compare whether working on safe AI directly or working on nuclear war in order to increase the number of years we have to develop safe AI is generating better returns per effort spent. This in turn I think is going to depend heavily on AI timelines, which (at least to me) was not obviously an important consideration for the value of working on mitigating the fallout of a nuclear war!