Co-founder and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course—a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I’m wrong.
I found this very interesting to read, not because I agree with everything that was said (some bits I do, some bits I don’t) but because I think someone should be saying it.
I have had thoughts about the seeming divide between ‘longtermism’ vs ‘shorttermism’, when to me there seems to be a large overlap between the two, which kind of goes in line with what you mentioned: x-risk can occur within this lifetime, therefore you do not need to be convinced about longtermism to care about it. Even if future lives had no value (I’m not making that argument), and you only take into account current lives, preventing x-risk has a massive value because that is still nearly 8 billion people we are talking about!
I therefore like the point that x-risk does not need to be altruistic. But it also very much can be, so keeping it separate from effective altruism is not needed:
1. I want to save as many people as possible therefore stopping extinction is good—altruism, and it is a good idea to do this as effectively as possible
2. I want to save myself from extinction—not altruism, but can lead to the same end therefore there can be a high benefit of promoting it this way to more general audiences
So I do not think that ‘effective altruism’ is the wrong name, as everyone I’ve come across in EA so far has been in the first category. EA is also broader, including things like animal welfare, global health and poverty, and various other areas to improve the lives of sentient beings (and I think this should be talked about more). I think EA is a good name for all of those things.
But if the goal is to reduce x-risk in any way possible, working with people who fall into the second category, who want to save themselves and their loved ones, is good. If we want large shifts in global policy and people generally to act a certain way, things need to be communicated to a general audience, and people should be encouraged to work on high impact things even if they are ‘not aligned’.
I know I’m a bit late to this post/thread, but I’d like to add a +1 to the above comment. I found reading this very useful (having already been to a few EAGs/EAGxs); however, I have sometimes gained as much or more value out of longer small-group conversations. So it’s a case of finding what works for you, and not worrying if you aren’t doing all of the above!
I’ve also found ‘accidental 1:1s’ very useful (but with more variance), i.e. spending, say, 10-30 minutes speaking to random people you come across in the mornings/while having a snack/having lunch. Of course there’s a larger chance that you won’t have much in common with those you meet at random, but I’ve also found that some of my most productive meetings at some EAGs have been the chance ones. Because of this, I deliberately leave some time free rather than booking 1:1s, both for rest and for chance meetings.
Very interesting read, thanks for writing.
I remember when I first joined EA the thing that I found the most different/beneficial was the community. I’ve met various people who care about having an impact, and about maximising their impact, and about rationality. But it is the community within EA and the concentration of these people all in one place that would be very difficult to replicate.
Who have you spoken to already about this? There is definitely work being done to try to make community building as impactful as possible, but I am not sure how much variance there is between the strategies of different groups (there seem to be a lot of socials, speaker events and intro fellowships).
I do not have the time right now to do my own research (e.g. speaking to people I know in various places doing active community building, reading past forum posts, reading other web sources), but I am interested to hear about your conclusions.
Sounds like a very interesting concept! Writing was basically my main hobby as a teenager, I’d love to publish a book at some point myself and find it great when other people do.
I don’t have time over the next few weeks, but may be able to read/give feedback after that. When are you planning on publishing this more publicly?
(I have not yet joined the Discord as I’m also not really a Discord person, but may do later)
Well done Dion!
This is powerful and moving just reading it, I bet it would have been even better sitting in the room listening to it and seeing the people raise their hands.
Glad EAGxSingapore went well.
That link doesn’t work for me. Do you have another one, or has it been taken down?
I don’t think I would update at all based just on those memes—particularly as my understanding is that he lived in group houses! (I know a lot of other EAs are vegan, but not everyone is)
I would call the way it has been posted on Twitter a meme, and my main point was about how much to update, not about the format (meme or video) the information was presented in! On which I think we are in agreement.
Thanks for this post. However, one of the first things that came to mind was the EA Forum itself.
It is completely public, much EA discourse happens here, and a lot of people use their real/full names (I believe this is even encouraged). Clearly forum communication is not intended to be the same as an interview, and presumably cannot be quoted as one (I expect/hope; I have no experience with journalism), and I think many people will already be bearing this in mind. It is also hard to prove who is actually commenting, and whether people are using their own names, so it is less reliable than an actual interview.
But the advice for talking to journalists seems to be for everyone in EA thinking about giving an interview, and generally I see it being very easy for a journalist to go on the forum and use that as a source (even including screenshots).
People being able to have discussions is one of the best things about the forum in my mind, and it’s good for people to be able to express their views without self-censoring. But also, anything written on the internet in public is clearly public.
I’m sure there are some nuances here. Does anyone have thoughts on this?
Incidentally, Jonas Vollmer’s comment on this forum post (can’t seem to link it sorry, at time of writing it is the comment above mine) gives example(s) where an EA Forum post has been directly quoted by Forbes.
e.g. https://www.forbes.com/sites/johnhyatt/2022/11/17/disgraced-crypto-trader-sam-bankman-fried-was-a-big-backer-of-effective-altruism-now-that-movement-has-a-big-black-eye/?sh=5e5a531b4ce7
Anyone know what can and can’t be quoted? Is everything quotable? Is there any permission required?
Hi, it’s bad to hear that you feel this way, and I can understand why you have this sort of sentiment. A lot of emotions are running high right now.
But what I have not seen mention of here:
I have not had funding affected by this (apart from reducing potential future funding sources), so I am saying this as someone far less affected than most. But a lot of people had part or all of their funding from FTX, and that funding is now uncertain. There has been no guarantee that all promised grants will be fulfilled as far as I am aware, even through another funding source like Open Phil, who have mentioned raising their criteria. There will be people, as a direct result of this, who do not know whether their work can continue, whether they will lose their job/business/other project, or whether money they have already spent, having received it in good faith while trying to have the highest positive impact they could, may even be clawed back. Some of these people will be students, or people early in their careers, who are less likely to have savings to fall back on, or who in the event of a clawback may have their savings wiped out (I have no idea of the actual likelihood of this, but I have seen it discussed on the forum). Some have perhaps not been given a clear answer by other people in the movement from whom they may have been expecting one, but who are instead remaining silent (although there might have been private communication that I am unaware of). My sympathy lies with them the most, and I do not blame them (or others) for questioning people who perhaps could have known more (not saying anyone did or didn’t; I wouldn’t know beyond what is in the media).
And that’s of course just the people affected within the EA community. That’s not mentioning the hundreds of thousands or millions of customers who were directly stolen from, many of whom lost significant amounts of money or even their life savings. That’s not mentioning people who will be affected in the future fallout.
“This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him.”
I think it is fairly natural to question if certain people knew more than most, when it externally seems like Will MacAskill may have been some form of mentor to SBF for nearly a decade (I know nothing here, just going by the media for this one, and it seems likely that this may have been overstated in media as it makes a good story). Family members can plausibly know better than that, and of course nobody should be threatened, but luckily I have not really seen this within the EA community.
“It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.”
Dating habits can very much be a conflict of interest (Google it), particularly if it is likely to influence something like the willingness to provide *multi-billion dollar fraudulent loans* due to this person being someone you are (allegedly) dating.
“But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”?”
Who’s actually done this? Source? And the main ‘curb-stomped’ people, if you can call it that, are literally 1) someone who, it seems more likely than not, has committed multi-billion dollar fraud (and some of his inner circle) and 2) someone who is at this point likely the main public face of the movement, a movement said fraudster claimed (in many past interviews) motivated him (and public figures should not be above question). Innocent until proven guilty of course, but even if the odd person takes things too far, that is not indicative of a movement-wide ‘witch hunt’.
A lot of people’s emotions are high right now, and remember that when reading other people’s comments, the same way they should remember when reading what you write including this post.
(And in fairness to Twitter, it has been more balanced than I was expecting (considering my base rate for expected Twitter discourse is basically people screaming/a witch hunt, you don’t go to Twitter for reasoned debate). People appear to be defending EA, and that includes people from the public.)
I don’t usually like responding to these sorts of comments as it is rarely worth it, but:
I truly hope your usual method of arguing is not this ad hominem, relying on emotive language rather than rational argument.
Edit: I understand emotions are running high, and I see your above comment. Personal attacks, particularly on specific individuals in the above way, really aren’t appropriate though, which is why I felt I had to say something here.
Reposting my penultimate paragraph as it is important and in case people don’t otherwise read that far:
A lot of people’s emotions are high right now, and remember that when reading other people’s comments, the same way they should remember when reading what you write including this post.
Thanks for writing this, I think it’s a valuable post with actionable suggestions.
Emotions are naturally running very high right now, and this is good both to remind people that yes, it is ok to have strong emotions about it and that these reactions are understandable and normal.
“Almost all social science is wrong” is a very strong assertion without evidence to back it up, and I think such over-generalizations are unhelpful.
Based on your comment I looked this up:
Right now flights from London to San Francisco cost £400-£500, compared to what they may cost at shorter notice (approx £1,500+ in some cases). The difference is 2-4x, and you could buy flights + accommodation for a week now (around 2 months out) for less than the flights alone may cost around 2 weeks out (which is when the EA Global website says you would hear by). This is a significant difference when acting under the assumption of not being able to receive travel grant funding. I can see this in many cases being the difference between ‘I can afford to go’ and ‘I cannot + will need the travel funding’, particularly as hotels are also likely to sell out, with the remaining ones potentially being more expensive or further away.
For EAGs, there was a policy that if you were accepted into one in a year, you would be accepted into all of them that year. If this continues, it feels like perhaps there should be an early application round, so people could know that they would get into future conferences (if they wanted to) and book flights/accommodation in advance accordingly.
(For EAGxs the ‘apply to one, get into all’ policy did not exist, but those are meant to be regional, so the travel costs are significantly lower anyway, at least within Europe.)
I don’t agree that the conclusions regarding low unmet need for contraception in developing countries, and this being due to access, are correct based on the sources that you have linked (although thanks for providing sources).
I just had a very quick (<5 minute) look at some of the sources regarding the low unmet needs for contraception in developing countries, largely because it goes against what I would expect (lower resource settings having proportionally higher resources in this area than high resource settings). Because I looked very quickly I’ve so far only looked at the abstract/highlights, however I expect that nothing in the main text would contradict this.
The source you gave for ‘low unmet need for contraception in developing countries’ (https://pubmed.ncbi.nlm.nih.gov/23489750/) does say that generally contraceptive prevalence has gone up and unmet need has gone down (this is a good thing, i.e. progress), unless it was already high or low respectively (not surprising; a low unmet need can only decrease by a lesser degree than a high unmet need).
However: “The absolute number of married women who either use contraception or who have an unmet need for family planning is projected to grow from 900 million (876-922 million) in 2010 to 962 million (927-992 million) in 2015, and will increase in most developing countries.” This suggests that the unmet need is projected to increase more in developing countries compared to others.
The first source on access (https://www.guttmacher.org/sites/default/files/pdfs/pubs/Contraceptive-Technologies.pdf) does suggest that in 7 in 10 cases access may not be the main issue: “Seven in 10 women with unmet need in the three regions cite reasons for nonuse that could be rectified with appropriate methods: Twenty-three percent are concerned about health risks or method side effects; 21% have sex infrequently; 17% are postpartum or breast-feeding; and 10% face opposition from their partners or others.” But: “In the short term, women and couples need more information about pregnancy risk and contraceptive methods, as well as better access to high-quality contraceptive services and supplies.” It also says that a quarter of women in developing countries have an unmet need: “In developing countries, one in four sexually active women who want to avoid becoming pregnant have an unmet need for modern contraception.” I would not call that low, and I think this is one of those cases where it is important to put a number on it, as otherwise people may have different definitions of what is/isn’t low.
(A very quick estimate using the first links that come up on Google: 152 developing countries, population approx 6.69 billion total, so say around 3.35 billion of whom are female.
Turns out a quick Google does not bring up the proportion of women who are of childbearing age (15-49), but an interesting 2019 UN source on the need for family planning does come up, which breaks down unmet need by region and is consistent with around 1/4 of women in developing countries having unmet needs: https://www.un.org/en/development/desa/population/publications/pdf/popfacts/PopFacts_2019-3.pdf That UN source has a quote: “In 2019, 42 countries, including 23 in sub-Saharan Africa, still had levels of demand satisfied by modern methods below 50 per cent, including three countries of sub-Saharan Africa with levels below 25 per cent”
Back to that raw numbers estimate I was attempting: 1/4 of 3.35 billion is around 840 million for the unmet needs part. Maybe classing 1/3 of those women as being of childbearing age/benefiting from contraceptives, that’s around 280 million people.)
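(As a sanity check, the back-of-envelope arithmetic above can be written out explicitly; every input here is one of my rough assumptions from a quick Google, not a verified figure:

```python
# Back-of-envelope estimate; all inputs are rough assumptions, not verified data.
developing_population = 6.69e9  # approx. total population of ~152 developing countries
female_share = 0.5              # assume roughly half are female
unmet_need_rate = 0.25          # ~1 in 4 women with unmet need (Guttmacher/UN figure)
childbearing_share = 1 / 3      # assume ~1/3 of women are of childbearing age (15-49)

women = developing_population * female_share          # ~3.35 billion women
unmet = women * unmet_need_rate * childbearing_share  # ~280 million people

print(f"~{unmet / 1e6:.0f} million women with unmet need")
# prints: ~279 million women with unmet need
```

Obviously this is only an order-of-magnitude sketch; each of the four inputs could easily be off.)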
The second source on access (https://pubmed.ncbi.nlm.nih.gov/24931073/) has less information than the others, as I can by default only see the abstract: “Our findings suggest that access to services that provide a range of methods from which to choose, and information and counseling to help women select and effectively use an appropriate method, can be critical in helping women having unmet need overcome obstacles to contraceptive use.” This suggests that access is critical, and might imply that it is at least in part a reason for the unmet need.
Edit: reading the sources took me about 5 minutes; the above writeup, including looking some stuff up, (perhaps unsurprisingly) took a bit longer than that. Having posted, I see that Matt Sharp has also made a reply saying something very similar to what I am; I would recommend reading that as well.
This comment is in response both to this post and, in part, to a previous comment thread, as the continued discussion seemed more relevant under this post than under the evaporative cooling model post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D
To start out:
When it comes to the reactions of individual humans and populations, there is inherently far more variability than there is in e.g. the laws of physics
No model is perfect, and will always be a simplification of reality (particularly when it comes to populations, but also the case in e.g. engineering models)
A model is only as good as its assumptions, and these should really be stated
Just because a model isn’t perfect, does not mean it has no uses
Sometimes there are large data gaps, or you need to create models under a great degree of uncertainty
There are indeed some really bad models that should probably be ignored, but dismissing entire fields is not the way to approach this
Predicting the future with a large degree of certainty is very hard (hence the dart throwing chimpanzee analogy that made the news, and predictions becoming less accurate after around 5 years or so as per Superforecasting), so a large rate of inaccuracies should not be surprising (although of course you want to minimize these)
Being wrong and then new evidence causing you to update your models is how it should work (edited for clarity: as opposed to not updating your models in those situations)
For this post/general:
What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely or largely without questioning, as opposed to being aware that all models have their limitations and that this should influence how they are applied. And of course ‘people’ is in itself a broad category, with some people being more or less questioning/deferential, or more or less likely to jump to conclusions. What I am reading here is a suggestion of ‘we should listen less to these models without question’, without knowing who is doing that to begin with, or how frequently.
Out of the examples given, the minimum wage one was strong (given that there was a lot of debate about this) and I would count the immigration one as valid (people have again argued about this, but often in such a politically charged way that how intuitive it is depends on the political opinions of the person reading), but many of the other ones seemed less intuitive or did not follow, perhaps to the point of being a straw man.
I do believe you may be able to convince some people of any one of those arguments and make it intuitive to them, if the population you are looking at is, for example, typical people on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.
There does appear to be a fair bit of deferral within EA, and some people do accept the thoughts of certain people within the community without doing much of their own evaluation (but given this is getting quite long, I’ll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative reasoning not qualitative, nor accepting social models blindly. In the case of ‘evaporative cooling’, that EA Forum post seemed more like ‘this may be/I think it is likely to be the case’ not ‘I have complete and strong belief that this is the case’.
“even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I’ll shrug and eh probably wrong.” Read it first, I hope. Because that sounds like more of a soldier than a scout mindset, to use the EA terminology.
That a model does not apply in every situation also does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the same way you can model the laws of physics; human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well, and perhaps not even then. But generally, trying to understand how a population in general could react should be done—after all, if you actually want to implement change it is populations that you need to convince.
I agree with ‘do not assume these models are right on the outset’, that makes sense. But I also think it is unhelpful and potentially harmful to go in with the strong assumption that the model will be wrong, without knowing much about it. Because not being open to potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives of people with relevant expertise (and different to that of many people within EA) will not be heard.
Commenting as I’d also like to see a response to this. I guess it depends how they define ‘working directly’; perhaps they are emphasizing certain orgs? I am not focussed on AI myself, but I have spoken to enough EAs with an AI focus that, even if nobody were doing this outside of EA, this number seems surprisingly low. Not to say it isn’t neglected!