Co-founder and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course—a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I’m wrong.
Thanks for this post. However, one of the first things that came to mind was the EA Forum itself.
It is completely public, much EA discourse happens here, and a lot of people use their real names/full names (I believe this is even encouraged). Clearly forum communication is not intended to be the same as an interview, and I expect/hope it can’t be quoted as one (I have no experience with journalism), and I think many people already bear this in mind. It is also, of course, hard to prove who is actually commenting, and whether people are using their own names, so the forum is less reliable as a source than an actual interview.
But the advice for talking to journalists seems to be aimed at everyone in EA thinking about giving an interview, and it would be very easy for a journalist to go on the forum and use it as a source (even including screenshots).
People being able to have discussions is one of the best things about the forum in my mind, and it’s good for people to be able to express their views without self-censoring. But also, anything written on the internet in public is clearly public.
I’m sure there are some nuances here. Does anyone have thoughts on this?
There seem to have been a lot of responses to your comment, but there are some points that I don’t see addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focuses on community building/community health. (Put at the top as this got quite long; rationale below, but first:)
I think at least one goal of the post is to get community input (as I’ve seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoting from the original post (the intro to ‘Suggested Reforms’):
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can see (epistemic status: brainstorming, but plausible) that a comment like yours can be harmful, and create a more negative perception of EA than the post itself. New/newer/potential (and even existing) EAs may read the original post, or may skim it/read parts/even read the comments first (I don’t think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). And the top comment, for a while at least: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
If the authors of this post are asking for community opinion on which changes are good after raising concerns, then the top comment (for a while at least) criticising them for a lack of theory of change suggests that the EA leadership holds the opinions of the EA community overall in low regard (regardless of agreement with any specific element of the original post)
Unless I am very high up and in the core EA group, I am unlikely to be listened to
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I am not saying that any of the above is true, or that it is absolute (i.e. that someone would come to believe one of these things absolutely rather than on a sliding scale). But if I were new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how the comment may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (per your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
I am pretty certain it wasn’t intended that way but:
Some EAs should start an unaffiliated group (“Impact Maximizers”) that tries to avoid these problems. (Somewhat like the “Atheism Plus” split.)
This bullet set off minor alarm bells when I read it, more so than the other bullet points, so I tried to put some thought into why that is (and why I didn’t get the same alarm bells for the other two points).
I think it’s because it (most likely inadvertently) implies: “If people already in the movement do not like these power dynamics (around making women feel uncomfortable, up to sexual harassment etc.), then they should leave and start their own movement.” (I am aware the bullet asks for some people, not necessarily women/the specific people concerned, to start the group, but this still does not address the potentially lower resources and career and networking opportunities.) It can almost be used as an excuse not to fix things: if people don’t like it, they can leave. But leaving means potentially sacrificing close relationships and career and funding opportunities, at least to some degree. Taken together, this could be taken to mean:
If you are a woman uncomfortable about the current norms on dealing with sexual harassment, consider leaving/starting your own movement, taking potential career and funding hits to do so.
I don’t think you intended this at all, but please take this as my attempt to put words to why it set off minor alarm bells on first reading, and I would be interested to hear the thoughts of others. (It is also possible that that bullet point was in response to a previous comment, which I may not have read in enough depth.)
The first and third bullet points do not have this same issue: the first does not explicitly reduce people’s existing opportunities (someone considering joining EA has little if anything already invested in it, although staying away may reduce their future opportunities if they would benefit a lot from getting more involved), and the third speaks about making improvements.
Based on your comment I looked this up:
Right now, flights from London to San Francisco cost £400-£500, compared to approximately £1500+ in some cases at shorter notice, a difference of roughly 3-4x (see the quick ratio below). You could buy flights + accommodation for a week now (around 2 months out) for less than the flights alone may cost around 2 weeks out (which is when the EA Global website says you would hear by). This is a significant difference when acting under the assumption of not being able to receive travel grant funding. I can see this in many cases being the difference between ‘I can afford to go’ and ‘I cannot and will need the travel funding’, particularly as hotels are also likely to sell out, with the remaining ones potentially more expensive or further away.
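Putting quick numbers on that ratio (using the illustrative prices above; actual fares will vary):

\[
\frac{\pounds 1500}{\pounds 500} = 3\times \qquad \frac{\pounds 1500}{\pounds 400} \approx 3.8\times
\]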
For EAGs, there was a policy that if you were accepted into one in a given year, you would be accepted into all of them. If this continues, it feels like there should perhaps be an early application round, so people would know they will get into future conferences (if they want to attend) and can book flights/accommodation in advance accordingly.
(For EAGxs the ‘apply to one, get into all’ policy did not exist, but those are meant to be regional, so the travel costs are significantly lower anyway, at least within Europe.)
One of my first thoughts when reading this was ‘I hope this is not a grant’. Given further thought and additional details about grant size, and this being for setup, it may be fair, and I cannot properly comment without knowing more, but the fact that this was my first reaction says something about the potential optics. I am very happy to pay a markup on t-shirts (if I didn’t want the particular t-shirt I would not buy it at cost anyway, and there seems to be a lot of EA stash going around from EAGs etc. which can be used to spot EAs) if this enables more money to go to other projects. This seems like an obvious thing that can be monetised if enough people are keen to buy it.
Incidentally, Jonas Vollmer’s comment on this forum post (can’t seem to link it sorry, at time of writing it is the comment above mine) gives example(s) where an EA Forum post has been directly quoted by Forbes.
e.g. https://www.forbes.com/sites/johnhyatt/2022/11/17/disgraced-crypto-trader-sam-bankman-fried-was-a-big-backer-of-effective-altruism-now-that-movement-has-a-big-black-eye/?sh=5e5a531b4ce7
Anyone know what can and can’t be quoted? Is everything quotable? Is there any permission required?
Thank you for writing this. I have not read the original paper, but I think the points here are very plausible and aren’t discussed enough.
Clicking through the original link, it looks like the paper was initially written in 2021. Is there a particular reason for prioritizing summarizing this paper now?
Thanks for your response!
I don’t think changing “some EAs” to “we” necessarily changes my point of ‘people concerned should not have to move to a different community which may have fewer resources/opportunities’, independent of who actually creates that different community.
Note that my bigger point overall was why the second bullet point set off alarm bells, rather than specific points on the others (mostly included as a reference, and less thought put into the wording). That said:
there are probably people considering joining EA who would find EA a much easier place to get funding than their other best opportunities for trying to do the kind of good they think most needs doing.
I agree with this. I added “although may reduce future opportunities if they would benefit a lot from getting more involved in EA” after “i.e. someone considering joining EA does not have as much if anything already invested in it” a couple of minutes after originally posting my comment to reflect a very similar sentiment (however likely after you had already seen and started writing your response).
However, there is very much a difference between losing something that you have and not gaining something that you could potentially have. In terms of personal cost, one is significantly higher than the other (agreed that both are bad), as is the toll of potentially broken trust and losing close relationships. Ignoring social factors, it could also have an impact cost, e.g. if people have built up career/social capital that is very useful within EA but not ranked as highly outside of EA, or are not linked with the relevant people outside of EA, rather than having built up non-EA networks.
That bullet point is also written as ‘someone considering joining’ rather than ‘we should’. ‘Someone considering joining’ may or may not join for a variety of reasons; that is a potential consequence for the community, not an action point. It is the action points, and how action is approached, that seem more relevant here.
I don’t usually like responding to these sorts of comments as it is rarely worth it, but:
I truly hope your usual method of arguing is not this ad hominem, using emotive language rather than rational argument.
Edit: I understand emotions are running high, and I see your above comment. Personal attacks, particularly on specific individuals in the above way, really aren’t appropriate though, which is why I felt I had to say something here.
I don’t think I would update at all based just on those memes—particularly as my understanding is that he lived in group houses! (I know a lot of other EAs are vegan, but not everyone is)
I think this is an important distinction.
People can inadvertently do bad things with very good intentions due to poor judgement; there is even the proverb ‘the road to hell is paved with good intentions’.
EA emphasises doing good with evidence, and reasoning transparency is considered highly important. People are fallible, and in EA’s case often young and of similar backgrounds; particularly given the potential consequences (working on the world’s biggest issues, including x-risks), big decisions should be open to scrutiny. I also think it is a good idea to look at what other companies are doing and take the best bits from the expertise of others.
For example, the Wytham Abbey purchase may (I haven’t seen any numbers myself) make sense from a cost effectiveness perspective, but it really should have been expected that people would ask questions given how grand the venue seems. I think the communication (and at least a basic public cost effectiveness analysis) should have been done more proactively.
I found this very interesting to read, not because I agree with everything that was said (some bits I do, some bits I don’t) but because I think someone should be saying it.
I have had thoughts about the seeming divide between ‘longtermism’ and ‘short-termism’, when to me there seems to be a large overlap between the two, which is broadly in line with what you mentioned: x-risk can occur within this lifetime, so you do not need to be convinced of longtermism to care about it. Even if future lives had no value (I’m not making that argument) and you only take into account current lives, preventing x-risk has massive value, because that is still nearly 8 billion people!
So I like the point that working on x-risk does not need to be altruistic. But it also very much can be, so separating it from effective altruism is not necessary:
1. I want to save as many people as possible therefore stopping extinction is good—altruism, and it is a good idea to do this as effectively as possible
2. I want to save myself from extinction—not altruism, but can lead to the same end therefore there can be a high benefit of promoting it this way to more general audiences
So I do not think that ‘effective altruism’ is the wrong name, as everyone I’ve come across in EA so far has been in the first category. EA is also broader, including things like animal welfare, global health and poverty, and various other areas that improve the lives of sentient beings (and I think this should be talked about more). I think EA is a good name for all of those things.
But if the goal is to reduce x-risk in any way possible, working with people who fall into the second category, who want to save themselves and their loved ones, is good. If we want large shifts in global policy and people generally to act a certain way, things need to be communicated to a general audience, and people should be encouraged to work on high impact things even if they are ‘not aligned’.
I find the way your example is written a bit unclear (perhaps a less confusing phrasing would be e.g. “two strong upvotes (each worth +6)”). I understood what you were trying to say, but I had to read it a few times. My understanding: you are not convinced that just 5 strong votes should be worth the same as 11 votes which include some strong votes, or as fifteen regular votes. (The quick sketch below illustrates this.)
To make things perhaps more confusing, I’m not sure everyone has the same multiplier between regular and strong votes. Initially a regular vote is worth 1 and a strong vote 2; now my regular votes are worth 1 and my strong votes are worth 3. So at the very least the 3x multiplier is not the same for newer users.
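To make the ambiguity concrete, here is a minimal sketch with assumed weights (regular = 1, strong = 3, matching my current account; these numbers are illustrative only, since the multiplier varies between accounts):

```python
# Minimal sketch of forum karma totals under assumed vote weights.
# Assumption: regular vote = 1 karma, strong vote = 3 karma (varies by account).

def karma_total(regular_votes: int, strong_votes: int,
                regular_weight: int = 1, strong_weight: int = 3) -> int:
    """Karma contributed by a mix of regular and strong votes."""
    return regular_votes * regular_weight + strong_votes * strong_weight

# All three of these show as the same karma total (15),
# despite coming from 15, 11, and 5 voters respectively:
print(karma_total(15, 0))  # fifteen regular votes            -> 15
print(karma_total(9, 2))   # eleven votes, two of them strong -> 15
print(karma_total(0, 5))   # five strong votes                -> 15
```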
I can also see the potential for people to balance out the fact that their votes are worth less by strong-voting, or for people generally to be inconsistent about what they count as a strong vote versus a regular vote (e.g. between days, depending on mood, or on the type of post). I expect there is also variation between people in how strongly they need to agree with something before they strong-vote it.
Adding a cost to strong votes would certainly make people less likely to strong-vote, but a ‘fixed fine’ of 1 karma will make no real difference if you have a lot of karma, while making more of one if you have less (e.g. I believe you need a certain amount of karma to add co-authors to posts). So newer users would both have their votes worth less in comparison and pay a higher relative cost for strong votes, which I do not think is a good idea.
This comment is in response both to this post and in part to a previous comment thread (the continued discussion seemed more relevant here than in the evaporative cooling model post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D).
To start out:
When it comes to the reactions of individual humans and populations, there is inherently far more variability than there is in e.g. the laws of physics
No model is perfect; a model will always be a simplification of reality (particularly when it comes to populations, but also in e.g. engineering models)
A model is only as good as its assumptions, and these should really be stated
Just because a model isn’t perfect does not mean it has no uses
Sometimes there are large data gaps, or you need to create models under a great degree of uncertainty
There are indeed some really bad models that should probably be ignored, but dismissing entire fields is not the way to approach this
Predicting the future with a large degree of certainty is very hard (hence the dart-throwing chimpanzee analogy that made the news, and predictions becoming less accurate beyond around 5 years out, as per Superforecasting), so a large rate of inaccuracies should not be surprising (although of course you want to minimize these)
Being wrong and then new evidence causing you to update your models is how it should work (edited for clarity: as opposed to not updating your models in those situations)
For this post/general:
What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely/largely without questioning, as opposed to being aware that all models have limitations and letting that influence how they apply them. And of course ‘people’ is itself a broad category, with some people being more or less questioning/deferential, or more or less likely to jump to conclusions. What I am reading here is a suggestion of ‘we should listen less to these models without question’, without knowing who is doing that, and how frequently, to begin with.
Of the examples given, the minimum wage one was strong (given that there was a lot of debate about it), and I would count the immigration one as valid (people have argued this too, but often in such a politically charged way that how intuitive it seems depends on the reader’s political opinions). But many of the others seemed less intuitive, or did not follow, perhaps to the point of being straw men.
I do believe you may be able to convince some people of any one of those arguments and make it intuitive to them, if the population you are looking at is, for example, typical people on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.
There does appear to be a fair bit of deference within EA, and some people do accept the views of certain people in the community without doing much of their own evaluation (but given this is getting quite long, I’ll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative rather than qualitative reasoning, not accepting social models blindly. In the case of ‘evaporative cooling’, that EA Forum post read more like ‘this may be/I think it is likely to be the case’, not ‘I have complete and strong belief that this is the case’.
“even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I’ll shrug and eh probably wrong.” Read it first, I hope. Because that sounds more like a soldier mindset than a scout mindset, to use the EA terminology.
That a model does not apply in every situation also does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the way you model the laws of physics: human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well, and perhaps not even then. But trying to understand how a population in general could react should be done; after all, if you actually want to implement change, it is populations that you need to convince.
I agree with ‘do not assume these models are right at the outset’; that makes sense. But I also think it is unhelpful and potentially harmful to go in with a strong assumption that the model will be wrong without knowing much about it. Not being open to the potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives from people with relevant expertise (different from that of many people within EA) will not be heard.
Are there stats on how many people upvoted (rather than the karma)? Some people have interacted with the forum more, so their vote by default may be worth e.g. 5+ times as much as that of other people. I’m particularly interested to see how this looks for the defense posts.
I would also be interested to see, using the number of people rather than karma, how much a score reflects strong votes rather than more individual people agreeing/disagreeing.
I don’t agree that the conclusions regarding low unmet need for contraception in developing countries, and this being due to access, are correct based on the sources that you have linked (although thanks for providing sources).
I just had a very quick (<5 minute) look at some of the sources regarding the low unmet need for contraception in developing countries, largely because it goes against what I would expect (lower-resource settings having proportionally higher resources in this area than high-resource settings). Because I looked very quickly, I’ve so far only looked at the abstracts/highlights; however, I expect that nothing in the main text would contradict this.
The source you gave for ‘low unmet need for contraception in developing countries’ (https://pubmed.ncbi.nlm.nih.gov/23489750/) does say that generally contraceptive prevalence has gone up and unmet need has gone down (this is a good thing, i.e. progress), except where these were already high or low respectively (not surprising: a low unmet need can only decrease by a lesser degree than a high unmet need).
However: “The absolute number of married women who either use contraception or who have an unmet need for family planning is projected to grow from 900 million (876-922 million) in 2010 to 962 million (927-992 million) in 2015, and will increase in most developing countries.” This suggests that the unmet need is projected to increase more in developing countries compared to others.
The first source on access: https://www.guttmacher.org/sites/default/files/pdfs/pubs/Contraceptive-Technologies.pdf This does suggest that in 7 in 10 cases access may not be the main issue: “Seven in 10 women with unmet need in the three regions cite reasons for nonuse that could be rectified with appropriate methods: Twenty-three percent are concerned about health risks or method side effects; 21% have sex infrequently; 17% are postpartum or breast-feeding; and 10% face opposition from their partners or others.”
But: “In the short term, women and couples need more information about pregnancy risk and contraceptive methods, as well as better access to high-quality contraceptive services and supplies.”
It also says that a quarter of women in developing countries have an unmet need: “In developing countries, one in four sexually active women who want to avoid becoming pregnant have an unmet need for modern contraception.” I would not call that low, and I think this is one of those cases where it is important to put a number on it, as otherwise people may have different definitions of what is and isn’t low.
(A very quick estimate using the first links that come up on Google: 152 developing countries, population approx 6.69 billion total, say therefore around 3.35 billion who are female.
Turns out a quick Google does not bring up the proportion of women who are of childbearing age (15-49), but an interesting 2019 UN source on the need for family planning does come up, which breaks down unmet need by region and is consistent with around 1/4 of women in developing countries having unmet needs: https://www.un.org/en/development/desa/population/publications/pdf/popfacts/PopFacts_2019-3.pdf That UN source has a quote: “In 2019, 42 countries, including 23 in sub-Saharan Africa, still had levels of demand satisfied by modern methods below 50 per cent, including three countries of sub-Saharan Africa with levels below 25 per cent”
Back to the raw numbers estimate I was attempting: 1/4 of 3.35 billion is around 840 million for the unmet needs part. Assuming maybe 1/3 of those women are of childbearing age/would benefit from contraceptives, that’s around 280 million people.)
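Written out, that back-of-envelope estimate (same rough inputs and caveats as above) is:

\[
6.69\ \text{bn} \times \underbrace{\tfrac{1}{2}}_{\text{female}} \times \underbrace{\tfrac{1}{4}}_{\text{unmet need}} \times \underbrace{\tfrac{1}{3}}_{\text{childbearing age}} \approx 0.28\ \text{bn} \approx 280\ \text{million}
\]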
The second source on access: https://pubmed.ncbi.nlm.nih.gov/24931073/ This has less information than the others, as by default I can only see the abstract: “Our findings suggest that access to services that provide a range of methods from which to choose, and information and counseling to help women select and effectively use an appropriate method, can be critical in helping women having unmet need overcome obstacles to contraceptive use.” This suggests that access is critical, and might imply that it is at least in part a reason for the unmet need.
Edit: reading the sources took me about 5 minutes; the above writeup, including looking some things up, (perhaps unsurprisingly) took a bit longer than that. Having posted, I see that Matt Sharp has also made a reply saying something very similar to what I am; I would recommend reading that as well.
Hi, it’s bad to hear that you feel this way, and I can understand why you have this sort of sentiment. A lot of emotions are running high right now.
But what I have not seen mentioned here:
I have not had funding affected by this (apart from reduced potential future funding sources), so I am saying this as someone far less affected than most. But a lot of people had part or all of their funding from FTX, and that funding is now uncertain. As far as I am aware, there has been no guarantee that all promised grants will be fulfilled, even through another funding source like Open Phil, which has mentioned raising its criteria. There will be people, as a direct result of this, who do not know whether their work can continue, whether they will lose their job/business/other project, or whether money they have already spent, received in good faith while trying to have the highest positive impact they could, may even be clawed back. Some of these people will be students, or people early in their careers, who are less likely to have savings to fall back on, or who in the event of a clawback may have their savings wiped out (I have no idea of the actual likelihood of this, but I have seen it discussed on the forum). Some have perhaps not been given a clear answer by people in the movement from whom they may have expected one, but who are instead remaining silent (although there may have been private communication I am unaware of). My sympathy lies with them the most, and I do not blame them (or others) for questioning people who perhaps could have known more (not saying anyone did or didn’t; I wouldn’t know beyond what is in the media).
And that’s of course just the people affected within the EA community. That’s not mentioning the hundreds of thousands or millions of customers who were directly stolen from, many of whom lost significant amounts of money or even their life savings. Nor the people who will be affected in the future fallout.
“This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him.”
I think it is fairly natural to question whether certain people knew more than most, when externally it seems like Will MacAskill may have been some form of mentor to SBF for nearly a decade (I know nothing here, just going by the media, and it seems likely this may have been overstated there, as it makes a good story). Family members could plausibly have known more than just ‘being related to him’, and of course nobody should be threatened, but luckily I have not really seen threats within the EA community.
“It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.”
Dating habits can very much be a conflict of interest (Google it), particularly if they are likely to influence something like the willingness to provide *multi-billion dollar fraudulent loans* to the person you are (allegedly) dating.
“But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”?”
Who has actually done this? Source? And the main ‘curb-stomped’ people, if you can call it that, are literally 1) someone who, it seems more likely than not, committed multi-billion dollar fraud (plus some of his inner circle), and 2) someone who at this point is likely the main public face of the movement, whom the person who committed said fraud claimed (in many past interviews) had motivated him (and public figures should not be above question). Innocent until proven guilty of course, but even if the odd person takes things too far, that is not indicative of a movement-wide ‘witch hunt’.
A lot of people’s emotions are high right now; remember that when reading other people’s comments, the same way they should remember it when reading what you write, including this post.
(And in fairness to Twitter, it has been more balanced than I was expecting, considering my base rate for expected Twitter discourse is basically people screaming/a witch hunt; you don’t go to Twitter for reasoned debate. People appear to be defending EA, and that includes people from the public.)
I don’t think the point is that all of the proposals are inherently correct or should be implemented. I don’t agree with all of the suggestions (agree with quite a few, don’t agree with some others), but in the introduction to the ‘Suggested Reforms’ section they literally say:
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
Picking out in particular the parts you don’t agree with may seem almost like strawmanning in this case, and people might be reading the comments rather than the full thing (I was very surprised by how long this was when I clicked on it; I don’t think I’ve seen an 84-minute forum post before). But I’m not claiming this was intentional on either of your parts.