Co-founder, executive director and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course—a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I’m wrong.
I believe you are conflating several things here. But first, a little tip on phrasing responses: putting the word ‘just’ in front of a critical response makes it more dismissive than you might have intended.
If you think the movement has serious flaws that make it not a good means for doing the most good, then you should not be trying to work for an EA org in the first place, and the access to those opportunities is irrelevant.
Agreed with that as stated, but I think this is a straw man. Things can be both bad in some ways and better than some other options, but that doesn’t mean any flaws should be dismissed. This could even go to the extreme of (hypothetically) ‘I know I can have the highest impact if I work here, so I will bear the inappropriate attention of my colleagues/will leave and not have the highest impact I can’.
People should not be using the movement for career advancement independent of the goal of doing the most good they can do with their careers (and in most cases, can’t do that even if they intend to, because EA org jobs that are high-status within the movement are not similarly high-status outside of it) [..] I find the EA movement a useful source of ideas and a useful place to find potential collaborators for some of my projects, but I have no interest in working for an EA org because that’s not where I expect I’d have the biggest impact.
Some people may think that working at an EA org is the highest impact thing they could be doing (even if just for the short term), and career paths are very dependent on the individual. EA basically brands itself as the way to do the most good, so it should not be surprising if people hold this view. When I was writing up my first comment, it was with the broad assumption of ‘connections/opportunities within EA = connections/opportunities that help you do the most good’ (given the EA Forum audience), not as a judgement of ‘EA is the only way of having a high impact’ (which is a different conversation).
I think the movement as a whole would be more successful, and a lot of younger EAs would be a lot happier, if they approached the movement with this level of detachment.
I also have thoughts on this one, but this again is a different conversation. EA is not the only way to have a very high impact, but this should not be used as an excuse for avoiding improvements.
Thanks for your response!
I don’t think changing “some EAs” to “we” necessarily changes my point of ‘people concerned should not have to move to a different community which may have fewer resources/opportunities’, independent of who actually creates that different community.
Note that my bigger point overall was why the second bullet point set off alarm bells, rather than specific points on the others (mostly included as a reference, and less thought put into the wording). That said:
there are probably people considering joining EA who would find EA a much easier place to get funding than their other best opportunities for trying to do the kind of good they think most needs doing.
I agree with this. I added “although may reduce future opportunities if they would benefit a lot from getting more involved in EA” after “i.e. someone considering joining EA does not have as much if anything already invested in it” a couple of minutes after originally posting my comment to reflect a very similar sentiment (however likely after you had already seen and started writing your response).
However, there is very much a difference between losing something that you have, and not gaining something that you could potentially have. When talking about personal cost, one is significantly higher than the other (agreed that both are bad), as is the toll of potentially broken trust and losing close relationships. It could also have an impact cost even ignoring social factors, e.g. if people have built up career/social capital that is very useful within EA, but is not ranked as highly outside of EA/is not linked with the relevant people outside of EA, rather than e.g. building up non-EA networks.
That bullet point is also written as ‘someone considering joining’ rather than ‘we should’. ‘Someone considering joining’ may or may not join for a variety of reasons, and is a potential consequence to the community but not an action point. It is the action points/how action is approached that seem more relevant here.
I am pretty certain it wasn’t intended that way but:
Some EAs should start an unaffiliated group (“Impact Maximizers”) that tries to avoid these problems. (Somewhat like the “Atheism Plus” split.)
Set off minor alarm bells when reading it, more so than the other bullet points, so I tried to put some thought into why that is (and why I didn’t get the same alarm bells for the other two points).
I think it’s because it (most likely inadvertently) implies “If people already in the movement do not like these power dynamics (around making women feel uncomfortable, up to sexual harassment etc.) then they should leave and start their own movement.” (I am aware this asks for some people, not necessarily women/the specific person concerned by this, to start the group, but this still does not address the potentially lower resources, career and networking opportunities.) This can almost be used as an excuse not to fix things: if people don’t like it, they can leave. But leaving means potentially sacrificing close relationships and career and funding opportunities, at least to some degree. Taken together, this could be taken to mean:
If you are a woman uncomfortable about the current norms on dealing with sexual harassment, consider leaving/starting your own movement, taking potential career and funding hits to do so.
I really don’t think you intended this, but please take this as my attempt to put words to why this set off minor alarm bells on first reading, and I would be interested to hear the thoughts of others. (It is also possible that that bullet point was in response to a previous comment, which I may not have read in enough depth.)
The first and third bullet point do not have this same issue, as the first one does not explicitly reduce existing opportunities for people (i.e. someone considering joining EA does not have as much if anything already invested in it, although may reduce future opportunities if they would benefit a lot from getting more involved in EA), and the third bullet point speaks about making improvements.
If organisations were privately informed of their tier, then the additional work of asking (even in the email) whether they would want to opt into tier sharing would be low/negligible.
Of course people may dispute their tier or only be happy to share if they are in a high tier, but this should at least slightly go against the argument of it being a lot of additional work to ask people for consent for the public list.
They’d have the information of upvotes and downvotes already (to calculate the overall karma). I don’t know how the forum is coded, but I expect they could do this without too much difficulty if they wanted to. So if you hover, it would say something like: “This comment has x overall karma, (y upvotes and z downvotes).” So the user interface/experience would not change much (unless I have misinterpreted what you meant there).
It would give extra information. Weighting some users’ votes more highly due to their contribution to the Forum may make sense, on the argument that these are the people who have contributed more, but even so it would be good to also see how many people overall think something is valuable, or agree or disagree.
Current information:
How many votes
How valuable these voters found it adjusted by their karma/overall Forum contribution
New potential information:
How many votes
How valuable these voters found it adjusted by their karma/overall Forum contribution
How many overall voters found this valuable
e.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently to 5 people weakly agreeing. One is unanimous; on the other, people have more divided opinions, and it would be good for me to know that as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing).
I would like to see this implemented, as the cost seems small, but there is a fair bit of extra information value.
This does not give a complete picture though.
Say something has 5 karma and 5 votes. First obvious thought: 5 users upvoted the post, each with a vote worth 1. But that’s not the only option:
1 user upvotes (+9), 4 users downvote (−1 each)
2 users upvote (+4 and +6), 3 users downvote (−1, −1 and −3)
3 users upvote (+1, +2 and +10), 2 users downvote (−1 and −7)
Or a whole range of other permutations one can think of that add up to 5, given that different users’ votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and overall number of people who have voted, unless I am missing a feature that shows this in more detail?
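To make the ambiguity concrete, here is a quick sketch (hedged: the Forum’s actual per-user vote weights are not public, so the ±10 range is an assumption) enumerating the ways 5 voters could add up to 5 karma:

```python
from itertools import product

# Illustrative sketch only: the Forum's real per-user vote weights are
# not public, so I assume each voter's vote is worth between -10 and
# +10 (never 0). The point is how many vote splits fit one karma total.
WEIGHTS = [w for w in range(-10, 11) if w != 0]

def vote_splits(karma: int, n_votes: int) -> set:
    """All order-independent ways n_votes voters could sum to karma."""
    return {tuple(sorted(c))
            for c in product(WEIGHTS, repeat=n_votes)
            if sum(c) == karma}

splits = vote_splits(5, 5)
print(len(splits))  # many possibilities, not just five +1 upvotes
```

Under these assumed weights there are far more possibilities than the ‘five +1 upvotes’ reading, so the karma/vote-count pair on hover cannot tell you how divided the voters actually were.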
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focusses on community building/community health. (Put at the top as this got quite long; rationale below, but first:)
I think at least a goal of the post is to get community input (I’ve seen in many previous forum posts) to determine the best suggestions without claiming to have all the answers. Quoted from the original post (intro to ‘Suggested Reforms’):
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can (epistemic status: brainstorming, but plausible) see that a comment like yours can be harmful, and create a more negative perception of EA than the post itself. Perhaps new/newer/potential (and even existing) EAs will read the original post, and they may skim it/read parts/even read the comments first (I don’t think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). And a top comment: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
If the authors of this post are asking for community opinion on which changes are good after laying out their concerns, then the top (for a while at least) comment criticising the post for a lack of theory of change suggests the EA leadership has a low regard for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Unless I am very high up and in the core EA group, I am unlikely to be listened to
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I am not saying that any of the above is true, or that it is absolute (i.e. someone would be led to believe in one of these things absolutely instead of it being on a sliding scale). But if I was new to EA, it is plausible that this comment would be far more likely to put me off continuing engaging than anything written in the actual post itself. Perhaps you can see how this may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (from your profile)/in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
I don’t think the point is that all of the proposals are inherently correct or should be implemented. I don’t agree with all of the suggestions (agree with quite a few, don’t agree with some others), but in the introduction to the ‘Suggested Reforms’ section they literally say:
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
Picking out in particular the parts you don’t agree with may seem almost like strawmanning in this case, and people might be reading the comments rather than the full thing (I was very surprised by how long this was when I clicked on it; I don’t think I’ve seen an 84 minute forum post before). But I’m not claiming this was intentional on either of your parts.
The data I would be most interested to see (if you plan to do further research on this) is of when people started following the page (rather than overall numbers of followers). I believe you mentioned this briefly in the limitations footnote.
A lot of people follow a lot of pages, and may have followed something years ago. If their interests change, but the page doesn’t post, it seems relatively unlikely that someone will go out of their way to unfollow it. Perhaps they’ve even forgotten that they’ve followed it to begin with!
That was my first thought (intuition, no evidence) when you mentioned that the correlation between followers and time since last post was steeper when organisations that had not posted in the last 2 months were removed, i.e. this cuts out the ‘followed this years ago and forgot, no new posts to remind me’ followers.
I find the way your example is written a bit unclear (perhaps a less confusing phrasing is e.g. “two strong upvotes (each worth +6)”). I understood what you were trying to say but I had to read it a few times. By my understanding: you are not convinced that just 5 strong votes should be worth the same as 11 votes which include some strong votes, or 15 regular votes.
To make things perhaps more confusing, I’m not sure everyone has the same multiplier between regular and strong votes. Initially a regular vote is worth 1 and a strong vote 2; now my regular votes are worth 1 and my strong votes are worth 3. So at the very least the 3x multiplier is not the same for newer users.
I can also see the potential of people trying to balance out the fact that their votes are worth less by doing strong votes, or generally people not being completely consistent between what they count as a strong vote and what they count as a regular vote (e.g. between days, or depending on mood, or on the type of post). I expect there is also likely variation between people in how strongly they need to agree with something before they strong vote it.
Adding a cost to strong votes would certainly make people less likely to strong vote, but a ‘fixed fine’ of 1 will make no real difference if you have a lot of karma, while if you have less karma it will make more of one (e.g. I believe you need a certain amount of karma to add coauthors to posts). So for newer users this would mean both having their votes be worth less in comparison and facing higher costs for using strong votes, which I do not think is a good idea.
I think whether or not it makes sense to give senior forumers more weight, the fact that this is the case needs to be considered when drawing conclusions:
e.g. Higher agreement karma does not imply that more people agree with a comment. It may mean that, but it may also mean that a smaller number of people who have interacted with the forum more agree, or that a certain group of people agree more strongly.
You get some idea when you look at the number of votes as well as the karma, but for example in this post something like this could have been discussed but wasn’t.
My guess (without looking at specific examples) is: started by people within the EA community, or by those that reference EA in their explanations for what they do, or that started the project through EA funding sources (less certain about this one; starting through EA funding is probably more likely to be EA, but there are organisations that get EA funding that are not considered EA).
Are there stats on how many people upvoted (rather than the karma)? Some people have interacted with the forum more, so their vote by default may be worth e.g. 5+ times as much as that of other people. Particularly interested to see how this looks in the defense posts.
Also would be interested to see with the number of people rather than karma how much something was strong voted rather than more individual people agreeing/disagreeing
One of my first thoughts when reading was ‘I hope this is not a grant’. Additional details about grant size and this being for setup may make it fair given further thought, and I cannot properly comment without knowing more details, but the fact that this was my first reaction says something about potential optics. I am very happy to pay a markup on t-shirts (if I didn’t want the particular t-shirt I would not buy it at cost anyway, and there seems to be a lot of EA stash going around from EAGs etc. which can be used to spot EAs) if this enables more money to go to other projects. This seems like an obvious thing that can be monetised if enough people are keen to buy it.
Agreed, particularly as bad bureaucracy could have bad results even if everyone has good intentions and good judgement. For example, someone might make the best decision possible given the information they have available, but it has unintended negative consequences because, due to the way the organisation/system was set up, they are missing key information which would have led to a different conclusion.
I think this is an important distinction.
People can inadvertently do bad things with very good intentions due to poor judgement; there is even the proverb ‘the road to hell is paved with good intentions’.
EA emphasises doing good with evidence, with reasoning transparency being considered highly important. People are fallible, and in the case of EA often young and of similar backgrounds, and particularly given the potential consequences (working on the world’s biggest issues, including x-risks) big decisions should be open to scrutiny. And I think it is a good idea to look at what other organisations are doing and take the best bits from the expertise of others.
For example, the Wytham Abbey purchase may (I haven’t seen any numbers myself) make sense from a cost effectiveness perspective, but it really should have been expected that people would ask questions given how grand the venue seems. I think the communication (and at least a basic public cost effectiveness analysis) should have been done more proactively.
Thanks for clarifying further, and some of that rationale does make sense (e.g. it’s important to critically look at the assumptions in models, and how data was collected).
I still think your conclusion/dismissal is too strong, particularly given social science is very broad (much more so than the economics examples given here), some things are inherently harder to model accurately than others, and if experts in a given field have certain approaches the first question I would ask is ‘why’.
It’s better to approach these things with humility and an open mind, particularly given how important the problems are that EA is trying to tackle.
I’ve just commented on your EA forum post, and there’s quite a lot of overlap and further comments seemed more relevant there compared to this post: https://forum.effectivealtruism.org/posts/WYktRSxq4Edw9zsH9/be-less-trusting-of-intuitive-arguments-about-social?commentId=GATZcZbh9kKSQ6QPu
This comment is both in response to this post, and in part to a previous comment thread (linked here, as the continued discussion seemed more relevant to this post than to the evaporative cooling model post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D).
To start out:
When it comes to the reactions of individual humans and populations, there is inherently far more variability than there is in e.g. the laws of physics
No model is perfect, and will always be a simplification of reality (particularly when it comes to populations, but also the case in e.g. engineering models)
A model is only as good as its assumptions, and these should really be stated
Just because a model isn’t perfect, does not mean it has no uses
Sometimes there are large data gaps, or you need to create models under a great degree of uncertainty
There are indeed some really bad models that should probably be ignored, but dismissing entire fields is not the way to approach this
Predicting the future with a large degree of certainty is very hard (hence the dart throwing chimpanzee analogy that made the news, and predictions becoming less accurate after around 5 years or so as per Superforecasting), so a large rate of inaccuracies should not be surprising (although of course you want to minimize these)
Being wrong and then new evidence causing you to update your models is how it should work (edited for clarity: as opposed to not updating your models in those situations)
For this post/general:
What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely/largely without questioning, as opposed to being aware that all models have their limitations and that this should influence how they are applied. And of course ‘people’ is in itself a broad category, with some people being more or less questioning/deferential or more or less likely to jump to conclusions. What I am reading here is a suggestion of ‘we should listen less to these models without question’ without knowing who is doing that, and how frequently, to begin with.
Out of the examples given, the minimum wage one was strong (given that there was a lot of debate about this) and I would count the immigration one as a valid example (people again have argued this, but often in such a politically charged way that how intuitive it is depends on the political opinions of the person reading), but many of the other ones seemed less intuitive or did not follow, perhaps to the point of being a straw man.
I do believe you may be able to convince some people of any one of those arguments and make it intuitive to them, if the population you are looking at is, for example, typical people on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.
There does appear to be a fair bit of deferral within EA, and some people do accept the thoughts of certain people within the community without doing much of their own evaluation (but given this is getting quite long, I’ll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative reasoning not qualitative, nor accepting social models blindly. In the case of ‘evaporative cooling’, that EA Forum post seemed more like ‘this may be/I think it is likely to be the case’ not ‘I have complete and strong belief that this is the case’.
“even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I’ll shrug and eh probably wrong.” Read it first, I hope. Because that sounds more like a soldier than a scout mindset, to use the EA terminology.
That a model does not apply in every situation also does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the same way as you can model the laws of physics; human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well, and perhaps not even then. But trying to understand how a population in general could react should generally be done: after all, if you actually want to implement change it is populations that you need to convince.
I agree with ‘do not assume these models are right on the outset’, that makes sense. But I also think it is unhelpful and potentially harmful to go in with the strong assumption that the model will be wrong, without knowing much about it. Because not being open to potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives of people with relevant expertise (and different to that of many people within EA) will not be heard.
I don’t agree that the conclusions regarding low unmet need for contraception in developing countries, and this being due to access, are correct based on the sources that you have linked (although thanks for providing sources).
I just had a very quick (<5 minute) look at some of the sources regarding the low unmet need for contraception in developing countries, largely because it goes against what I would expect (lower-resource settings having proportionally better provision in this area than high-resource settings). Because I looked very quickly I’ve so far only looked at the abstracts/highlights; however, I expect that nothing in the main text would contradict this.
The source you gave for ‘low unmet need for contraception in developing countries’ is https://pubmed.ncbi.nlm.nih.gov/23489750/. It does say that generally contraceptive prevalence has gone up and unmet needs have gone down (this is a good thing, i.e. progress), unless these were already high or low respectively (not surprising, as a low unmet need can only decrease by a lesser degree than a high unmet need).
However: “The absolute number of married women who either use contraception or who have an unmet need for family planning is projected to grow from 900 million (876-922 million) in 2010 to 962 million (927-992 million) in 2015, and will increase in most developing countries.” This suggests that the unmet need is projected to increase more in developing countries compared to others.
The first source on access is https://www.guttmacher.org/sites/default/files/pdfs/pubs/Contraceptive-Technologies.pdf. It does suggest that in 7 in 10 cases access may not be the main issue: “Seven in 10 women with unmet need in the three regions cite reasons for nonuse that could be rectified with appropriate methods: Twenty-three percent are concerned about health risks or method side effects; 21% have sex infrequently; 17% are postpartum or breast-feeding; and 10% face opposition from their partners or others.” But: “In the short term, women and couples need more information about pregnancy risk and contraceptive methods, as well as better access to high-quality contraceptive services and supplies.” It also says that a quarter of women in developing countries have an unmet need: “In developing countries, one in four sexually active women who want to avoid becoming pregnant have an unmet need for modern contraception.” I would not call that low, and I think this is one of those cases where it is important to put a number on it, as otherwise people may have different definitions of what is/isn’t low.
(A very quick estimate using the first links that come up on Google: 152 developing countries, population approx 6.69 billion total, say therefore around 3.35 billion who are female.
Turns out a quick Google does not bring up the proportion of women who are of childbearing age (15-49), but an interesting 2019 UN source on the need for family planning does come up which breaks down the unmet needs by region and is consistent with saying around 1⁄4 of women in developing countries have unmet needs: https://www.un.org/en/development/desa/population/publications/pdf/popfacts/PopFacts_2019-3.pdf That UN source has a quote: “In 2019, 42 countries, including 23 in sub-Saharan Africa, still had levels of demand satisfied by modern methods below 50 per cent, including three countries of sub-Saharan Africa with levels below 25 per cent ”
Back to that raw numbers estimate I was attempting: 1⁄4 of 3.35 billion is around 840 million for the unmet needs part. Assuming around 1⁄3 of those women are of childbearing age/would benefit from contraceptives, that’s around 280 million people.)
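For transparency, the back-of-envelope arithmetic above can be reproduced in a few lines (every input is the rough figure quoted from the quick Google search, not precise data):

```python
# Rough reproduction of the estimate above; every input is an
# approximate figure from the quick Google search, not precise data.
population_developing = 6.69e9      # approx. population of 152 countries
women = population_developing / 2   # assume half are female
unmet_need_fraction = 1 / 4         # Guttmacher: 1 in 4 with unmet need
childbearing_fraction = 1 / 3       # rough assumption used above

estimate = women * unmet_need_fraction * childbearing_fraction
print(f"{estimate / 1e6:.0f} million")  # roughly 280 million
```

The order of the two fractions does not matter here (both are straight multiplications), so applying the childbearing-age cut before or after the unmet-need cut gives the same result.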
The second source on access is https://pubmed.ncbi.nlm.nih.gov/24931073/. This has less information than the others, as I can by default only see the abstract: “Our findings suggest that access to services that provide a range of methods from which to choose, and information and counseling to help women select and effectively use an appropriate method, can be critical in helping women having unmet need overcome obstacles to contraceptive use.” This suggests that access is critical, and might imply that it is at least in part a reason for the unmet needs.
Edit: me reading the sources took about 5 minutes; the above writeup, including looking some stuff up, (perhaps unsurprisingly) took a bit longer than that. I see, having posted, that Matt Sharp has also made a reply which says something very similar to what I am saying; I would recommend reading that as well.
That’s good to hear re being in favour of efforts to make EA better (edited for clarity). Thanks for your engagement on this.
Agreed with the necessity for awareness around power dynamics, with the nuance that fixing this should not have to fall on the people impacted by it. It was good to see that post when it came out, as it points out things people may not have been aware of.