Co-founder, executive director and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course—a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I’m wrong.
The data I would be most interested to see (if you plan to do further research on this) is when people started following the page (rather than the overall number of followers). I believe you mentioned this briefly in the limitations footnote.
A lot of people follow a lot of pages, and may have followed something years ago. If their interests change, but the page doesn’t post, it seems relatively unlikely that someone will go out of their way to unfollow it. Perhaps they’ve even forgotten that they’ve followed it to begin with!
That was my first thought (intuition, no evidence) when you mentioned that the relationship between followers and time since last post was steeper when organisations that have not posted in the last 2 months were removed, i.e. this cuts out the ‘followed this years ago and forgot, no new posts to remind me’ followers.
I find the way your example is written a bit unclear (perhaps a less confusing phrasing would be e.g. “two strong upvotes (each worth +6)”). I understood what you were trying to say, but I had to read it a few times. By my understanding: you are not convinced that just five strong votes should be worth the same as eleven votes which include some strong votes, or as fifteen regular votes.
To make things perhaps more confusing, I’m not sure everyone has the same multiplier between regular and strong votes. Initially a regular vote is worth 1 and a strong vote 2; now my regular votes are worth 1 and my strong votes are worth 3. So at the very least the 3x multiplier does not apply to newer users.
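Here is a minimal sketch of that arithmetic (assuming my current weights of 1 for a regular vote and 3 for a strong vote; as above, the weights differ for newer users): very different voting patterns can land on exactly the same karma total.

```python
# Minimal sketch: karma totals under assumed vote weights
# (regular = 1, strong = 3, as on my account; newer accounts differ).

def karma(regular: int, strong: int, w_regular: int = 1, w_strong: int = 3) -> int:
    """Total karma from a mix of regular and strong votes."""
    return regular * w_regular + strong * w_strong

print(karma(0, 5))   # 5 strong votes              -> 15
print(karma(9, 2))   # 11 votes, 2 of them strong  -> 15
print(karma(15, 0))  # 15 regular votes            -> 15
```

So a karma total of 15 alone cannot tell you whether 5, 11 or 15 people voted.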
I can also see the potential for people trying to balance out the fact that their votes are worth less by strong voting, or for people generally not being completely consistent about what they count as a strong vote versus a regular vote (e.g. between days, or depending on mood, or on the type of post). I expect there is also likely variation between people in how strongly they need to agree with something before they strong vote it.
Adding a cost to strong votes would certainly make people less likely to strong vote, but a ‘fixed fine’ of 1 will make no real difference if you have a lot of karma, while it will matter more if you have less (e.g. I believe you need a certain amount of karma to add co-authors to posts). So newer users would face both votes that are worth less in comparison and higher relative costs for using strong votes, which I do not think is a good idea.
Whether or not it makes sense to give more senior forum users more weight, I think the fact that this is the case needs to be considered when drawing conclusions:
e.g. Higher agreement karma does not imply that more people agree with a comment. It may mean that, but it may also be that fewer people who have interacted with the forum more agree, or that a certain group of people agree more strongly.
You get some idea by looking at the number of votes as well as the karma, but in this post, for example, something like this could have been discussed and wasn’t.
My guess (without looking at specific examples) is: organisations started by people within the EA community, or that reference EA in their explanations of what they do, or that started through EA funding sources (less certain about this one: starting through EA funding probably makes an organisation more likely to be EA, but there are organisations that get EA funding that are not considered EA).
Are there stats on how many people upvoted (rather than just the karma)? Some people have interacted with the forum more, so their vote may by default be worth e.g. 5x+ as much as that of other people. I’m particularly interested to see how this looks for the defense posts.
I would also be interested to see, alongside the number of people rather than the karma, how much of the total came from strong votes rather than from more individual people agreeing/disagreeing.
One of my first thoughts when reading was ‘I hope this is not a grant’. Additional details about the grant size and this being for setup may make it fair given further thought, and I cannot properly comment without knowing more details, but the fact that this was my first reaction says something about the potential optics. I am very happy to pay a markup on t-shirts if this enables more money to go to other projects (if I didn’t want the particular t-shirt, I would not buy it at cost anyway, and there seems to be a lot of EA stash going around from EAGs etc. which can be used to spot EAs). This seems like an obvious thing that can be monetised if enough people are keen to buy it.
Agreed, particularly as bad bureaucracy can have bad results even if everyone has good intentions and good judgement. For example, someone might make the best decision possible given the information available to them, but it has unintended negative consequences because, due to the way the organisation/system was set up, they were missing key information which would have led to a different conclusion.
I think this is an important distinction.
People can inadvertently do bad things with very good intentions due to poor judgement; there is even the proverb ‘the road to hell is paved with good intentions’.
EA emphasises doing good with evidence, with reasoning transparency being considered highly important. People are fallible, and in the case of EA often young and of similar backgrounds, and particularly given the potential consequences (working on the world’s biggest issues, including x-risks), big decisions should be open to scrutiny. I also think it is a good idea to look at what other companies are doing and take the best bits from the expertise of others.
For example, the Wytham Abbey purchase may (I haven’t seen any numbers myself) make sense from a cost effectiveness perspective, but it really should have been expected that people would ask questions given how grand the venue seems. I think the communication (and at least a basic public cost effectiveness analysis) should have been done more proactively.
Thanks for clarifying further, and some of that rationale does make sense (e.g. it’s important to critically look at the assumptions in models, and how data was collected).
I still think your conclusion/dismissal is too strong, particularly given that social science is very broad (much more so than the economics examples given here), that some things are inherently harder to model accurately than others, and that if experts in a given field take certain approaches, the first question I would ask is ‘why’.
It’s better to approach these things with humility and an open mind, particularly given how important the problems are that EA is trying to tackle.
I’ve just commented on your EA forum post, and there’s quite a lot of overlap and further comments seemed more relevant there compared to this post: https://forum.effectivealtruism.org/posts/WYktRSxq4Edw9zsH9/be-less-trusting-of-intuitive-arguments-about-social?commentId=GATZcZbh9kKSQ6QPu
This comment is in response both to this post and in part to a previous comment thread, as the continued discussion seemed more relevant here than in the evaporative cooling post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D
To start out:
When it comes to the reactions of individual humans and populations, there is inherently far more variability than there is in e.g. the laws of physics
No model is perfect; every model is a simplification of reality (particularly when it comes to populations, but also the case in e.g. engineering models)
A model is only as good as its assumptions, and these should really be stated
Just because a model isn’t perfect does not mean it has no uses
Sometimes there are large data gaps, or you need to create models under a great degree of uncertainty
There are indeed some really bad models that should probably be ignored, but dismissing entire fields is not the way to approach this
Predicting the future with a large degree of certainty is very hard (hence the dart-throwing chimpanzee analogy that made the news, and predictions becoming less accurate after around 5 years, as per Superforecasting), so a large rate of inaccuracies should not be surprising (although of course you want to minimise these)
Being wrong and then new evidence causing you to update your models is how it should work (edited for clarity: as opposed to not updating your models in those situations)
For this post/general:
What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely or largely without question, as opposed to being aware that all models have their limitations and that this should influence how they are applied. And of course ‘people’ is in itself a broad category, with some people being more or less questioning/deferential, or more or less likely to jump to conclusions. What I am reading here is a suggestion of ‘we should listen less to these models without question’ without knowing who is doing that, and how frequently, to begin with.
Out of the examples given, the minimum wage one was strong (given that there was a lot of debate about this), and I would count the immigration one as valid (people have argued this too, but often in such a politically charged way that how intuitive it seems depends on the political opinions of the reader). But many of the other examples seemed less intuitive, or did not follow, perhaps to the point of being straw men.
I do believe you may be able to convince some people of any one of those arguments and make it intuitive to them, if the population you are looking at is, for example, typical people on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.
There does appear to be a fair bit of deferral within EA, and some people do accept the thoughts of certain people within the community without doing much of their own evaluation (but given this is getting quite long, I’ll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative reasoning not qualitative, nor accepting social models blindly. In the case of ‘evaporative cooling’, that EA Forum post seemed more like ‘this may be/I think it is likely to be the case’ not ‘I have complete and strong belief that this is the case’.
“even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I’ll shrug and eh probably wrong.” Read it first, I hope. Because that sounds like more of a soldier than a scout mindset, to use the EA terminology.
Even if a model does not apply in every situation, that does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the way you can model the laws of physics: human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well, and perhaps not even then. But trying to understand how a population in general could react should still be done; after all, if you actually want to implement change, it is populations that you need to convince.
I agree with ‘do not assume these models are right at the outset’; that makes sense. But I also think it is unhelpful and potentially harmful to go in with the strong assumption that the model will be wrong, without knowing much about it. Not being open to the potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives of people with relevant expertise (different to that of many people within EA) will not be heard.
I don’t agree that the conclusion regarding low unmet need for contraception in developing countries, and this being due to access, is correct based on the sources that you have linked (although thanks for providing sources).
I just had a very quick (<5 minute) look at some of the sources regarding the low unmet need for contraception in developing countries, largely because it goes against what I would expect (it would imply lower-resource settings having proportionally better provision in this area than high-resource settings). Because I looked very quickly, I have so far only read the abstracts/highlights; however, I expect nothing in the main text would contradict this.
The source you gave for ‘low unmet need for contraception in developing countries’: https://pubmed.ncbi.nlm.nih.gov/23489750/ It does say that generally contraceptive prevalence has gone up and unmet need has gone down (a good thing, i.e. progress), unless prevalence was already high or unmet need already low (not surprising: a low unmet need can only decrease by a lesser degree than a high unmet need).
However: “The absolute number of married women who either use contraception or who have an unmet need for family planning is projected to grow from 900 million (876-922 million) in 2010 to 962 million (927-992 million) in 2015, and will increase in most developing countries.” This suggests that the unmet need is projected to increase more in developing countries compared to others.
The first source on access: https://www.guttmacher.org/sites/default/files/pdfs/pubs/Contraceptive-Technologies.pdf It does suggest that in 7 in 10 cases access may not be the main issue: “Seven in 10 women with unmet need in the three regions cite reasons for nonuse that could be rectified with appropriate methods: Twenty-three percent are concerned about health risks or method side effects; 21% have sex infrequently; 17% are postpartum or breast-feeding; and 10% face opposition from their partners or others.” But: “In the short term, women and couples need more information about pregnancy risk and contraceptive methods, as well as better access to high-quality contraceptive services and supplies.” It also says that a quarter of women in developing countries have an unmet need: “In developing countries, one in four sexually active women who want to avoid becoming pregnant have an unmet need for modern contraception.” I would not call that low, and I think this is one of those cases where it is important to put numbers on things, as otherwise people may have different definitions of what is/isn’t low.
(A very quick estimate using the first links that come up on Google: 152 developing countries, total population approx 6.69 billion, so say around 3.35 billion of whom are female.
It turns out a quick Google does not bring up the proportion of women who are of childbearing age (15-49), but an interesting 2019 UN source on the need for family planning does come up, which breaks down unmet need by region and is consistent with around 1/4 of women in developing countries having unmet need: https://www.un.org/en/development/desa/population/publications/pdf/popfacts/PopFacts_2019-3.pdf That UN source has a quote: “In 2019, 42 countries, including 23 in sub-Saharan Africa, still had levels of demand satisfied by modern methods below 50 per cent, including three countries of sub-Saharan Africa with levels below 25 per cent”
Back to the raw-numbers estimate I was attempting: 1/4 of 3.35 billion is around 840 million for the unmet-need part. Say roughly 1/3 of those women are of childbearing age/would benefit from contraceptives: that’s around 280 million people.)
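To make that back-of-envelope arithmetic explicit, here is a minimal sketch; all the inputs (the 6.69 billion population, the 50% female share, the 1/4 unmet-need rate, and the 1/3 childbearing-age guess) are rough figures from the quick searches and sources above, not precise data.

```python
# Back-of-envelope estimate; all inputs are rough figures from quick
# Google searches and the UN/Guttmacher sources linked above.

developing_population = 6.69e9  # total population of ~152 developing countries
female_share = 0.5              # assume roughly half are female
unmet_need_rate = 0.25          # ~1 in 4 women with unmet need
childbearing_share = 1 / 3      # rough guess: share of women aged 15-49

women = developing_population * female_share           # ~3.35 billion
with_unmet_need = women * unmet_need_rate              # ~840 million
of_childbearing_age = with_unmet_need * childbearing_share

print(f"{of_childbearing_age / 1e6:.0f} million")      # ~280 million
```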
The second source on access: https://pubmed.ncbi.nlm.nih.gov/24931073/ This has less information than the others, as I can by default only see the abstract: “Our findings suggest that access to services that provide a range of methods from which to choose, and information and counseling to help women select and effectively use an appropriate method, can be critical in helping women having unmet need overcome obstacles to contraceptive use.” This suggests that access is critical, and might imply that it is at least in part a reason for the unmet need.
Edit: me reading the sources took about 5 minutes; the above writeup, including me looking some things up, (perhaps unsurprisingly) took a bit longer than that. I see, having posted, that Matt Sharp has also made a reply which says something very similar to what I am saying; I would recommend reading that as well.
Based on your comment I looked this up:
Right now, flights from London to San Francisco cost £400-£500, compared to what they may cost at shorter notice (approx £1500+ in some cases). The difference is roughly 3-4x, and you could buy flights plus accommodation for a week now (around 2 months out) for less than just the flights may cost around 2 weeks out (which is when the EA Global website says you would hear by). This is a significant difference when acting under the assumption of not being able to receive travel grant funding. I can see this in many cases being the difference between ‘I can afford to go’ and ‘I cannot, and will need the travel funding’, particularly as hotels are also likely to sell out, with the remaining ones potentially being more expensive or further away.
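As a rough illustration of how much this matters (a sketch using the approximate prices above; the £700 accommodation figure is a hypothetical placeholder, not a quoted price):

```python
# Rough illustration with the approximate prices quoted above.
# The accommodation figure is a hypothetical placeholder.

flights_now = 450            # £, London -> San Francisco, booked ~2 months out
flights_late = 1500          # £, same route booked ~2 weeks out
week_of_accommodation = 700  # £, assumed early-booking cost for a week

print(f"Late flights alone:        £{flights_late}")
print(f"Early flights + week stay: £{flights_now + week_of_accommodation}")
print(f"Flight price ratio:        {flights_late / flights_now:.1f}x")  # ~3.3x
```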
For EAGs, there was a policy that if you were accepted into one in a given year, you would be accepted into all of them. If this continues, it feels like there should perhaps be an early application round, so people could know they would get into future conferences (if they wanted to) and book flights/accommodation in advance accordingly.
(For EAGxs the apply-to-one, get-into-all policy did not exist, but those are meant to be regional, so the travel costs are significantly lower anyway, at least within Europe.)
“Almost all social science is wrong” is a very strong assertion without evidence to back it up, and I think such over-generalizations are unhelpful.
Or perhaps a ‘shorter’ version of longtermism (which would also be easier to model):
Your lifetime, plus that of your children, grandchildren and x generations into the future. This is a given, rather than requiring the assumptions needed to reach the higher numbers (e.g. humanity spreading to the stars), which are therefore more open to dispute.
Thanks for writing this, I think it’s a valuable post with actionable suggestions.
Emotions are naturally running very high right now, and this post is good both for reminding people that yes, it is ok to have strong emotions about this, and that these reactions are understandable and normal.
Reposting my penultimate paragraph as it is important and in case people don’t otherwise read that far:
A lot of people’s emotions are running high right now; remember that when reading other people’s comments, in the same way they should remember it when reading what you write, including this post.
I don’t usually like responding to these sorts of comments as it is rarely worth it, but:
I truly hope your usual method of arguing is not this ad hominem, with emotive language used rather than rational argument.
Edit: I understand emotions are running high, and I see your above comment. Personal attacks, particularly on specific individuals in the above way, really aren’t appropriate though, which is why I felt I had to say something here.
Hi, it’s sad to hear that you feel this way, and I can understand why you have this sort of sentiment. A lot of emotions are running high right now.
But what I have not seen mention of here:
I have not had funding affected by this (apart from it reducing potential future funding sources), so I am saying this as someone far less affected than most. But a lot of people had their funding come in part or in whole from FTX, and that funding is now uncertain. As far as I am aware, there has been no guarantee that all promised grants will be fulfilled, even through another funding source like Open Phil, who mentioned raising their criteria. There will be people who, as a direct result of this, do not know whether their work can continue, or whether they will lose their job/business/other project, or whether money they already received in good faith (while trying to have the highest positive impact they could) and spent may even be clawed back. Some of these people will be students, or people early in their careers, who are less likely to have savings to fall back on, or who in the event of a clawback may have their savings wiped out (I have no idea of the actual likelihood of this, but I have seen it discussed on the forum). Some have perhaps not been given a clear answer by people in the movement they may have been expecting one from, who are instead remaining silent (although there might have been private communication that I am unaware of). My sympathy lies with them the most, and I do not blame them (or others) for questioning people who perhaps could have known more (not saying anyone did or didn’t; I wouldn’t know beyond what is in the media).
And that’s of course just the people affected within the EA community. That’s not mentioning the hundreds of thousands or millions of customers who were directly stolen from, many of whom lost significant amounts of money or even their life savings. That’s not mentioning the people who will be affected in the future fallout.
“This has included Will MacAskill and other thought leaders for the grave sin of not magically predicting that someone whose every external action suggested that he wanted to work with us to make the world a better place, would YOLO it and go Bernie Madoff. The hunt has included members of Sam’s family for the grave sin of being related to him.”
I think it is fairly natural to question whether certain people knew more than most, when it externally seems like Will MacAskill may have been some form of mentor to SBF for nearly a decade (I know nothing here, just going by the media for this one, and it seems likely that this may have been overstated in the media as it makes a good story). Family members could plausibly have known more than most, and of course nobody should be threatened, but luckily I have not really seen this within the EA community.
“It has included attributing the cause of Sam’s actions to everything from issues with diversity and inclusivity, lack of transparency in EAG admissions, the pitfalls of caring if we all get eviscerated by a nuke or rogue AI, and, of course, our office spaces and dating habits.”
Dating habits can very much be a conflict of interest (Google it), particularly if they are likely to influence something like the willingness to provide *multi-billion dollar fraudulent loans* to the person you are (allegedly) dating.
“But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”?”
Who’s actually done this? Source? And the main ‘curb-stomped’ people, if you can call it that, are literally 1) someone who, it seems more likely than not, committed multi-billion dollar fraud (plus some of his inner circle), and 2) someone who at this point is likely the main public face of the movement, whom said fraudster claimed (in many past interviews) had motivated him (and public figures should not be above question). Innocent until proven guilty, of course, but even if the odd person takes things too far, that is not indicative of a movement-wide ‘witch hunt’.
A lot of people’s emotions are running high right now; remember that when reading other people’s comments, in the same way they should remember it when reading what you write, including this post.
(And in fairness to Twitter, it has been more balanced than I was expecting (considering my base rate for expected Twitter discourse is basically people screaming/a witch hunt, you don’t go to Twitter for reasoned debate). People appear to be defending EA, and that includes people from the public.)
Incidentally, Jonas Vollmer’s comment on this forum post (can’t seem to link it, sorry; at the time of writing it is the comment above mine) gives example(s) where an EA Forum post has been directly quoted by Forbes.
e.g. https://www.forbes.com/sites/johnhyatt/2022/11/17/disgraced-crypto-trader-sam-bankman-fried-was-a-big-backer-of-effective-altruism-now-that-movement-has-a-big-black-eye/?sh=5e5a531b4ce7
Anyone know what can and can’t be quoted? Is everything quotable? Is there any permission required?
I don’t think the point is that all of the proposals are inherently correct or should be implemented. I don’t agree with all of the suggestions (agree with quite a few, don’t agree with some others), but in the introduction to the ‘Suggested Reforms’ section they literally say:
Picking out in particular the parts you don’t agree with may seem almost like strawmanning in this case, and people might be reading the comments rather than the full thing (I was very surprised by how long this was when I clicked on it; I don’t think I’ve seen an 84-minute forum post before). But I’m not claiming this was intentional on either of your parts.