I’ve been saying this in various comments for a long time, and I was glad to see the point laid out here in more detail.
My comments often look like: “When you say that ‘EA should do X’, which people and organizations in EA are you referring to? What should we do more/less of in order to do less/more of X? What are cases where X would clearly be useful?”
I’m a big fan of the two-paper rule, and will try to remember to apply it when I respond to methodology-driven posts in the future.
Regarding this claim:
EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer’s curse about eight years ago, something which had the benefit of a mathematical proof.
I appreciate that you went on to qualify this statement, but I’d still have appreciated some more justification. Namely, what are some popular ideas that many people thought were a step forward, but that you believe were not?
If methodological ideas generally haven’t been popular, that would itself suggest EA isn’t really emphasizing methodology; if some have been popular, I’d be curious to see any other writing you’ve done on why you don’t think they helped. (I realize that would be a lot of work, and it may not be a good use of your time to satisfy my curiosity.)
When I look at the top ~50 Forum posts of all time (sorted by karma), I only see one that is about methodology, and it’s not as much prescriptive as it is descriptive (“EA is biased towards some methodologies, other methodologies exist, but I’m not actively recommending any particular alternatives”). Almost all the posts are about object-level research or community work, at least as far as I understand the term “object-level”.
I can only think of a few cases when established EA orgs/researchers explicitly recommended semi-novel approaches to methodology, and I’m not sure whether my examples (cluster thinking, epistemic modesty) even count. People who recommend, say, using anthropological methods in EA generally haven’t gotten much attention (as far as I can recall).
I’m also thinking of how there has been more back-and-forth about the optimizer’s curse, with people arguing that it needs to be taken more seriously.
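For readers who haven’t followed that back-and-forth: the optimizer’s curse is the phenomenon where, if you pick the option with the highest *estimated* value, that estimate will tend to overstate the option’s *true* value, because selection favors options whose noise happened to be positive. A minimal simulation sketch (my own illustration, not from the original post; the function name and parameters are made up for this example):

```python
import random

random.seed(0)

def optimizers_curse_demo(n_options=50, noise=1.0, trials=2000):
    """Average gap between the estimated and true value of the option
    chosen by maximizing noisy estimates. A positive gap means the
    winning estimate systematically overstates the true value."""
    gap = 0.0
    for _ in range(trials):
        # True values and noisy estimates of each option.
        true_vals = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + random.gauss(0, noise) for v in true_vals]
        # Choose the option that *looks* best.
        best = max(range(n_options), key=lambda i: estimates[i])
        gap += estimates[best] - true_vals[best]
    return gap / trials

print(optimizers_curse_demo())  # clearly positive, roughly 1 with these parameters
```

The more options you compare and the noisier your estimates, the larger this gap gets, which is why the curse bites hardest for EA-style comparisons across many highly uncertain interventions.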
I don’t think the prescriptive vs. descriptive distinction really changes things: descriptive philosophizing about methodology is arguably less useful than just telling EAs what to do differently and why.
I grant that #3 on this list is the rarest of the four; the established EA groups are generally doing fine here, AFAIK. There is a perfectly good CSER writeup on methodology here: https://www.cser.ac.uk/resources/probabilities-methodologies-and-evidence-base-existential-risk-assessments-cccr2018/ — though it’s about a specific domain they know, rather than EA methodology in general.