For what it’s worth, I’m (at least partly) sympathetic to Oli’s position here. If nothing else, from my end I’m not confident that the combined time usage of:
[Oli producing book-length critique of CSER/Leverhulme, or me personally, depending] +
[me producing presumably book-length response] +
[further back and forth] +
[a whole lot of forum readers trying to unpick the disagreements]
is overall worth it, particularly given that (a) it seems likely to me there are some worldview/cultural differences that would take time to unpick, and (b) I will be limited in what I can say on certain matters by professional constraints/norms.
I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and it seems like a good place to start:
https://forum.effectivealtruism.org/posts/an9GrNXrdMwBJpHeC/long-term-future-fund-august-2019-grant-recommendations-1#Addendum__Thoughts_on_a_Strategy_Article_by_the_Leadership_of_Leverhulme_CFI_and_CSER
And as to the claim “I also wouldn’t be surprised if Sean’s takes were ultimately responsible for a good chunk of associated pressure and attacks on people’s intellectual integrity”: it seems like some of this is based on my online comments/writing. I don’t believe I’ve ever deleted anything on the EA Forum or LW, or very much on Twitter/LinkedIn (the online media I use), and my papers are all online, so again a decent place for anyone to start is to search for my username and come to their own conclusions.
Yep, that’s the one I was thinking about. I’ve changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.