Hey Arepo, thanks for the comment. I wasn’t trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I’m going to make some edits to clear up confusion I might have introduced. Some quick points of note:
On the comments/Nuño’s stance: I only looked at the direct comments (and the parent comment where I could) in the ones he linked to in the ‘EA Forum Stewardship’ post, so I appreciate the added context. Even having read that, though, I can’t square a disagreement about moderation policy with “I disagree with the EA Forum’s approach to life”—the latter seems very out-of-distribution to me as a response to the former.
On the not-going-far-enough paragraph, I think that’s a wording mistake on my part; I think we’re actually agreeing. Many people’s takeaway from SBF/FTX is the danger of reckless/narrow optimisation, but Nuño seems to me to be pointing in the other direction. Noted on the hypotheticals; I’ll edit those to make that clearer, but I think they’re definitely pointing at an actually-held position that may be more moderate, yet still directionally opposite to many who would criticise ‘institutional EA’ post-FTX.
On the EA machine/OpenPhil conflation: I somewhat get your take here, but on the other hand the post isn’t titled “Unflattering things about the EA machine/OpenPhil-Industrial-Complex”; it’s titled “Unflattering things about EA”. Since EA is, to me, a set of beliefs I think are good, the post reads as an attack on the whole thing, which is then reduced to ‘the EA machine’, which seems to further reduce to OpenPhil. Footnote 4 also scans as defining an EA as someone who ‘pledge[s] allegiance to the EA machine’, which doesn’t seem right to me either. Again, I think Michael’s post raised similar concerns about OpenPhil’s dominance more clearly, at least for me.
Finally, on the maximisation-above-all point, I’m not really sure about this one. I think the ‘constrain the rank and file’ claim is just wrong—maybe because I’m reading it as “this is OpenPhil’s intentional strategy with this messaging” rather than “functionally, this is the effect of the messaging even if unintended”. Where I agree with Nuño here is on the lack of responsiveness to feedback that some EA leadership shows, not on ‘deliberate constraints’.
Perhaps part of this is that, while I did read some of Nuño’s other blogs/posts/comments, there’s a lot of context which is (at least from my background) missing in this post. For some people it really seems to have captured their experience and so they can self-provide that context, but I don’t, so I’ve had trouble doing that here.
Unflattering things about the EA machine/OpenPhil-Industrial-Complex”, it’s titled “Unflattering things about EA”. Since EA is, to me, a set of beliefs I think are good, then it reads as an attack on the whole thing which is then reduced to ‘the EA machine’, which seems to further reduce to OpenPhil
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it’s like: oh well, I guess I’m now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.
But I’m repeating myself, because this is one of the main threads in the post. I have the weird feeling that I’m not being your interlocutor here.
Hey Nuño,

I’ve updated my original comment, hopefully making it fairer and more reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. The ‘switcheroo’ you mention is problematic, and a lot of the ‘EA machinery’ should improve its feedback loops, both internally and with the community.
I think at some level we just disagree about what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for noticing the dynamic you’re pointing to, but to me that dynamic isn’t EA.[1]
As for not being an interlocutor here, I was originally going to respond on your blog, but on reflection I think I need to read (or re-read) the blog posts you’ve linked in this post to understand your position better and look at the examples/evidence you provide in more detail. Your post didn’t connect with me, but it did for a lot of people, so I think it’s on me to go away and try harder to see things from your perspective and do my bit to close that ‘inferential distance’.
I wasn’t intentionally trying to misrepresent you or be hostile, and to the extent I did, I apologise. I very much value your perspectives, hope to keep reading them in the future, and hope the EA community reflects on them and improves.
[1] To me, EA is not the EA machine, and the EA machine is not OpenPhil (though it may be ~90% funded by it). They’re obviously connected, but not the same thing. The ‘what if Dustin shut down OpenPhil’ scenario in my edited comment illustrates this.
I see that saying I disagree with the EA Forum’s “approach to life” rubbed you the wrong way. Changing it seemed low cost, so I’ve replaced it with something wordier.