Hey Arepo, thanks for the comment. I wasn't trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I'm going to make some edits to clear up any confusion I might have introduced. Some quick points of note:
On the comments/Nuño's stance, I only looked at the direct comments (and the parent comment where I could) in the ones he linked to in the "EA Forum Stewardship" post, so I appreciate the added context. Even having read that, though, I can't really square a disagreement about moderation policy with "I disagree with the EA Forum's approach to life" - the latter seems so out-of-distribution to me as a response to the former.
On the not-going-far-enough paragraph, I think that's a spelling/grammar mistake on my part - I think we're actually agreeing? Many people's takeaway from SBF/FTX is the danger of reckless/narrow optimisation, but to me it seems that Nuño is pointing in the other direction. Noted on the hypotheticals; I'll edit those to make that clearer, but I think they're definitely pointing to an actually-held position that may be more moderate, yet still directionally opposite to many who would criticise "institutional EA" post-FTX.
On the EA machine/OpenPhil conflation - I somewhat get your take here, but on the other hand the post isn't titled "Unflattering things about the EA machine/OpenPhil-Industrial-Complex", it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to "the EA machine", which seems to further reduce to OpenPhil. I think footnote 4 also scans as saying that the definition of an EA is someone who "pledge[s] allegiance to the EA machine", which doesn't seem right to me either. I think, again, that Michael's post raised similar concerns about OpenPhil dominance in a way that was clearer to me.
Finally, on the maximisation-above-all point, I'm not really sure about this one? I think the "constrain the rank and file" claim is just wrong? Maybe that's because I'm reading it as "this is OpenPhil's intentional strategy with this messaging" and not "functionally, this is the effect of the messaging even if unintended". The points where I agree with Nuño here are about the lack of responsiveness to feedback that some EA leadership shows, not about "deliberate constraints".
Perhaps part of this is that, while I did read some of Nuño's other blogs/posts/comments, there's a lot of context which (at least given my background) is missing from this post. For some people it really seems to have captured their experience, so they can supply that context themselves, but it didn't capture mine, so I've had trouble doing that here.
Unflattering things about the EA machine/OpenPhil-Industrial-Complex", it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to "the EA machine", which seems to further reduce to OpenPhil
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.
But I'm repeating myself, because this is one of the main threads in the post. I have the weird feeling that I'm not being your interlocutor here.
Hey Nuño,

I've updated my original comment, hopefully making it fairer and more reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. I agree that the "switcheroo" you mention is problematic, and that a lot of the "EA machinery" should improve its feedback loops, both internally and with the community.
I think at some level we just disagree about what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for noticing the dynamic you're pointing to, but to me that dynamic isn't EA.[1]
As for not being an interlocutor here, I was originally going to respond on your blog, but on reflection I think I need to read (or re-read) the blog posts you've linked in this post to understand your position better, and to look at the examples/evidence you provide in more detail. Your post didn't connect with me, but it did for a lot of people, so I think it's on me to go away, try harder to see things from your perspective, and do my bit to close that "inferential distance".
I wasn't intentionally trying to misrepresent you or be hostile, and to the extent I did, I apologise. I very much value your perspectives, I hope to keep reading them in the future, and I hope the EA community reflects on them and improves.
To me, EA is not the EA machine, and the EA machine is not OpenPhil (though it may be 90% funded by it). They're obviously connected, but they're not the same thing. In my edited comment, the "what if Dustin shut down OpenPhil" scenario illustrates this.
I see that saying I disagree with the EA Forum's "approach to life" rubbed you the wrong way. It seemed low-cost to change, so I've replaced it with something wordier.