Important Update: I've made some changes to this comment given the feedback from Nuño & Arepo.[1] I was originally using strikethroughs, but this seemed to make it very hard to read, so I've instead edited it inline. The comment is therefore fairly different from the original one (though I think that's for the better).
On reflection, I think that Nuño and I are very different people, with different backgrounds, experiences with EA, and approaches to communication. This leads to a large "inferential distance" between us. For example:
Nuño wrote: "I disagree with the EA Forum's approach to life"
They meant: "I have a large number of disagreements with the EA Forum's approach to moderation, curation, aesthetics, or cost-effectiveness"
I interpreted: "I have significant issues with the kind of people who run and use the EA Forum"
They might mean:[2] "OpenPhil's messaging has been inconsistent with their actions, and they're squandering the potential of the rank-and-file EA membership"
I interpreted: "OpenPhil staff are deliberately and knowingly deceptive in their communications when making non-maximalist/worldview-diversification arguments, and are intentionally using them to maintain control of the community in a malicious way"
While some of my interpretations were obviously not what Nuño intended to communicate, I think this is partly due to Nuño's bellicose framings (his words; see footnote 3 of Unflattering aspects of Effective Altruism), which were unhelpful for productive communication on a charged issue. I still maintain that EA is primarily a set of ideas,[3] not institutions, and it's important to make this distinction when criticising EA organisations (or "The EA Machine"). In retrospect, I wonder if the post should have been titled something like "Unflattering Aspects of How EA Is Structured", which I'd have a lot of agreement with in many respects.
I wasn't sure what to make of this, personally. I appreciate a valued member of the community offering criticism of the establishment/orthodoxy, but some of this just seemed... off to me. I've weakly downvoted, and I'll try to explain some of the reasons why below:
Nuño's criticism of the EA Forum seems to be that it is:
i) not worth the cost (which, fair),
ii) not as lean a web interface as it could be (which is more personal preference than a reason to step away from all EA),
and iii) overly heavy-handed on personal moderation.
But the examples Nuño gives in the Brief thoughts on CEA's stewardship of the EA Forum post, especially Sabs, seem to me to be people being incredibly rude and not contributing to the Forum in helpful or especially truth-seeking ways. Nuño does provide some explanation (see the links Arepo provides), but not in the "Unflattering Aspects" post, and I think that causes confusion. Even in Nuño's comment on another chain, I don't understand summarising their disagreement as "I disagree with the EA Forum's approach to life". Nuño has since changed that phrasing, and I think the new wording is better.
Still, it seemed like a very odd turn of phrase to use initially, and one that was unproductive to getting their point across, which is one of my other main concerns about the post. Some of the language in Unflattering aspects of Effective Altruism struck me as hostile and as not providing much context for readers. For example:[4] I don't think the Forum is "now more of a vehicle for pushing ideas CEA wants you to know about", and I don't think OpenPhil uses "worries about the dangers of maximization to constrain the rank and file in a hypocritical way". I don't think that one needs to "pledge allegiance to the EA machine" in order to be considered an EA. It's just not the EA I've been involved with, and I'm definitely not part of the "inner circle" and have no special access to OpenPhil's attention or money. I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's What is Effective Altruism? How could it be improved? post.
There are some parts of the essay where Nuño and I very much agree. I think the points about the leadership not making itself accountable to the community are very valid, and a key part of what Third Wave Effective Altruism should be. I think depicting it as a "leadership without consent" is pointing at something real, and in the comments on Nuño's blog Austin Chen is saying a lot that makes sense. I agree with Nuño that the "OpenPhil switcheroo" phenomenon is concerning and bad when it happens. Maybe this is just a semantic difference in what Nuño and I mean by "EA", but to me EA is more than OpenPhil. If tomorrow Dustin decided to wind down OpenPhil in its entirety, I don't think the arguments in Famine, Affluence, and Morality would lose their force, or that factory farming would become any less of a moral catastrophe, or that our duty to act prudently towards future generations would go away.
Furthermore, while criticising OpenPhil and EA leadership, Nuño appears to claim that these organisations need to do more "unconstrained" consequentialist reasoning,[5] whereas my intuition is that many in the community see the failure of SBF/FTX as a case where that form of unconstrained consequentialism went disastrously wrong. While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. This is probably the weakest concern I have with the piece though, especially on reflection.
I think it's likely that Nuño means something very different with this phrasing than I do, but I think the mix of ambiguity and hostility can lead these extracts to be read in this way.
Not to say that Nuño condones SBF or his actions in any way. I think this is just another case where one person's choice to get off the "train to crazy town" can be viewed as another's "cop-out".
Hey, thanks for the comment. Indeed, something I was worried about with the later post was whether I was being a bit unhinged (but the converse is: am I afraid to point out dynamics that I think are correct?). I dealt with this by first asking friends for feedback, then posting it but not distributing it very widely, and then, once I got some comments (some of them private) saying that this also corresponded to other people's impressions, I decided to share it more widely.
The examples Nuño gives...
You are picking on the weakest example. The strongest one might be Sapphire. A more recent one might have been John Halstead, who had a bad day, and despite his longstanding contributions to the community was treated with very little empathy and left the forum.
Furthermore, while criticising OpenPhil/EA "leadership", Nuño doesn't make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don't think we've gone far enough into that direction.[1] See in Alternate Visions of EA, the claim "if you aren't producing an SBF or two, then your movement isn't being ambitious enough". In a comment reply in Why are we not harder, better, faster, stronger?, they say "There is a perspective from which having a few SBFs is a healthy sign." While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. Be aware of this if you're upvoting.
I think this paragraph misrepresents me:
I don't claim that "if you aren't producing an SBF or two, then your movement isn't being ambitious enough". I explore different ways EA could look, and then write "From the creators of 'if you haven't missed a flight, you are spending too much time in airports' comes 'if you aren't producing an SBF or two, then your movement isn't being ambitious enough.'". The first is a bold assertion; the second is reasonable to present in the context of exploring varied possibilities.
The full context for the other quote is "There is a perspective from which having a few SBFs is a healthy sign. Sure, you would rather have zero, but the extreme in which one of your members scams billions seems better than one in which your followers are bogged down writing boring plays, or never organize to do meaningful action. I'm not actually sure that I do agree with this perspective, but I think there is something to it." (bold mine). Another way to word this less provocatively is: even with SBF, I think the EA community has had positive impact.
In general, I think picking quotes out of context just seems deeply hostile.
"First, their priorities are different from mine" (so what?)
So if leadership has priorities different from the rest of the movement, the rest of the movement should be more reluctant to follow. But this is for people to decide individually, I think.
"the EA machine has been making some weird and mediocre moves" (would like some actual examples on the object-level of this)
but without evidence to back this up
You can see some examples in section 5.
A view towards maximisation above all,
I think the strongest version of my current beliefs is that quantification is underdeployed on the margin and that it can unearth Pareto improvements. This is joined with an impression that we should generally be much more ambitious. This doesn't require me to believe that more maximization will always be good, rather that, at the current margin, more ambition is.
Really appreciate your reply, Nuño, and apologies if I've misrepresented you, or if I'm coming across as overly hostile. I'll edit my original comment given your & Arepo's responses. I think part of why I posted my comment (even though I was nervous to) is that you're a highly valued member of the community[1], and your criticisms are listened to and carry weight. I am/was just trying to do my part to kick the tires, and to distinguish criticisms I think are valid/supported from those which are less so.
On the object-level claims, I'm going to come over to your home turf (blog) and discuss it there, given you expressed a preference for it! Though if you don't think it'll be valuable for you, then by all means feel free to not engage. I think there are actually lots of points where we agree (at least directionally), so I hope it may be productive, or at least useful for you if I can provide good/constructive criticism.
I think much of this criticism is off. There are things I would disagree with Nuno on, but most of what you're highlighting doesn't seem to fairly represent his actual concerns.
Nuño never argues for why the comments they link to shouldn't be moderated
He does. Also, I suspect his main concern is with people being banned rather than having their posts moderated.
Nuño doesn't make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don't think we've gone far enough into that direction. See in Alternate Visions of EA, the claim "if you aren't producing an SBF or two, then your movement isn't being ambitious enough". In a comment reply in Why are we not harder, better, faster, stronger?, they say "There is a perspective from which having a few SBFs is a healthy sign."
I don't know what Nuno actually believes, but he carefully couches both of these as hypotheticals, so I don't think you should cite them as things he believes. (In the same section, he hypothetically imagines "What if EA goes (¿continues to go?) in the direction of being a belief that is like an attire, without further consequences. People sip their wine and participate in the petty drama, but they don't act differently." - which I don't think he advocates.)
Also, you're equivocating between the claim that EA is too naive (which he certainly seems to believe), the claim that it is too consequentialist (which I suspect but don't know he believes), the claim that it ignores common sense (which I imagine he believes), what he's actually said he believes (that he thinks it should optimise more vigorously), and what the hypothetical you quote says.
"the EA machine has been making some weird and mediocre moves" (would like some actual examples on the object-level of this)
I'm not sure what you want here: his blog is full of criticisms of EA organisations, including those linked in the OP.
"First, their priorities are different from mine" (so what?)
He literally links to why he thinks their priorities are bad in the same sentence.
a conflation of Effective Altruism with Open Philanthropy with a disregarding of things that fall in the former but not the latter
I don't think it's reasonable to assert that he conflates them in a post that estimates the degree to which OP money dominates the EA sphere, that includes the header "That the structural organization of the movement is distinct from the philosophy", and that states "I think it makes sense for the rank and file EAs to more often do something different from EA™". I read his criticism as being precisely that EA, the non-OP part of the movement, has a lot of potential value, which is being curtailed by relying too much on OP.
A view towards maximisation above all, and paranoia that concerns about unconstrained maximisation are examples of "EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way" but without evidence to back this up as an actual strategy.
I think you're misrepresenting the exact sentence you quote, which contains the modifier "to constrain the rank and file in a hypocritical way". I don't know how in favour of maximisation Nuno is, but what he actually writes about in that section is the ways OP has pursued maximising strategies of their own that don't seem to respect the concerns they profess.
You don't have to agree with him on any of these points, but in general I don't think he's saying what you think he's saying.
Hey Arepo, thanks for the comment. I wasn't trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I'm going to make some edits to clear up confusion I might have introduced. Some quick points of note:
On the comments/Nuño's stance, I only looked at the direct comments (and parent comment where I could) in the ones he linked to in the "EA Forum Stewardship" post, so I appreciate the added context. Even having read that, though, I can't really square a disagreement about moderation policy with "I disagree with the EA Forum's approach to life"; the latter seems so out-of-distribution to me as a response to the former.
On the not-going-far-enough paragraph, I think that's a spelling/grammar mistake on my part; I think we're actually agreeing? I think many people's takeaway from SBF/FTX is the danger of reckless/narrow optimisation, but to me it seems that Nuño is pointing in the other direction. Noted on the hypotheticals; I'll edit those to make that clearer, but I think they're definitely pointing to an actually-held position that may be more moderate, but is still directionally opposite to many who would criticise "institutional EA" post-FTX.
On the EA machine/OpenPhil conflation: I somewhat get your take here, but on the other hand the post isn't titled "Unflattering things about the EA machine/OpenPhil-Industrial-Complex"; it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to "the EA machine", which seems to further reduce to OpenPhil. I think footnote 4 also scans as saying that the definition of an EA is someone who "pledge[s] allegiance to the EA machine", which doesn't seem right to me either. I think, again, that Michael's post raised similar concerns about OpenPhil dominance in a clearer way to me.
Finally, on the maximisation-above-all point, I'm not really sure about this one? Like, I think the "constrain the rank and file" claim is just wrong? Maybe because I'm reading it as "this is OpenPhil's intentional strategy with this messaging" and not "functionally, this is the effect of the messaging even if unintended". I think the point I agree with Nuño on here is about the lack of responsiveness to feedback that some EA leadership shows, not about "deliberate constraints".
Perhaps part of this is that, while I did read some of Nuño's other blogs/posts/comments, there's a lot of context which is (at least from my background) missing in this post. For some people it really seems to have captured their experience, and so they can self-provide that context, but it didn't capture mine, so I've had trouble doing that here.
Unflattering things about the EA machine/OpenPhil-Industrial-Complex"; it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to "the EA machine", which seems to further reduce to OpenPhil
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.
But I'm repeating myself, because this is one of the main threads in the post. I have the weird feeling that I'm not being your interlocutor here.
I've updated my original comment, hopefully to make it more fair and reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. I think that the "switcheroo" you mention is problematic, and that a lot of the "EA machinery" should improve its feedback loops, both internally and with the community.
I think at some level we just disagree about what we mean by EA. I agree that thinking of it as a set of ideas might not be helpful for noticing the dynamic you're pointing to, but to me that dynamic isn't EA.[1]
As for not being an interlocutor here, I was originally going to respond on your blog, but on reflection I think I need to read (or re-read) the blog posts you've linked in this post to understand your position better and look at the examples/evidence you provide in more detail. Your post didn't connect with me, but it did for a lot of people, so I think it's on me to go away and try harder to see things from your perspective and do my bit to close that "inferential distance".
I wasn't intentionally trying to misrepresent you or be hostile, and to the extent I did, I apologise. I very much value your perspectives and hope to keep reading them in the future, and I hope the EA community will reflect on them and improve.
To me, EA is not the EA machine, and the EA machine is not OpenPhil (though it may be 90% funded by it). They're obviously connected, but not the same thing. In my edited comment, the "what if Dustin shut down OpenPhil" scenario proves this.
I see that saying I disagree with the EA Forum's "approach to life" rubbed you the wrong way. Changing it seemed low cost, so I've changed it to something wordier.
I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's What is Effective Altruism? How could it be improved? post
I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.