Important Update: I’ve made some changes to this comment given the feedback by Nuño & Arepo.[1] I was originally using strikethroughs, but this seemed to make it very hard to read, so I’ve instead edited it inline. The comment is therefore now fairly different from the original one (though I think that’s for the better).
On reflection, I think that Nuño and I are very different people, with different backgrounds, experiences with EA, and approaches to communication. This leads to a large ‘inferential distance’ between us. For example:
Nuño wrote: “I disagree with the EA Forum’s approach to life”
They meant: “I have a large number of disagreements with the EA Forum’s approach to moderation, curation, aesthetics or cost-effectiveness”
I interpreted: “I have significant issues with the kind of people who run and use the EA Forum”
They might mean:[2] “OpenPhil’s messaging has been inconsistent with their actions, and they’re squandering the potential of the rank and file EA membership”
I interpreted: “OpenPhil staff are deliberately and knowingly deceptive in their communications when making non-maximalist/worldview diversification arguments, and are intentionally using them to maintain control of the community in a malicious way”
While some of my interpretations were obviously not what Nuño intended to communicate, I think this is partly due to Nuño’s bellicose framings (his words; see footnote 3 of Unflattering aspects of Effective Altruism), which were unhelpful for productive communication on a charged issue. I still maintain that EA is primarily a set of ideas,[3] not institutions, and it’s important to make this distinction when criticising EA organisations (or ‘The EA Machine’). In retrospect, I wonder if the post should have been titled something like “Unflattering Aspects of how EA is structured”, which I’d agree with in many respects.
I wasn’t sure what to make of this, personally. I appreciate a valued member of the community offering criticism of the establishment/orthodoxy, but some of this just seemed… off to me. I’ve weakly downvoted, and I’ll try to explain some of the reasons why below:
Nuño’s criticism of the EA Forum seems to be:
i) not worth the cost (which, fair),
ii) not as lean a web interface as it could be (which is more personal preference than a reason to step away from all EA)
and iii) overly heavy-handed on personal moderation.
But the examples Nuño gives in the Brief thoughts on CEA’s stewardship of the EA Forum post, especially Sabs, seem to me to be people being incredibly rude and not contributing to the Forum in helpful or especially truth-seeking ways. Nuño does provide some explanation (see the links Arepo provides), but not in the ‘Unflattering Aspects’ post, and I think that causes confusion. Even in Nuño’s comment on another chain, I don’t understand summarising their disagreement as “I disagree with the EA Forum’s approach to life”. Nuño has since changed that phrasing, and I think the new wording is better.
Still, it seemed like a very odd turn of phrase to use initially, and one that was unproductive for getting their point across, which is one of my other main concerns about the post. Some of the language in Unflattering aspects of Effective Altruism appeared to me as hostile and not providing much context for readers. For example:[4] I don’t think the Forum is “now more of a vehicle for pushing ideas CEA wants you to know about”, and I don’t think OpenPhil uses “worries about the dangers of maximization to constrain the rank and file in a hypocritical way”. I don’t think that one needs to “pledge allegiance to the EA machine” in order to be considered an EA. It’s just not the EA I’ve been involved with, and I’m definitely not part of the ‘inner circle’ and have no special access to OpenPhil’s attention or money. I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant’s What is Effective Altruism? How could it be improved? post.
There are some parts of the essay where Nuño and I very much agree. I think the points about the leadership not making itself accountable to the community are very valid, and a key part of what Third Wave Effective Altruism should be. I think depicting it as a “leadership without consent” is pointing at something real, and in the comments on Nuño’s blog Austin Chen is saying a lot that makes sense. I agree with Nuño that the ‘OpenPhil switcheroo’ phenomenon is concerning and bad when it happens. Maybe this is just a semantic difference in what Nuño and I mean by ‘EA’, but to me EA is more than OpenPhil. If tomorrow Dustin decided to wind down OpenPhil in its entirety, I don’t think the arguments in Famine, Affluence, and Morality would lose their force, or that factory farming would become any less of a moral catastrophe, or that we should stop acting prudently on our duties toward future generations.
Furthermore, while criticising OpenPhil and EA leadership, Nuño appears to claim that these organisations need to do more ‘unconstrained’ consequentialist reasoning,[5] whereas my intuition is that many in the community see the failure of SBF/FTX as a case where that form of unconstrained consequentialism went disastrously wrong. While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. This is probably the weakest concern I have with the piece though, especially on reflection.
I think it’s likely that Nuño means something very different with this phrasing than I do, but I think the mix of ambiguity/hostility can lead these extracts to be read in this way.
Not to say that Nuño condones SBF or his actions in any way. I think this is just another case of where someone’s choice to get off the ‘train to crazy town’ can be viewed as another’s ‘cop-out’.
Hey, thanks for the comment. Indeed something I was worried about with the later post was whether I was a bit unhinged (but the converse is, am I afraid to point out dynamics that I think are correct?). I dealt with this by first asking friends for feedback, then posting it but distributing it not very widely, then once I got some comments (some of which private) saying that this also corresponded to other people’s impressions, I decided to share it more widely.
The examples Nuño gives...
You are picking on the weakest example. The strongest one might be Sapphire. A more recent one might have been John Halstead, who had a bad day, and despite his longstanding contributions to the community was treated with very little empathy and left the forum.
Furthermore, while criticising OpenPhil/EA ‘leadership’, Nuño doesn’t make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don’t think we’ve gone far enough in that direction.[1] See in Alternate Visions of EA, the claim “if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough”. In a comment reply in Why are we not harder, better, faster, stronger?, they say “There is a perspective from which having a few SBFs is a healthy sign.” While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. Be aware of this if you’re upvoting.
I think this paragraph misrepresents me:
I don’t claim that “if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough”. I explore different ways EA could look, and then write “From the creators of “if you haven’t missed a flight, you are spending too much time in airports” comes “if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough.””. The first is a bold assertion, the second one is reasonable to present in the context of exploring varied possibilities.
The full context for the other quote is “There is a perspective from which having a few SBFs is a healthy sign. Sure, you would rather have zero, but the extreme in which one of your members scams billions seems better than one in which your followers are bogged down writing boring plays, or never organize to do meaningful action. I’m not actually sure that I do agree with this perspective, but I think there is something to it.” (bold mine). Another way to word this less provocatively is: even with SBF, I think the EA community has had positive impact.
In general, I think picking quotes out of context just seems deeply hostile.
“First, their priorities are different from mine” (so what?)
So if leadership has priorities different from the rest of the movement, the rest of the movement should be more reluctant to follow. But this is for people to decide individually, I think.
“the EA machine has been making some weird and mediocre moves” (would like some actual examples on the object-level of this)
but without evidence to back this up
You can see some examples in section 5.
A view towards maximisation above all,
I think the strongest version of my current beliefs is that quantification is underdeployed on the margin and that it can unearth Pareto improvements. This is joined with an impression that we should generally be much more ambitious. This doesn’t require me to believe that more maximization will always be good, rather that, at the current margin, more ambition is.
Really appreciate your reply Nuño, and apologies if I’ve misrepresented you, or if I’m coming across as overly hostile. I’ll edit my original comment given your & Arepo’s responses. Part of why I posted my comment (even though I was nervous to) is that you’re a highly valued member of the community,[1] and your criticisms are listened to and carry weight. I am/was just trying to do my part to kick the tires, and distinguish criticisms I think are valid/supported from those which are less so.
On the object level claims, I’m going to come over to your home turf (blog) and discuss it there, given you expressed a preference for it! Though if you don’t think it’ll be valuable for you, then by all means feel free to not engage. I think there are actually lots of points where we agree (at least directionally), so I hope it may be productive, or at least useful for you if I can provide good/constructive criticism.
I think much of this criticism is off. There are things I would disagree with Nuno on, but most of what you’re highlighting doesn’t seem to fairly represent his actual concerns.
Nuño never argues for why the comments they link to shouldn’t be moderated
He does. Also, I suspect his main concern is with people being banned rather than having their posts moderated.
Nuño doesn’t make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don’t think we’ve gone far enough in that direction. See in Alternate Visions of EA, the claim “if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough”. In a comment reply in Why are we not harder, better, faster, stronger?, they say “There is a perspective from which having a few SBFs is a healthy sign.”
I don’t know what Nuno actually believes, but he carefully couches both of these as hypotheticals, so I don’t think you should cite them as things he believes. (in the same section, he hypothetically imagines ‘What if EA goes (¿continues to go?) in the direction of being a belief that is like an attire, without further consequences. People sip their wine and participate in the petty drama, but they don’t act differently.’ - which I don’t think he advocates).
Also, you’re equivocating between the claim that EA is too naive (which he certainly seems to believe), the claim that it is too consequentialist (which I suspect but don’t know he believes), the claim that it ignores common sense (which I imagine he believes), what he’s actually said he believes (that he thinks it should optimise more vigorously), and the hypothetical you quote.
“the EA machine has been making some weird and mediocre moves” (would like some actual examples on the object-level of this)
I’m not sure what you want here—his blog is full of criticisms of EA organisations, including those linked in the OP.
“First, their priorities are different from mine” (so what?)
He literally links to why he thinks their priorities are bad in the same sentence.
a conflation of Effective Altruism with Open Philanthropy with a disregarding of things that fall in the former but not the latter
I don’t think it’s reasonable to assert that he conflates them in a post that estimates the degree to which OP money dominates the EA sphere, that includes the header ‘That the structural organization of the movement is distinct from the philosophy’, and that states ‘I think it makes sense for the rank and file EAs to more often do something different from EA™’. I read his criticism as being precisely that EA, the non-OP part of the movement, has a lot of potential value, which is being curtailed by relying too much on OP.
A view towards maximisation above all, and paranoia that concerns about unconstrained maximisation are examples of “EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way” but without evidence to back this up as an actual strategy.
I think you’re misrepresenting the exact sentence you quote, which contains the modifier ‘to constrain the rank and file in a hypocritical way’. I don’t know how in favour of maximisation Nuno is, but what he actually writes about in that section is the ways OP has pursued maximising strategies of their own that don’t seem to respect the concerns they profess.
You don’t have to agree with him on any of these points, but in general I don’t think he’s saying what you think he’s saying.
Hey Arepo, thanks for the comment. I wasn’t trying to deliberately misrepresent Nuño, but I may have made inaccurate inferences, and I’m going to make some edits to clear up confusion I might have introduced. Some quick points of note:
On the comments/Nuño’s stance, I only looked at the direct comments (and parent comments where I could) in the ones he linked to in the ‘EA Forum Stewardship’ post, so I appreciate the added context. Even having read that, though, I can’t really square a disagreement about moderation policy with “I disagree with the EA Forum’s approach to life”—the latter seems so out-of-distribution to me as a response to the former.
On the not-going-far-enough paragraph, I think that’s a spelling/grammar mistake on my part—I think we’re actually agreeing? Many people’s takeaway from SBF/FTX is the danger of reckless/narrow optimisation, but to me it seems that Nuño is pointing in the other direction. Noted on the hypotheticals; I’ll edit those to make that clearer, but I think they’re definitely pointing to an actually-held position that may be more moderate, but is still directionally opposite to many who would criticise ‘institutional EA’ post-FTX.
On the EA machine/OpenPhil conflation—I somewhat get your take here, but on the other hand the post isn’t titled “Unflattering things about the EA machine/OpenPhil-Industrial-Complex”, it’s titled “Unflattering things about EA”. Since EA is, to me, a set of beliefs I think are good, it reads as an attack on the whole thing, which is then reduced to ‘the EA machine’, which seems to further reduce to OpenPhil. I think footnote 4 also scans as saying that the definition of an EA is someone who ‘pledge[s] allegiance to the EA machine’—which doesn’t seem right to me either. I think, again, that Michael’s post raised similar concerns about OpenPhil dominance in a clearer way.
Finally, on the maximisation-above-all point, I’m not really sure about this one. I think the ‘constrain the rank and file’ claim is just wrong? Maybe because I’m reading it as “this is OpenPhil’s intentional strategy with this messaging” and not “functionally, this is the effect of the messaging even if unintended”. The point where I agree with Nuño here is about the lack of responsiveness to feedback some EA leadership shows, not about ‘deliberate constraints’.
Perhaps part of this is that, while I did read some of Nuño’s other blogs/posts/comments, there’s a lot of context which is (at least from my background) missing in this post. For some people it really seems to have captured their experience and so they can self-provide that context, but I don’t, so I’ve had trouble doing that here.
Unflattering things about the EA machine/OpenPhil-Industrial-Complex’, it’s titled “Unflattering things about EA”. Since EA is, to me, a set of beliefs I think are good, then it reads as an attack on the whole thing which is then reduced to ‘the EA machine’, which seems to further reduce to OpenPhil
I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it’s like: oh well, I guess I’m now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.
But I’m repeating myself, because this is one of the main threads in the post. I have the weird feeling that I’m not being your interlocutor here.
I’ve updated my original comment, hopefully to make it more fair and reflective of the feedback you and Arepo gave.
I think we actually agree in lots of ways. I think that the ‘switcheroo’ you mention is problematic, and a lot of the ‘EA machinery’ should get better at improving its feedback loops both internally and with the community.
I think at some level we just disagree about what we mean by ‘EA’. I agree that thinking of it as a set of ideas might not be helpful for noticing the dynamic you’re pointing to, but to me that dynamic isn’t EA.[1]
As for not being an interlocutor here, I was originally going to respond on your blog, but on reflection I think I need to read (or re-read) the blog posts you’ve linked in this post to understand your position better and look at the examples/evidence you provide in more detail. Your post didn’t connect with me, but it did for a lot of people, so I think it’s on me to go away and try harder to see things from your perspective and do my bit to close that ‘inferential distance’.
I wasn’t intentionally trying to misrepresent you or be hostile, and to the extent I did, I apologise. I very much value your perspectives, I hope to keep reading them in the future, and I hope the EA community reflects on them and improves.
To me EA is not the EA machine, and the EA machine is not OpenPhil (though it may be 90% funded by it). They’re obviously connected, but not the same thing. In my edited comment, the ‘what if Dustin shut down OpenPhil’ scenario illustrates this.
I see saying that I disagree with the EA Forum’s “approach to life” rubbed you the wrong way. It seemed low cost, so I’ve changed it to something more wordy.
I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant’s What is Effective Altruism? How could it be improved? post
I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.