On Stepping away from the Forum and "EA"
I'm going to stop posting on the Forum for the foreseeable future.[1] I've learned a lot from reading the Forum as well as participating in it. I hope that other users have learned something from my contributions, even if it's just a sharper understanding of where they're right and I'm wrong! I'm particularly proud of What's in a GWWC Pin? and 5 Historical Case Studies for an EA in Decline.
I'm not deleting my account, so if you want to get in touch, the best way is probably to DM me here with an alternative way to stay in contact. I'm happy to discuss my reasons for leaving in more detail,[2] or to find ways of collaborating on future projects.[3]
I don't have the data to hand, but I'd guess I've been among the more active Forum users in recent years, so why such a seemingly sudden change? The reason is less about the Forum itself and more about my decision to orient away from "EA" in my life, and being consistent there means stepping away from the Forum. I could write a whole post on my reasons for no longer feeling "EA", but some paragraph summaries are:
I don't really know what "EA" means anymore: There's always been a tension between the "philosophy" and "movement" framings of EA, so in practice it's been used as a fuzzy label for sets of ideas or people. For the past ~2.5 years, "EA" seems to have been defined largely by its enemies, though I know CEA is seeking to change this. But I think this lack of clarity is actually a sign that EA doesn't really have a coherent identity at the moment. It doesn't seem right to strongly associate with something I can't clearly define.
I have increasing differences with "philosophical" EA (as I understand it): This difference has been growing recently, and the full list is fairly long, so I'll only include a few things. I think viewing morality as about "the best" instead of "doing right" is a mistake, especially if the former leads to viewing morality as global/perspectiveless maximisation.[4] I'm a virtue ethicist and not a consequentialist/utilitarian. I think my special relationships to others in my life create important and partial moral obligations/duties. I don't think Expected Value is the only or best way to make decisions for individuals or institutions. I think cluelessness/Knightian uncertainty arguments defeat most of the cases for longtermism in practice. These differences seem significant enough that I can't really claim to be EA philosophically unless "EA" is drawn arbitrarily and trivially wide.
I don't feel connected to the "movement" side of EA either: I'm not personally or socially connected to much of EA. My friends and close personal relationships are not related to EA. While I'm more professionally involved with AI Safety than before, I also make clear that my positions are fairly unorthodox in that space too. While I've done my own bit of defending EA on and off the Forum, I no longer feel the identification or need to. So I'm starting to leave the various online EA spaces I am a part of one by one.[5] Given my limited connection to EA personally or philosophically, it seems odd to be part of the movement in ways that imply I'm giving it more support than I actually do.
I think there are better ways for me to spend my time than engaging with EA: In the last ~6 months, engaging with EA hasn't made me happy. This has often come from seeing criticisms of EA left unanswered, and I don't want to be associated with something society views negatively if I don't actually support it! I also think there are more interesting and fulfilling pathways for my life to pursue, which are either orthogonal to EA or outside the "orthodoxy" of cause areas/interventions. Finally, I just think I spend too much time reading the Forum and EA Twitter, and going cold turkey would be a good way to reallocate my attention. I don't think that engaging with EA is the "right" thing for me, either in terms of doing right or of personal flourishing.
Overall Takeaway: I never really claimed the label "EA" for myself; it was never the basis of my identity, and I've never had the "taking ideas seriously" genie take over my life. But given my differences, I want to put clearer distance between myself and EA going forward.
Anyway, that turned out to be a fair bit longer than I intended! If you made it to the end, then I wish you all the best in your future endeavours :)[6]
Definitely "for now", but possibly for longer. I haven't quite decided yet.
Though see below first
See the end of this comment for some ideas
Indeed, from a certain point of view, EA could be seen as a philosophical version of Bostrom's paperclip maximiser. If there is a set definition of the good and you want to maximise it, then the right thing to do is to paperclip the universe with your definition. My core commitment to pluralism holds that this is wrong, and it makes me deeply suspicious of any philosophy which allows this, or directionally points towards it.
I do, however, intend to continue with the GWWC Pledge for the foreseeable future
I also wish you the best if you didn't
Disappointed to hear this, but makes a lot of sense. Great meeting you at EAG London, and all the best with your future endeavours!
I've enjoyed reading your writing over the past few years, and I'll miss you. Good luck with whatever you will be focusing on!
Is there a good place to succinctly read about this: "I think cluelessness/Knightian uncertainty arguments defeat most of the cases for longtermism in practice"? I don't see (what I understand to be) cluelessness as a knockdown at all, so I'm wondering if we understand this principle differently, or if perhaps more is resting here on Knightian uncertainty, which I'm unfamiliar with.
Unfortunately not that "succinct" :) but I argue here that cluelessness-ish arguments defeat the impartial altruistic case for any intervention, longtermist or not. Tl;dr: our estimates of the sign of our net long-term impact are arbitrary. (Building on Mogensen (2021).)
(It seems maybe defensible to argue something like: "We can at least non-arbitrarily estimate net near-term effects. Whereas we're clueless about the sign of any particular (non-"gerrymandered") long-term effect (or, there's something qualitatively worse about the reasons for our beliefs about such effects). So we have more reason to do interventions with the best near-term effects." This post gives the strongest case for that I'm aware of. I'm not personally convinced, but think it's worth investigating further.)
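To make the "arbitrary sign" point concrete, here's a toy Python sketch (my own illustration, with made-up numbers; it isn't from the linked post): when a long-term effect dominates the calculation but no evidence pins down the probability p that it's positive, tiny and equally defensible choices of p flip the sign of the expected-value estimate.

```python
# Toy model: the near-term effect is well evidenced; the long-term effect
# is huge but of unknown sign. All magnitudes are invented for illustration.
near_term_benefit = 1.0        # well-evidenced near-term effect (arbitrary units)
long_term_magnitude = 1000.0   # long-term effect dwarfs the near-term one

# p = subjective probability that the long-term effect is positive.
# Under Knightian uncertainty, nothing constrains which p we pick.
for p in (0.499, 0.500, 0.501):
    ev = near_term_benefit + long_term_magnitude * (p - (1 - p))
    print(f"p = {p:.3f} -> expected value = {ev:+.1f}")

# Prints EVs of -1.0, +1.0, and +3.0: a 0.002 shift in an unconstrained
# prior flips the verdict, so the estimate's sign tracks the prior, not evidence.
```

Two readers can pick different values of p with equal justification and reach opposite conclusions, which is one way of cashing out the claim that the sign of the estimate is arbitrary.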
The argument I've seen is the opposite: that considering cluelessness favors longtermism instead of undermining it ("therefore consider donating to LTFF", Greaves tentatively suggests).
I am, however, more sympathetic to Michael's skepticism, in that it's often hard for me in practice to tell longtermist interventions apart from PlayPump (other than funding d/acc-flavored fieldbuilding, maybe), but maybe JWS's reasoning is different.
Also, "cluelessness" seems underspecified in Forum discussions (cf. this discussion thread), so I wouldn't be surprised if you and JWS are talking about different things.
Curious which critiques you saw that went unresponded to.