In the dis-spirit of this article I'm going to take the opposite tack and explore nagging doubts that I have about this line of argument.
To be honest, I'm starting to get more and more sceptical/annoyed about this behaviour (for want of a better word) in the effective altruism community. I'm certainly not the first to voice these concerns, with both Matthew Yglesias and Scott Alexander noting how weird it is (if someone tells you that your level of seeking criticism gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No. And I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank. The most blatant example I've come across is infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells that that way madness lies.
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just like maybe you shouldn't scream and hit things based on the mistaken belief that it'll help to get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, because of a whole bunch of reasons that totally make sense.
Thankfully, most people in the effective altruism community do this. They just get on with their lives and jobs, and I think that's a good thing. There will always be some individuals who will go down these intellectual rabbit holes, and they won't need to be encouraged to do so. Let them go for gold. But at least in my personal view, the wider community doesn't need to be encouraged to do this.
On a similarly simple intellectual level, I see "people should not suppress doubts about the critical shift in direction that EA has taken over the past 10 years" as a no-brainer. I do not see it as intellectual wank in an environment where every other person assumes p(doom) approaches 1 and timelines get shorter by a year every time you blink. EA may feature criticism circle-jerking overall, but I think this kind of criticism is actually important and not actually super well received (I perceive a frosty response whenever Matthew Barnett criticizes AI doomerism).