As always, my Forum-posting ‘reach’ exceeds my time-available ‘grasp’, so here are some general ideas I have floating around in various states of scribbles, notes, Google Doc drafts etc. Please don’t view them as in any way finalised, or as a promise to write them up fully:
- AI Risk from a Moderate’s Perspective: Over the last year my AI risk vibe[1] has gone down, probably to below that of many other EAs who work in this area. However, I’m also more concerned about it than many other people (especially people who think most of EA is good but AI risk is bonkers). I think my intuitions and beliefs make sense, but I’d like to write them down fully, answer potential criticisms, and identify cruxes at some point.
- Who holds EA’s Mandate of Heaven: Trying to look at the post-FTX landscape of EA, especially amongst the leadership, through a ‘Mandate of Heaven’ lens. Essentially, various parts of EA’s leadership have lost the ‘right to be deferred to’,[2] but while some of that previous leadership, and the community’s emphasis on it, has taken a step back, nothing has stepped in to fill the legitimacy vacuum. This post would look at potential candidates, and at whether the movement needs something like this at all.
- A Pluralist Vision for ‘Third Wave’ EA: Ben’s post has been in my mind for a long time. I don’t at all claim to have the full answer to this, but I think some form of pluralism that counteracts latent totalism in EA may be a good thing. I think I’d personally tie this to proposals for EA democratisation, but I don’t want to make that a load-bearing part of the piece.
- An Ideological Genealogy of e/acc: I’ve watched the rise of e/acc with a mixture of bewilderment, amusement, and alarm over the last year-and-a-half. It seems like a new ideology for a new age, but as Keynes said “Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.” I have some academic scribblers in mind, so it would be interesting to see if anything coherent comes out of it.
- EA EDA, Full 2023 Edition: Thanks to cribbing the work of other Forum users, I have metadata for (almost) every EA Forum post and comment published in 2023, along with tag data. I’ve mostly got it cleaned up, but I need to structure it into a readable product that tells us something interesting about the state of EA in 2023, rather than just chucking lots of graphs at the reader (a rough sketch of the kind of summary I mean is below this list).
- Kicking the Tires on ‘Status’: The LessWrong community and broader rationalist diaspora use the term ‘status’ a lot to explain the world (this activity is low/high status, this person is doing this activity to gain high status, etc.), and yet I’ve almost never seen anyone define what this actually means, or compare it to alternative explanations. I think one of the primary LW posts grounds it in a book about improv theatre? So I might do a deep dive on it, taking an eliminativist/deflationary stance on status and proposing a more idea-focused paradigm for understanding social behaviour.[3]
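For the EA EDA idea, here is a minimal sketch of the kind of tag-level summary I have in mind, purely illustrative: the file name and column names (post_id, posted_at, karma, tags) are placeholders rather than the actual schema of my dataset.

```python
import pandas as pd

# Cleaned-up 2023 post metadata (file name and columns are placeholders).
posts = pd.read_csv("forum_posts_2023.csv", parse_dates=["posted_at"])

# One row per (post, tag) pair, assuming tags are stored as comma-separated strings.
tagged = posts.assign(tag=posts["tags"].str.split(",")).explode("tag")

# Posts and total karma per tag per month: a compact table rather than a pile of graphs.
summary = (
    tagged.groupby([tagged["posted_at"].dt.to_period("M"), "tag"])
          .agg(posts=("post_id", "nunique"), total_karma=("karma", "sum"))
          .sort_values("posts", ascending=False)
)
print(summary.head(20))
```

Something along those lines, grouped and then written up, is closer to the ‘readable product’ I’m aiming for than the raw graphs.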
Finally, updates to the Criticism of EA Criticism sequence will continue intermittently so long as bad criticisms continue or until my will finally breaks.
[1] I don’t like the term p(doom).
[2] If such a right ever existed in the first place; maybe ‘the right to be trusted’ or ‘the benefit of the doubt’ would be more accurate.
[3] That is, instead of people acting in such-and-such a way so as to regulate/improve their social status, they act in the way they do because they believe it is the right thing to do, based on their empirical and moral beliefs. Since ‘status’ isn’t really needed to explain the latter imho, we can essentially discard it.