Sharing some planned Forum posts I’m considering, mostly as a commitment device, but I welcome thoughts from others:
I plan to add another post in my “EA EDA” sequence analysing Forum trends in 2024. My pre-registered prediction is that we’ll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
I’ll also try to do another end-of-year Forum awards post (see here for last year’s), though with slightly different categories.
I’m working on an analysis of EA’s post-FTX reputation using both quantitative metrics (Forum engagement, Wikipedia traffic) and qualitative evidence (public statements from influential figures inside and outside EA). The preliminary data suggests more serious reputational damage than the recent Pulse survey found. If that gap is meaningful (as opposed to methodological, or just a mistake on my part), I suspect it might highlight the difference between public and elite perception.
I recently finished reading former US General Stanley McChrystal’s book Team of Teams. Ostensibly it’s a book about his command of JSOC in the Iraq War, but it’s really about the concept of Auftragstaktik as a method of command, and more than one passage struck me as relevant to Effective Altruism (especially for what “Third Wave” EA might mean). This one is a stretch, though: I’m not sure how interested the Forum would be in it, or whether it would be the right place to post it.
My focus for 2025 will be to work towards developing my position on AI Safety, and to share it through a new AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of the traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I’ve come to view “Alignment” primarily as a political philosophy rather than a technical field of computer science. I could very well be wrong about most or all of these ideas, though, and getting critical discussion from the community will, I think, be good both for me and (I hope) for the Forum readership.[2]
As such, I’m considering doing a deep-dive on the Apollo o1 report, given the controversial reception it’s had.[3] I think this is the least likely one, though, as I’d want to research it as thoroughly as I could, and time is at a premium with Christmas around the corner, so this is definitely a “stretch goal”.
Finally, I don’t expect to devote much more time[4] to the “Criticism of EA Criticism” sequence. I often finish those posts well after the initial discourse has died down, and I’m not sure what effect they really have.[5] Furthermore, I’ve started to notice my own views on a variety of topics diverging from “EA Orthodoxy”, so I’m not sure I’d make a good defender. That change may itself warrant a future post, though again I’m not committing to one yet.
[1] Which I will rename.
[2] It may be more helpful for those without technical backgrounds who are concerned about AI, but I’m not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual diversity purposes, but I don’t want to lean too heavily on that claim. I’m very uncertain about the future of AI and could easily see myself being convinced to change my mind.
[3] I’m slightly leaning towards the skeptical interpretation myself, as you might have guessed.
[4] If any at all, unless an absolutely egregious but widely-shared example comes up.
[5] Does Martin Sandbu read the EA Forum, for instance?