Sharing some planned Forum posts I'm considering, mostly as a commitment device, but I welcome thoughts from others:
I plan to add another post in my "EA EDA" sequence analysing Forum trends in 2024. My pre-registered prediction is that we'll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
I'll also try to do another end-of-year Forum awards post (see here for last year's), though with slightly different categories.
I'm working on an analysis of EA's post-FTX reputation using both quantitative metrics (Forum engagement, Wikipedia traffic) and qualitative evidence (public statements from influential figures inside and outside EA). The preliminary data suggests more serious reputational damage than the recent Pulse survey found. If that difference is meaningful (as opposed to methodological, or just a mistake on my part), I suspect it highlights the gap between public and elite perception.
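For the Wikipedia-traffic metric, here's a minimal sketch of the kind of query involved, using the public Wikimedia Pageviews REST API. The article choice, date range, and User-Agent string are illustrative assumptions, not the actual analysis:

```python
# Sketch: pull daily pageviews for the "Effective altruism" article to
# compare pre- and post-FTX traffic. Uses the Wikimedia Analytics
# Pageviews REST API; dates (YYYYMMDDHH) are illustrative.
import requests

ARTICLE = "Effective_altruism"
URL = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    f"en.wikipedia/all-access/user/{ARTICLE}/daily/2022010100/2024123100"
)

# Wikimedia asks API clients to identify themselves via User-Agent.
resp = requests.get(URL, headers={"User-Agent": "ea-reputation-analysis (example)"})
resp.raise_for_status()

# Each item has a YYYYMMDDHH timestamp and a view count.
views = {item["timestamp"][:8]: item["views"] for item in resp.json()["items"]}
print(f"{len(views)} days retrieved; peak day: {max(views, key=views.get)}")
```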
I recently finished reading former US General Stanley McChrystal's book Team of Teams. Ostensibly it's a book about his command of JSOC in the Iraq War, but it's really about the concept of Auftragstaktik as a method of command, and more than one passage struck me as relevant to Effective Altruism (especially for what "Third Wave" EA might mean). This one is a stretch, though: I'm not sure how interested the Forum would be in this, or whether it would be the right place to post it.
My focus for 2025 will be to work towards developing my position on AI Safety, and to share it through a series of posts in my AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I've come to view "Alignment" primarily as a political philosophy rather than a technical computer-science problem. That said, I could very well be wrong about most or all of these ideas, and I think getting critical discussion from the community would be good both for me and (I hope) for the Forum readership.[2]
As such, I'm considering doing a deep-dive on the Apollo o1 report, given the controversial reception it's had.[3] I think this is the least likely of these, though: I'd want to research it as thoroughly as I could, and time is at a premium with Christmas around the corner, so it's definitely a "stretch goal".
Finally, I don't expect to devote much more time[4] to the "Criticism of EA Criticism" sequence. I often finish those posts well after the initial discourse has died down, and I'm not sure what effect they really have.[5] Furthermore, I've started to notice my own views on a variety of topics diverging from "EA Orthodoxy", so I'm not sure I'd make a good defender. That change may itself warrant a future post, though again I'm not committing to that yet.
Which I will rename
It may be more helpful for those concerned about AI who don't have technical backgrounds, but I'm not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual diversity, but I don't want to claim that too strongly. I'm very uncertain about the future of AI and could easily see myself being convinced to change my mind.
I'm slightly leaning towards the skeptical interpretation myself, as you might have guessed
if any at all, unless an absolutely egregious but widely-shared example comes up
Does Martin Sandbu read the EA Forum, for instance?