Very excited to read this post. I strongly agree with both the concrete direction and with the importance of making EA more intellectually vibrant.
Then again, I’m rather biased since I made a similar argument a few years back.
Main differences:
I suggested that it might make sense for virtual programs to create a new course rather than just changing the intro fellowship content. My current intuition is that splitting the intro fellowship would likely be the best option for now. Some people will get really annoyed if the course focuses too much on AI, whilst others will get annoyed if the course focuses too much on questions that would likely become redundant in a world where we expect capability advances to continue. My intuition is that things aren’t at the stage where it’d make sense for the intro fellowship to do a complete AGI pivot, so that’s why I’m suggesting a split. Both courses should probably still give participants a taste of the other.
I put more emphasis on the possibility that AI might be useful for addressing global poverty and on how it intersects with animal rights, whilst perhaps Will might see this as too incrementalist (?).
Whilst I also suggested that putting more emphasis on the implications of advanced AI might make EA less intellectually stagnant, I noted that perhaps it'd be better for EA to adopt a yearly theme and simply make the rise of AI the first. I still like the yearly theme idea, but the odds and legibility of AI being really important have increased enough that I'm now much more confident in identifying AI as an area that deserves more than just a yearly theme.
I also agree with the “fuck PR” stance (my words, not Will’s). Especially insofar as the AIS movement has greater pressure to focus on PR, since it’s further towards the pointy end, I think it’s important for the EA movement to use its freedom to provide a counter-balance to this.