I think it's very valuable for you to state what the proposition would mean in concrete terms.
It's not just concrete terms, it's the terms we've all agreed to vote on for the past week!
On the other hand, I think it's quite reasonable for posts not to spend time engaging with the question of whether "there will be vast numbers of AIs that are smarter than us".
I think I just strongly disagree on this point. Not every post has to re-argue everything from the ground up, but I think every post does need at least a link or some backing for why it believes that. Are people anchoring on Shulman/Cotra? Metaculus? Cold Takes? General feelings about AI progress? Drawing lines on graphs? Specific claims about the future that make reference only to scaled-up transformer models? These are all very different grounds for the proposition, and differ in terms of types of AI, timelines, etc.
AI safety is already one of the main cause areas here and there's been plenty of discussion about these kinds of points already.
If someone has something new to say on that topic, then it'd be great for them to share it; otherwise it makes sense for people to focus on discussing the parts of the topic that have not already been covered as part of the discussions on AI safety.
I again disagree, for two slightly different reasons:
I'm not sure how good the discussion has been about AI Safety. How much have these questions and cruxes actually been internalised? Titotal's excellent series on AI risk skepticism has been under-discussed, in my opinion. There are many anecdotal cases of EAs (especially younger, newer ones) simply accepting the importance of AI causes through deference alone.[1] At the latest EAG London, when I talked about AI risk skepticism I found surprising amounts of agreement with my positions even amongst well-known people working in the field of AI risk. There was certainly an impression that the Bay/AI-focused wing of EA weren't interested in discussing this at all.
Even if something is consensus, it should still be allowed (even encouraged) to be questioned. If EA wants to spend lots of money on AI Welfare (or even AI Safety), it should be very sure that it is one of the best ways we can impact the world. I'd like to see more explicit red-teaming of this in the community, beyond just Garfinkel on the 80k podcast.
I also met a young uni organiser who was torn about AI risk, since they didn't really seem to be convinced of it but felt somewhat trapped by the pressure they felt to "toe the EA line" on this issue.
What do you think was the best point that Titotal made?
I'm not saying it can't be questioned. And there wasn't a rule that you couldn't discuss it as part of the AI welfare week. That said, what's wrong with taking a week's break from the usual discussions that we have here to focus on something else? To take the discussion in new directions? A week is not that long.
I don't quite know what to respond here.[1] If the aim was to discuss something differently, then I guess there should have been a different debate prompt? Or maybe it shouldn't have been framed as a debate at all? Maybe it should have just prioritised AI Welfare as a topic and left it at that. I'd certainly have less of an issue with the posts that have happened, and certainly wouldn't have been confused by the voting if there wasn't a voting slider.[2]
So I probably won't; we seem to have strongly differing intuitions and interpretations of fact, which probably makes communication difficult.
But I liked the voting slider, it was a cool feature!