I think it’s very valuable for you to state what the proposition would mean in concrete terms.
On the other hand, I think it’s quite reasonable for posts not to spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.
AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.
If someone has something new to say on that topic, then it’d be great for them to share it, otherwise it makes sense for people to focus on discussing the parts of the topic that have not already been covered as part of the discussions on AI safety.
I think it’s very valuable for you to state what the proposition would mean in concrete terms.
It’s not just concrete terms, it’s the terms we’ve all agreed to vote on for the past week!
On the other hand, I think it’s quite reasonable for posts not to spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.
I think I just strongly disagree on this point. Not every post has to re-argue everything from the ground up, but I think every post does need at least a link or some backing for why it believes that. Are people anchoring on Shulman/Cotra? Metaculus? Cold Takes? General feelings about AI progress? Drawing lines on graphs? Specific claims about the future that make reference only to scaled-up transformer models? These are all very different grounds for the proposition, and they differ in terms of types of AI, timelines, etc.
AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.
If someone has something new to say on that topic, then it’d be great for them to share it, otherwise it makes sense for people to focus on discussing the parts of the topic that have not already been covered as part of the discussions on AI safety.
I again disagree, for two slightly different reasons:
I’m not sure how good the discussion about AI Safety has actually been. How much have these questions and cruxes actually been internalised? Titotal’s excellent series on AI risk scepticism has been under-discussed, in my opinion. There are many anecdotal cases of EAs (especially younger, newer ones) simply accepting the importance of AI causes through deference alone.[1] At the latest EAG London, when I talked about AI risk scepticism, I found surprising amounts of agreement with my positions, even amongst well-known people working in the field of AI risk. There was certainly a perception that the Bay/AI-focused wing of EA wasn’t interested in discussing this at all.
Even if something is consensus, it should still be allowed (even encouraged) to be questioned. If EA wants to spend lots of money on AI Welfare (or even AI Safety), it should be very sure that it is one of the best ways we can impact the world. I’d like to see more explicit red-teaming of this in the community, beyond just Garfinkel on the 80k podcast.
I also met a young uni organiser who was torn about AI risk: they didn’t really seem to be convinced of it, but felt trapped by the pressure to ‘toe the EA line’ on this issue.
What do you think was the best point that Titotal made?
I’m not saying it can’t be questioned. And there wasn’t a rule that you couldn’t discuss it as part of the AI welfare week. That said, what’s wrong with taking a week’s break from the usual discussions that we have here to focus on something else? To take the discussion in new directions? A week is not that long.
I don’t quite know what to respond here.[1] If the aim was to discuss something different, then I guess there should have been a different debate prompt? Or maybe it shouldn’t have been framed as a debate at all? Maybe it should have just prioritised AI Welfare as a topic and left it at that. I’d certainly have less of an issue with the posts that have appeared, and I certainly wouldn’t have been confused by the voting if there hadn’t been a voting slider.[2]
So I probably won’t; we seem to have strong differing intuitions and interpretations of fact, which probably makes communication difficult.
But I liked the voting slider; it was a cool feature!