To the forum users downvoting this post: why? There are now separate disagree votes available on top-level posts.
Downvoter here. The post is more than just wrong (which alone would only warrant a disagree vote). It’s substantially negative EV for the future of the world. Or, to put it bluntly, it’s significantly[1] increasing the risk that we all get killed in the next few years.
It’s dangerous because it sounds plausible (and indeed it has been upvoted a lot and is currently the second-highest-karma post in this debate series). But it contains a number of unjustified claims (see other comments, e.g. [1], [2], [3], [4]), and it is framed from the perspective of AI x-risk not being a problem (there’s a reason Nora works at Eleuther rather than Conjecture). Right now, the EA community seems to be on the fence about an AGI moratorium (or slowing down AI in general). But there are signs that EAs are warming to the idea. I see this debate series as being high stakes in terms of whether significant EA resources will be directed toward pushing for a moratorium. Such resources could really make the difference between it happening or not (given how few resources have been directed toward it so far).
EDIT: I expected that this comment itself would be downvoted. Why are you downvoting the comment? [There are separate disagree votes available on comments now.]

[1] 1+ basis points?
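For concreteness, here is a minimal sketch of the arithmetic implied by footnote [1], i.e. what a 1-basis-point increase in extinction risk means in expected deaths. The world-population figure is an assumption for illustration, not something stated in the thread.

```python
# Minimal sketch: expected deaths from 1 basis point of added extinction risk.
# ASSUMPTION: world population of ~8 billion (not stated in the thread).

basis_point = 1e-4       # 1 basis point = 0.01% = 0.0001 probability
world_population = 8e9   # assumed ~8 billion people

added_risk = 1 * basis_point
expected_deaths = added_risk * world_population

print(f"Added extinction risk: {added_risk:.2%}")     # 0.01%
print(f"Expected deaths: {expected_deaths:,.0f}")     # 800,000
```

On this (assumed) arithmetic, even a single basis point of added risk corresponds to roughly 800,000 expected deaths, which is the scale behind the “substantially negative EV” framing.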
Thanks for replying, Greg. I have indeed upvoted/disagreevoted you here, because I really appreciate Forum voters explaining their reasoning even when I disagree.
Mainly, I think calling Nora’s post “substantially negative EV for the future of the world” tends towards the ‘galaxy brain’ end of EA that puts people off. I can’t calculate that, and I think it’s much more plausible that the post provides the EA Forum with a well-written and knowledgeable perspective from someone who disagrees on alignment difficulty and on whether a pause is the best policy.
It’s part of a debate series, so in my opinion it’s entirely fine for it to be Nora’s perspective. Her post is quite open that she thinks alignment is going well, and I valued it a lot even if I disagreed with specific points in it. I don’t think Nora’s being intentionally wrong; those are just claims she believes that may turn out to be incorrect.
I recognise that you are a lot more concerned about AI x-risk than I am (not to say I’m not concerned though) and are a lot more sure about pursuing a moratorium. I suppose I’d caution against presupposing your conclusion is so correct that other views, such as Nora’s, don’t deserve a hearing in the public sphere. I think that’s a really dangerous line of thought to go down, and a place where a moral uncertainty framework could mitigate it, without necessarily watering down your commitment to preventing AI x-risk.
It’s part of a debate series, so in my opinion it’s entirely fine for it to be Nora’s perspective. Her post is quite open that she thinks alignment is going well, and I valued it a lot even if I disagreed with specific points in it. I don’t think Nora’s being intentionally wrong; those are just claims she believes that may turn out to be incorrect.
I agree with this (apart from the “valued it a lot” part; I also think Nora is coming in with a pro-AI bias). I downvoted because I thought the karma total was (and still is) way too high, and high-karma posts and their headlines do, for better or worse, influence the community and how it directs its resources.
I suppose I’d caution against presupposing your conclusion is so correct that other views, such as Nora’s, don’t deserve a hearing in the public sphere.
Again, it deserves a hearing. I’m upset by how highly upvoted it is. If it were on, say, 10 karma (from a similar number of votes), I wouldn’t have downvoted it any further[1].
[I also upvoted/disagreevoted your comment above :)]
It’s currently on 101 karma from 114 votes, which at least marks it out as somewhat controversial (I think <1 karma/vote is generally the sign of a controversial post on the EA Forum). Note for reference that my post from a few months ago, raising the alarm about very-short-term AGI x-risk, is on 66 karma from 100 votes. But I made the mistake of cross-posting it to LW (where people are generally allergic to any kind of political activism), which led to a bunch of people coming over from there and downvoting it here as well.
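As a quick sketch of the karma-per-vote heuristic mentioned above, using the figures quoted in this comment (the <1 threshold is the commenter’s rule of thumb, not an official Forum cutoff):

```python
# Karma-per-vote ratios for the two posts mentioned above.
# Heuristic (from the comment): a ratio below 1 suggests a controversial post.

posts = {
    "Nora's debate post": (101, 114),     # (karma, votes)
    "AGI x-risk alarm post": (66, 100),
}

for name, (karma, votes) in posts.items():
    ratio = karma / votes
    verdict = "controversial" if ratio < 1 else "not controversial"
    print(f"{name}: {ratio:.2f} karma/vote -> {verdict}")
```

Both posts fall below the 1 karma/vote line (0.89 and 0.66 respectively), consistent with the “somewhat controversial” reading.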