Thanks for replying, Greg. I have indeed upvoted/disagreevoted you here, because I really appreciate Forum voters explaining their reasoning even if I disagree.
Mainly, I think calling Nora's post "substantially negative EV for the future of the world" is tending towards the "galaxy brain" end of EA that puts people off. I can't calculate that, and I think it's much more plausible that it provides the EA Forum with a well-written and knowledgeable perspective from someone who disagrees on alignment difficulty and on whether a pause is the best policy.
It's part of a debate series, so in my opinion it's entirely fine for it to be Nora's perspective. Her post is quite open that she thinks alignment is going well, and I valued it a lot even if I disagreed with specific points in it. I don't think Nora is being intentionally wrong; those are just claims she believes that may turn out to be incorrect.
I recognise that you are a lot more concerned about AI x-risk than I am (not to say I'm not concerned) and a lot more sure about pursuing a moratorium. I suppose I'd caution against presupposing your conclusion is so correct that other views, such as Nora's, don't deserve a hearing in the public sphere. I think that's a really dangerous line of thought to go down, and this is a place where a moral uncertainty framework could mitigate it without necessarily watering down your commitment to preventing AI x-risk.
> It's part of a debate series, so in my opinion it's entirely fine for it to be Nora's perspective. Her post is quite open that she thinks alignment is going well, and I valued it a lot even if I disagreed with specific points in it. I don't think Nora is being intentionally wrong; those are just claims she believes that may turn out to be incorrect.
I agree with this (apart from the "valued it a lot" part; I also think Nora is coming in with a pro-AI bias). I downvoted because I thought the karma total was (and still is) way too high, and high-karma posts and their headlines do, for better or worse, influence the community and how it directs its resources.
> I suppose I'd caution against presupposing your conclusion is so correct that other views, such as Nora's, don't deserve a hearing in the public sphere.
Again, it deserves a hearing. I'm upset by how highly upvoted it is. If it were on, say, 10 karma (on a similar number of votes), I wouldn't have downvoted it any further[1].
[I also upvoted/disagreevoted your comment above :)]
It's currently at 101 karma on 114 votes, which at least marks it out as somewhat controversial (I think <1 karma/vote is generally the sign of a controversial post on the EA Forum). Note for reference that my post from a few months ago, raising the alarm about very-short-term AGI x-risk, is at 66 karma from 100 votes. But I made the mistake of cross-posting it to LW (where people are generally allergic to any kind of political activism), which led to a bunch of people coming over from there and downvoting it here as well.