Folding in Responses here
@thoth hermes (or https://x.com/thoth_iv; if someone who's Twitter friends with them can pass this on, please go ahead).[1] I'm responding to this thread here. I am not saying "that EA is losing the memetic war because of its high epistemic standards", in fact quite the opposite r.e. AI Safety, and maybe it's because of a misunderstanding of how politics works/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. But I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking "why is that? what are we getting wrong?" rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.
@Iyngkarran Kumar: Thanks for sharing your thoughts, but I must say that I disagree with them. I don't think the epistemic standards are working against us by being too polite, quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you had better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc. get increasingly frustrated with the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field were right, the arguments would have been more convincing. I think that, while it's good for Eliezer to say what he thinks accurately, the "bomb the datacenters"[3] piece has probably been harmful to AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to frame it as a claim of "what we believe", as I don't really agree with Eliezer on much.
(r.e. inside vs outside game, see this post from Holly Elmore)
@anormative / @David Mathers: Yeah, it's difficult to pin down the exact hypothesis here, especially given falsified preferences. I'm pretty sure SV is "liberal" overall, but I wouldn't be surprised if the Trump % is greater than in '16 and '20, and it definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, and Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be worth paying attention to.
- - - - - - - - - - - -
Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to do so publicly, then by all means please send a DM :)
I don't have Twitter, and I think it'd be harmful for my epistemic & mental health if I did get an account and became immersed in "The Discourse".
This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr: AI Safety people, engage with 1a3orn more!)
I know that's not what it literally says, but it's what people know it as.
I think you're reading into Twitter way too much.
I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that Twitter is a leading indicator.
I don't disagree with this statement, but also think the original comment is reading into Twitter way too much.
There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't an absence of attempted engagement, and thus you shouldn't update in the direction you seem to be updating here.)
a) r.e. Twitter, almost tautologically true I'm sure. I think it is a bit of signal though, just very noisy. And it's one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though it's obviously very prone to error, and perhaps not worth doing at all.
b) I haven't seen those comments;[1] could you point me to them or to where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article, which I haven't seen good pushback to. Again, I'd welcome being wrong on this.
Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.
Scroll down to see comments.