Folding in Responses here
@thoth hermes (or https://x.com/thoth_iv - if someone can get this to them, if you're Twitter friends, then please go ahead).[1] I'm responding to this thread here. I am not saying "that EA is losing the memetic war because of its high epistemic standards"; in fact, quite the opposite r.e. AI Safety, and maybe because of a misunderstanding of how politics works/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think and also hear more of your thoughts. I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking "why is that? what are we getting wrong?" rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.
@Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree. I don't think the epistemic standards are working against us by being too polite; quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you had better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc. get increasingly frustrated by the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. I think that, while it's good for Eliezer to say what he thinks accurately, the "bomb the datacenters"[3] piece has probably been harmful to AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to say that it's a claim of "what we believe", as I don't really agree with Eliezer on much.
(r.e. inside vs outside game, see this post from Holly Elmore)
@anormative/@David Mathers - Yeah, it's difficult to pin down the exact hypothesis here, especially given falsified preferences. I'm pretty sure SV is "liberal" overall, but I wouldn't be surprised if the Trump % is greater than in 2016 and 2020, and it definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, and Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be worth paying attention to.
- - - - - - - - - - - -
Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to do so publicly, then by all means please send a DM :)
I don't have Twitter and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in "The Discourse".
This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr: AI Safety people, engage with 1a3orn more!)
I know that's not what it literally says, but it's what people know it as.
I think you're reading into twitter way too much.
I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that twitter is a leading indicator.
I don't disagree with this statement, but also think the original comment is reading into twitter way too much.
There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't an absence of attempted engagement, and thus you shouldn't update in the direction you seem to be updating here.)
a) r.e. Twitter - almost tautologically true, I'm sure. I think it is a bit of signal though, just very noisy, and one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all.
b) I haven't seen those comments[1] - could you point me to them or to where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article, which I haven't seen good pushback to. Again, I'd welcome being wrong on this.
Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.