A lot of longtermists do pay attention to this sort of stuff, they just tend not to post on the EA Forum / LessWrong. I personally heard about the report from many different people after it was published, and also from a couple of people even before it was published (when there was a chance to provide input on it).
In general I expect that for any sufficiently large object-level thing, the discourse on the EA Forum will lag pretty far behind the discourse of people actively working on that thing (whether that discourse is public or not). I read the EA Forum because (1) I’m interested in EA and (2) I’d like to correct misconceptions about AI alignment in EA. I would not read it as a source of articles relevant to AI alignment (though every once in a while they do come up).
Yeah, that makes sense.
It still seems to me that this is a sufficiently important and interesting report that it’d be better if there were a little more mention of it on the Forum, for the sake of “the general longtermist public”, since (a) the Forum is arguably the main, central hub for EA discourse in general, and (b) there is a bunch of other AI governance material here, so having that without things like this report could give a distorted picture.
But it also doesn’t seem like a horrible or shocking error has been committed. And it does make sense that these things would first, and mostly, be discussed in more specialised sub-communities and venues.
I have gotten the general feeling that there is not nearly enough curiosity in this community about the ins and outs of politics, compared with areas like research and the tech world. Reports just aren’t very sexy. Specialization can be good, but there are topics EAs engage with that are probably just as specialized (hyper-specific notions of how an AI might kill us?) yet see much more engagement, and I don’t think that difference is due to impact estimates.
I don’t read much on AI safety, so I could be way off, but this feels pretty important. The US government could snap its fingers and double the amount of funding going into AI safety. That seems very salient for predicting the impact of EA AI safety orgs. Either way, this has made me more interested in reading through https://forum.effectivealtruism.org/tag/ai-governance.