It still seems to me like this is a sufficiently important and interesting report that it'd be better if there was a little more mention of it on the Forum, for the sake of "the general longtermist public", since (a) the Forum seems arguably the main, central hub for EA discourse in general, and (b) there is a bunch of other AI governance type stuff here, so having that without things like this report could give a distorted picture.
But it also doesn't seem like a horrible or shocking error has been committed. And it does make sense that these things would be first, and mostly, discussed in more specialised sub-communities and venues.
I have gotten the general feeling that there is not nearly enough curiosity in this community about the ins and outs of politics compared to stuff like the research and tech world. Reports just aren't very sexy. Specialization can be good, but there are topics EAs engage with that are probably just as specialized (hyper-specific scenarios of how an AI might kill us?) that see much more engagement, and I don't think the difference is due to impact estimates.
I don't read much on AI safety, so I could be way off, but this feels pretty important. The US government could snap its fingers and double the amount of funding going into AI safety. That seems very salient for predicting the impact of EA AI safety orgs. Either way, this has made me more interested in reading through https://forum.effectivealtruism.org/tag/ai-governance.
Yeah, that makes sense.