EA people might have written thousands of posts on the reasons behind the dangers of AI, but these posts were made for the community, assuming people already believed in Bostrom’s work etc. These posts use a certain jargon and assume many things that a mainstream audience doesn’t assume at all. And whether to do advocacy about AI is still very much debated and far from a given (see the recent post on advocacy on the forum, I can link it if people are confused). Also, the way OpenPhil allocates money and its clear concentration on GCRs to the detriment of more neartermist, global health causes is a fact, and its influence on Congress is also a fact. Facts can be skewed towards a certain way of thinking, true, but they’re there.
Despite the feeling some might have that most in EA consider existential risks related to AI as THE most pressing issue, I’m not sure how true that is. The forum is a nice smokescreen, given that the people posting and commenting on these posts are always the same, and the Rethink Priorities survey is NOT representative: I ran the numbers for my own EA group against the trends RP reported for it, and they don’t align; it’s clearly skewed towards a minority that is always on the forum and towards AI aficionados.
So we can be mad all we want and rant that these journalists are dense (I don’t deny Politico’s bad coverage of EA, btw, it’s just not the only outlet drawing these conclusions), but as long as we don’t take advocacy seriously and try to get these arguments out there, nothing better will happen. So let’s take these articles as an opportunity to do better, instead of taking our arguments for granted. There is work to do on this both inside and outside the community.
And let me anticipate the downvotes these opinions usually get me (quite bad, btw, for a community that is supposed to seek truth and not just give in to the human impulse of ’I don’t like it, I’ll downvote it without arguing’): if you disagree on these specific points, let me know why. Be constructive. It’s also an issue in itself: imagine a journalist creating an account to better understand the EA community and comment on posts, who gets downvoted every time they dare to raise negative opinions or ask uncomfortable questions about AI safety. Well, so much for our ability to be constructive.
Can you give some examples here? What are some uncomfortable questions about AI safety (that a journalist might ask)?
Sure! So far there are arguments showing how general AI, or even superintelligence, could be created, but the timelines vary immensely from researcher to researcher, and there is no data-based evidence that would justify pouring all this money into it. EA is supposed to be evidence-based, and yet all we seem to have are arguments, not evidence. I understand this is the nature of these things, but it’s striking to see how efficient EA tries to be when it comes to measuring GiveWell’s impact versus the impact created by pouring money into AI safety for longtermist causes. It feels like impact evaluation when it comes to AI safety is non-existent at worst and not good at best (see RP’s useful post about impact-based evidence regarding x-risks).