The strategy of “get a lot of press about our cause area to raise awareness, even if the press gets the details wrong” seems to be the opposite of what EA is all about. Shouldn’t we be using evidence and reason to figure out how to benefit others as much as possible?
When the logic is “I feel very strongly about cause area X, therefore we should do as much about X as possible; anything that helps X is good, anyone excited about X is good, any way of spending money on X is good,” then X could equally well be cancer research, or saving the whales, or donating to the Harvard endowment, or the San Francisco Symphony.
“The strategy of ‘get a lot of press about our cause area to raise awareness, even if the press gets the details wrong’ seems to be the opposite of what EA is all about.” Yes, and I think this is a huge vulnerability for campaigns like this one. Winning the narrative actually matters in the real world.
I have a variety of reservations about the original post, but I don’t think this comment expresses my views well, nor do I find the criticism very compelling, if only because of the obvious distinction that the causes you list at the end of the comment don’t involve all of humanity dying and trillions of future people never existing.
For me (a global health guy), EA is mostly about doing the most good we can with our lives. If, right now, raising awareness of AI danger is what will do the most good, even if some details get lost along the way, then I think it is consistent with EA thinking.
The OP is using evidence and reason to argue this point.