Here I think the EA movement has a very substantial role to play. My hypothesis is thus that the EA message is very fruitful even though it may be philosophically trivial.
It seems correct that, given EA’s goals, its effectiveness should not be measured philosophically; instead it should be assessed practically. If EA fails, it will likely be because it becomes meta-discussion (like this one) and fails to make a difference in the world. (This is not intended as a dig at the present discussion.) My sense is that EA often involves interested parties who are not directly involved in DOING the relevant activities in question, so it is a kind of meta-discussion by its nature. I think this is fine… as an AI guy I notice that practitioners rarely ask the hardest questions about what they are doing, and as a former DARPA guy I saw the same myopia in the defense sphere. So outsiders may well be the right ingredient to add.
Personally I would assess EA on the basis of how much it moves the Overton window, subjectively or objectively assessed, for relevant decision makers: e.g., company owners, voters, activists, researchers, etc. The issues EA takes on are really quite large, and it seems hard to move that needle directly. Still, it seems plausible that EA could end up being transformative by changing the very thinking of humanity. It also seems possible that EA gets wrapped up in its own sub-communities, whose beliefs end up diverging from those of humanity at large and are thus ignored by humanity at large.
When I look at questions around AGI safety, I think the tiny amounts of human effort and money expended by EA can perhaps be counted as a “win”: humanity’s thinking is moving in directions that will affect large-scale policy. (On this particular issue, I fall into the “too little too late” camp.) Still, I have to acknowledge the apparently real impact EA has had in legitimizing this topic in practically impactful ways.