Considered writing a similar post about the impact of anti-realism in EA, but I’m going to write here instead. In short, I think accepting anti-realism is a bit worse/weirder for ‘EA as it currently exists’ than you suggest:
Impartiality
It broadly seems like the best version of morality available under anti-realism is contractualism. If so, this probably significantly weakens the core EA value of impartiality, in favour of only those with whom you have a ‘contract’. It might rule out spatially distant people; it might rule out temporally distant people (unless you have an ‘asymmetrical contract’ whereby we are obligated to future generations because past generations were obligated to us); and it probably rules out impartiality towards animals and other non-agents/morally incapable beings.
‘Evangelism’
EA generally seems to think that we should put resources into convincing others of our views (bad phrasing, but the gist is there). This seems much less compelling on anti-realism, because your views are literally no more correct than anyone else’s. You could counter that ‘we’ have thought about this more and can therefore help people whose views are less clear. You could counter that other people hold inconsistent views (“Suffering is really bad, but factory farms are fine”), but there’s nothing compellingly bad about inconsistency on an anti-realist view either.
Demandingness
Broadly, turning morality into conditionals means a lot of the ‘driving force’ behind doing good is lost. It’s very easy to say “if I want to do good I should do X”, but then say “wow, X is hard, maybe I don’t really want to do good after all”. I imagine this affects a bunch of things that EA would like people to do, and makes it much harder in practice to cause change if you outright accept that it’s all conditional.
Note: I’m using Draft Amnesty rules for this comment; I reckon that on a few hours’ reflection I might disagree with some or all of these.
On impartiality, I mostly agree, with one caveat: when you read Hobbes and Rousseau, the social contract is a metaphor. The state of nature serves more as a morality tale, akin to the Garden of Eden, than as an objective attempt to describe anything. In Hobbes, for example, ‘acceptance’ of the contract comes from fear and the need for security, coerced by necessity, not from a 21st-century liberal notion of consent as we would understand it today. To the extent that contractualist theories are viable, there are many ways to get what you want out of them.
The rest of what you say is precisely the point I am attempting to get at. The obligatory driving force we associate with morality, the kind that legitimises policies and states and gives purpose to individual lives, seems to be irrevocably lost, unfortunately. It is like the old story:
A traveller comes across three bricklayers working on a construction site and asks each of them what they are doing.
The first bricklayer, focused on the immediate task, replies that he is laying bricks.
The second, seeing a larger purpose, responds that he is building a wall.
The third, with the grandest vision, proclaims that he is building the house of the Lord.
That latter kind of purpose is hard to attain under these assumptions. And if you could bottle it, it would be a fruitful resource indeed for EA.