Thanks for the pointer to “independence of irrelevant alternatives.”
I’m curious to know how you think about “some normative weight.” I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?
Link to discussion on Facebook: https://www.facebook.com/groups/eahangout/permalink/2845485492205023/
I think this math is interesting, and I appreciate the good pedagogy here. But I don’t think this type of reasoning is relevant to my effective altruism (defined as “figuring out how to do the most good”). In particular, I disagree that this is an “argument for utilitarianism” in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.
(I really do mean “me” and “my” in that sentence; other people may find that this argument can indeed convince them of this, and that’s a fact about them I have no quarrel with. I’m posting this because I just want to put a signpost saying “some people in EA believe this,” in case others feel the same way.)
Following Richard Ngo’s post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don’t think that human moral preferences can be made free of contradiction. Although I don’t like contradictions and I don’t want to have them, I also don’t like things like the repugnant conclusion, and I’m not sure why the distaste towards contradictions should be the one that always triumphs.
Since VNM-rationality is based on transitive preferences, and I disagree that human preferences can or “should” be transitive, I interpret things like this as having no normative weight.
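To spell out why transitivity is load-bearing here (my own sketch, not part of the original discussion): a preference cycle admits no real-valued utility representation at all, so the VNM conclusion “maximize expected utility” cannot even be stated for such preferences.

```latex
% Suppose preferences over outcomes A, B, C form a cycle:
%   A \succ B, \quad B \succ C, \quad C \succ A.
% A utility representation is a function u with
%   x \succ y \iff u(x) > u(y).
% Applied to the cycle, this would require
%   u(A) > u(B) > u(C) > u(A),
% hence u(A) > u(A), a contradiction. No such u exists,
% so cyclic (intransitive) preferences fall outside the
% scope of the VNM theorem entirely.
```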
What is meant by “not my problem”? My understanding is that it means “what I care about is no better off if I worry about this thing than if I don’t.” Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means my utility is the same whether I worry about the value of Google stock or not.
It sounds like you’re saying that, if I’m working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is “not my problem” in this sense. Here obviously I care about things other than $$.
This doesn’t seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.
Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me *which is definitionally good*.
So this is an argument that if everyone collectively agrees to change their incentives, we’d get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I’m sure I agree with that, I just haven’t thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.
I’m saying we need to specify more than, “The chance that the full stack of individual propositions evaluates as true in the relevant direction.” I’m not sure if we’re disagreeing, or … ?
Suppose you’re in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?
There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again knowing what you know now, you would still work in AI safety.
The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that’s why Buck’s number seems crazy high to Gordon.
I agree with this intuition. I suspect the question that needs to be asked is “14% chance of what?”
I’m deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don’t actually think work at A is effective; they’ve traded their impact certificates to other folks who do. I don’t know these other folks.
Based on the theory that it’s important to know who you’re trusting, this is bad.
“The sense in which employees are deferring to their employer’s views on what to do” sounds fine to me, that’s all I meant to say.
Sure, I agree that if they’re anonymous forever you can’t do much. But that was just the generating context; I’m not arguing only against anonymity.
I’m arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it’s important to know you’re doing that and to have a decent understanding of why you differ.
I agree with this. I wasn’t trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than ‘means’ and ‘ends’ here.)
I think you’ve clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That’s the situation I’m concerned about.
For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).
I think this is an interesting topic. However, I downvoted because if you’re going to claim something is the “greatest priority cause,” which is quite a claim, I would at least want to see an analysis of how it fares against other causes on scale, tractability, and neglectedness.
(Basically I agree with MichaelStJules’s comment, except I think the analysis need not be quantitative.)
Hmm, your first paragraph is indeed a different perspective than the one I had. Thanks! I remain unconvinced though.
Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since presumably I work there and have access to more information about it than they do.)
Some reasons I can imagine:
- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I’d want to be clear that I’m deferring.
- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I’d want a robust market, probably better than PredictIt.
I had the same reaction (checking in my head that a 10% chance still merited action).
However, I really think we ought to be able to discuss guesses about what’s true merely on the level of what’s true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we’re unable to do so, that will make the difficult task of finding truth even more difficult.
Ha, no I am an unrelated Eli.
But you are way more likely to end up being Dorothea.

I like this point because it emphasizes that the reason to have this mindset is a fact about the world. Sometimes, when I encounter statements like this, it can be easy for them to “bounce off” because I object “oh, of course it’s adaptive to think that way… but that doesn’t mean it’s actually true.” It was hard for this post to “bounce off” me because of the force of this point.
(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidence-based, rational, and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)
I donated $1000 since it seems to me that something like the EA Hotel really ought to exist, and it would be really sad if it went under.
I’m posting this here so that, if you’re debating donating, you have the additional data point of knowing that others are doing so.
FWIW, I don’t find it at all surprising when people’s moral preferences contradict themselves (in terms of likely implications, as you say). I myself have many contradictory moral preferences.