Re 3: Yes and no. ^.^ I’m currently working on something for whose robustness I have only very weak evidence. I made a note to think about it, interview some people, and maybe write a post asking for further input, but then I started working on it before I had done any of these things. It’s like an optimal stopping problem. I’ll need to remedy that before sunk costs start to bias me too much… I suppose I’m not the only one in this situation. But then again I have friends who’ve thought for many years mostly just about the robustness of various approaches to their problem.
Hilary Greaves doesn’t seem to be so sure that robustness gets us very far, but the example she gives is unlike the situations that I usually find myself in.
Arden Koehler: Do you think that’s an appropriate reaction to these cluelessness worries or does that seem like a misguided reaction?
Hilary Greaves: Yeah, I don’t know. It’s definitely an interesting reaction. I mean, it feels like this is going to be another case where the discussion is going to go something like, “Well, I’ve got one intervention that might be really, really, really good, but there’s an awful lot of uncertainty about it. It might just not work out at all. I’ve got another thing that’s more robustly good, and now how do we trade off the maybe smaller probability or very speculative possibility of a really good thing against a more robustly good thing that’s a bit more modest?”
Hilary Greaves: And then this feels like a conversation we’ve had many times over; is what we’re doing just something structurally, like expected utility theory, where it just depends on the numbers, or is there some more principled reason for discarding the extremely speculative things?
Arden Koehler: And you don’t think cluelessness adds anything to that conversation or pushes in favor of the less speculative thing?
Hilary Greaves: I think it might do. So again, it’s really unclear how to model cluelessness, and it’s plausible that different models of it would say really different things about this kind of issue. So it feels to me just like a case where I would need to do a lot more thinking and modeling, and I wouldn’t be able to predict in advance how it’s all going to pan out. But I do think it’s a bit tempting to say too quickly, “Oh yeah, obviously cluelessness is going to favor more robust things.” I find it very non-obvious. Plausible, but very non-obvious.
She has thought about this a lot more than I have, so my objection probably doesn’t make sense, but the situation I find myself in usually differs from the one she describes in two ways: (1) there is no single really good but non-robust intervention; rather, everything is super murky (even whether the interventions have positive EV at all), and I can usually think of a dozen ways any particular intervention could backfire; and (2) such backfiring doesn’t mean that we have no impact but that we have enormous negative impact. Amid this murkiness, the very few interventions that seem much less murky than the others – like priorities research or encouraging moral cooperation – stand out quite noticeably.
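To make the contrast concrete, here is a minimal numerical sketch. All numbers are invented for illustration, and maximin over a range of probability assignments is just one crude way to model cluelessness, not anything Greaves endorses. The point it illustrates: plain expected value can leave a speculative, backfire-prone intervention and a modest robust one nearly tied, while a rule that looks at worst cases over the unresolved probabilities singles out the robust one.

```python
# Hypothetical numbers, purely for illustration.
# Speculative intervention: might be very good, might backfire badly.
speculative = [(0.10, 1000.0), (0.60, 0.0), (0.30, -300.0)]  # (probability, value)
# Robust intervention: modestly good across the board.
robust = [(0.90, 10.0), (0.10, 5.0)]

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

print(expected_value(speculative))  # 10.0 -- nearly tied on plain EV
print(expected_value(robust))       # 9.5

# One crude model of cluelessness: we can't pin down the backfire
# probability, so evaluate the speculative option under a range of
# plausible assignments and compare worst cases (maximin over a set
# of credence functions).
def speculative_ev(p_backfire):
    return 0.10 * 1000.0 + p_backfire * -300.0

worst_case = min(speculative_ev(p) for p in (0.2, 0.3, 0.4))
print(worst_case)  # -20.0 -- the robust option wins under this rule
```

Under plain EV the answer “just depends on the numbers,” exactly as in the conversation above; only once a specific model of the unresolved uncertainty is chosen does robustness start doing work.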
Re 4: I’ve so far seen Shapley values only as a way of attributing impact – something that seems relevant for impact certificates, for thanking the right people, and for noticing some relevant differences between situations – but by and large only for niche applications, none of which are relevant for me at the moment. Nuno might disagree with that.
I usually ask myself not what impact I would have by doing something but which of my available actions will determine the world history with the maximal value. So I don’t break this down to my person at all. Doing so seems to me like a lot of wasted overhead. (And I don’t currently understand how to apply Shapley values to infinite sets of cooperators, and I don’t quite know who I am given that there are many people who are like me to various degrees.) But maybe using Shapley values or some other, similar algorithm would make that reasoning a lot more principled and reliable. That’s quite possible.
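For readers unfamiliar with the idea, here is a minimal sketch of the attribution Shapley values perform: each player’s value is their marginal contribution averaged over all orders in which the coalition could have formed. The funder/charity game and its numbers are invented for illustration.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering in which the coalition could have formed."""
    contrib = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            contrib[p] += value(frozenset(coalition)) - before
    n_orders = factorial(len(players))
    return {p: c / n_orders for p, c in contrib.items()}

# Hypothetical toy game: a funder and a charity achieve nothing alone
# but together produce 10 units of impact.
def v(coalition):
    return 10.0 if {"funder", "charity"} <= coalition else 0.0

print(shapley_values(["funder", "charity"], v))
# {'funder': 5.0, 'charity': 5.0} -- naive counterfactual attribution
# would credit each with the full 10, double-counting the joint impact.
```

This also shows why the method is computationally a niche tool: the exact version enumerates all orderings, which grows factorially, and it needs a well-defined finite player set – consistent with the worry above about infinite sets of cooperators.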