This use-case for impact certificates isn’t predicated on trusting the market more than yourself (although that might be a nice upside). It’s more like a facilitated form of moral trade, where people with different preferences about what altruistic work happens all end up happier, because they can switch to working on the things they can make the most progress on rather than the things they personally want to bet on. (There are some reasons to be sceptical about how often this will actually be a good trade, since there can be significant comparative advantage in working on a project you believe in, from both motivation and having a clear sense of the goals; however, I expect at least some of the time there would be good trades.)
On your second concern, I think that working in this way should basically be seen as a special case of earning to give. You’re working for an employer whose goals you don’t directly believe in because they will pay you a lot (in this case in impact certificates), which you can use to further things you do believe in. Sure, there’s a small degree to which people might interpret your place of work as an endorsement, but I don’t think this is one of the principal factors feeding into our collective epistemic processes (particularly since you can explicitly disavow it, and in a world where this happens often, others may be aware of the possibility even before disavowal), so I wouldn’t give it too much weight in the decision.
Hmm, your first paragraph is indeed a different perspective than the one I had. Thanks! I remain unconvinced though.
Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since presumably I work there and have access to more information about it than they do.)
Some reasons I can imagine:
- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I’d want to be clear that I’m deferring.
- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I’d want a robust market, probably better than PredictIt.
I’m not trying to take any view on whether there’s moral disagreement (I think in practice moral and empirical disagreements are not always cleanly distinguishable, but that’s a side point).
If you agree on goals, then maybe you will Aumann update towards agreement on actions and no trade will be needed. If there’s a persistent disagreement (even after you express that organisation A does not seem to you to be a good use of resources) then maybe it’s not a trade between different ultimate moral perspectives, but a trade between different empirical worldviews, such that the expectation of having made the trade is better for both worldviews than before making the trade. From your perspective as a certificate-seller, you don’t need to know whether the buyer agrees with your moral views or not.
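To make the "better for both worldviews" point concrete, here is a stylized numeric sketch. All the numbers and org names are my own hypothetical assumptions, not anything from the discussion: both parties share the goal (impact units), but disagree empirically about how effective each org is.

```python
# Stylized illustration (hypothetical numbers) of how a certificate trade
# can be positive-sum under both parties' empirical worldviews, even when
# they share the same goals.

# Expected "impact units" per dollar that each party assigns to each org.
seller_beliefs = {"org_A": 0.1, "org_B": 0.5}  # seller doubts A, prefers B
buyer_beliefs = {"org_A": 0.6, "org_B": 0.3}   # buyer is bullish on A

work_scale = 10.0  # notional dollar-scale of the work the certificate covers
price = 4.0        # price the buyer pays the seller for the certificate

# Seller's worldview: gives up impact credit for A-work they value at
# 0.1 * 10 = 1 unit, and donates the $4 proceeds to B at 0.5 units/dollar.
seller_gain = price * seller_beliefs["org_B"] - work_scale * seller_beliefs["org_A"]

# Buyer's worldview: pays $4 that would otherwise have funded A (their best
# option, 0.6 units/dollar), and gains credit for work worth 0.6 * 10 = 6 units.
buyer_gain = work_scale * buyer_beliefs["org_A"] - price * buyer_beliefs["org_A"]

print(seller_gain, buyer_gain)  # both positive: each worldview prefers trading
```

The disagreement about org A is exactly what creates the price range in which both sides expect to come out ahead; if their beliefs converged (say, after Aumann-style updating), that range would shrink towards nothing and the trade would lose its point.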
I agree with this. I wasn’t trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than ‘means’ and ‘ends’ here.)
I think you’ve clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That’s the situation I’m concerned about.
For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).
In the case where the other party is anonymous, how could you hope to update each other? (i.e. you seem to be arguing against anonymity, not against selling impact certificates)
Sure, I agree that if they’re anonymous forever you can’t do much. But that was just the generating context; I’m not arguing only against anonymity.
I’m arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it’s important to know you’re doing that and to have a decent understanding of why you differ.
> If you are trading certificates with someone, you are deferring to their views on what to do
I think this is meaningfully wrong; at least the sense in which you are deferring is not stronger than the sense in which employees are deferring to their employer’s views on what to do (i.e. it’s not an epistemic deferral but a deferral to authority).
“The sense in which employees are deferring to their employer’s views on what to do” sounds fine to me, that’s all I meant to say.