For what it's worth, I get a sense of vagueness from this post, like I don't have a strong understanding of what specific claims are being made, and I predict that different readers will spot or interpret different claims from it.
I think attempting to provide a summary of the key points in the form of specific claims and arguments for/against them would be a useful exercise, to force clarity of thought/expression here. So what follows is one possible summary. Note that I think many of the arguments in this attempted summary are flawed, as I'll explain below.
"I think we should base our ethical decision-making in part on the views that people from the past (including past versions of currently living people) would've held or did hold. I see three reasons for this:
1. Those past people may have been right and we may be wrong
2. Those past people's utility matters, and our decisions can affect their utility
3. A norm of respecting their preferences could contribute to future people respecting our preferences, which is good from our perspective"
I think (1) is obviously true, and it does seem worth people bearing it in mind. But I don't see any reason to think that people on average currently under-weight that point, i.e. that people pay less attention to past views than they should, given how often past views will be better than present views. I also don't think that this post provided such arguments. So I don't think merely stating this basic point is very useful. (Though I do think a post providing some arguments or evidence on whether people should change how much or when they pay attention to past views would be useful.)
I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary.
Or perhaps you're thinking of utils in terms of whether preferences are actually satisfied, regardless of whether people know or experience that and whether they're alive at that time? If so, then I think that's a pretty unusual form of utilitarianism, it's a form I'd give very little weight to, and that's a point that it seems like you should've clarified in the main text.
I think (3) is true, but to me it raises the key questions "How good (if at all) is it for future people to respect our preferences?", "What are the best ways to get that to happen?", and "Are there ways to get our preferences fulfilled that are better than getting future people to respect them?" And I think that:
It's far from obvious that it's good for future people to respect present-people-in-general's preferences.
It's not obvious, but more likely, that it's good for them to respect EAs' preferences.
It's unlikely that the best way to get them to respect our preferences is to respect past people's preferences in order to build a norm (alternatives include e.g. simply writing compelling materials arguing for respecting our preferences, or shifting culture in various ways).
It's likely that there are better options for getting our preferences fulfilled (relative to actively working to get future people to choose to respect our preferences), such as reducing x-risk or maybe even things like pursuing cryonics or whole-brain emulation to extend our own lifespans.
So here again, I get a feeling that this post:
Merely flags a hypothesis in a somewhat fuzzy way
Implies confidence in that hypothesis, and in the view that this means we should spend more resources fulfilling or thinking about past people's preferences
But doesn't really make this explicit enough, or highlight (in my view) relatively obvious counterpoints, alternative options, or further questions
...I guess this comment is written more like a review than like constructive criticism. But what I'd say on the latter front (if you're interested!) is that it seems worth trying to make your specific claims and argument structures more explicit, attempting to summarize all the key things (both because summaries are useful for readers and as an exercise in forcing clear thought), and spending more thought on alternative options and counterpoints to whatever you're initially inclined to propose.
[Note that I haven't read other comments.]
Or perhaps you're thinking of utils in terms of whether preferences are actually satisfied, regardless of whether people know or experience that and whether they're alive at that time? If so, then I think that's a pretty unusual form of utilitarianism, it's a form I'd give very little weight to, and that's a point that it seems like you should've clarified in the main text.
Although I find this version of utilitarianism extremely implausible, it is actually a very common form of it. Discussions of preference-satisfaction theories of wellbeing presupposed by preference utilitarianism often explicitly point out that "satisfaction" is used in a logical rather than a psychological sense, to refer to the preferences that are actually satisfied rather than the subjective experience of satisfaction. For example, Shelly Kagan writes:
Second, there are desire or preference theories, which hold that being well-off is a matter of having one's (intrinsic) desires satisfied. What is intended here, of course, is "satisfaction" in the logician's sense: the question is simply whether or not the states of affairs that are the objects of one's various desires obtain; it is irrelevant whether or not one realizes it, or whether one gets some psychological feeling of satisfaction.
So conditional on preference utilitarianism as it is generally understood, I think (2) is true. (But, to repeat, I don't find this version of utilitarianism in the least plausible. I think the only reasons for respecting past people's preferences are instrumental reasons (societies probably function more smoothly if their members have a justified expectation that others will put some effort into satisfying their preferences posthumously) and perhaps reasons based on moral uncertainty, although I'm skeptical about the latter.)
I put a bunch of weight on decision theories which support (2).
A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued; it makes me happy to know they will get pursued even if I die. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and whom I've dated for many years) to make me believe that he actually would pursue them if it weren't true (as well as that seeming sketchy from a deontological perspective).
And +1 to Austin's example of funders: funders occasionally have people ask for retroactive funding and say that they only did the thing because their model of the funder suggested the funder would pay.
Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer among my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!
I actually do have some amount of confidence in this view, and do think we should think about fulfilling past preferences, but I totally agree that I have not made those counterpoints, alternatives, or further questions explicit. Some of this is: I still just don't know, and to that end your review is very enlightening! And some is: there's a tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to somewhat hard-to-digest lengths as people try to anticipate every possible counterargument; I'd push for a return to more Sequences-style shorter chunks.
I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary.
I still believe in (2), but I'm not confident I can articulate why (and I might be wrong!). Once again, I'd draw upon the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading, or being tricked into staying in a simulation machine, is wrong, because the utility provided is not a true utility: the person would not actually realize that utility if they were cognizant that this was a lie. So too would the conservationist laboring to preserve biodiversity feel deceived, and not gain utility, if they were aware of the future supplanting their wishes.
Can we change the past? I feel like the answer is not 100% obviously "no". I think this post by Joe Carlsmith lays out some arguments for why:
Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of "acausal control," leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.
(but it's also super technical, and I'm at risk of having misunderstood his post to service my own arguments.)
In terms of one specific claim: Large EA Funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties and more "this was a good idea, here's your prize", and less "here's some money to go do X".
I'm not entirely sure what % of my belief in this comes from "this is a morally just way of paying out to the past" vs. "this will be effective at producing better future outcomes"; maybe 20% compared to 80%? But I feel like many people would state only 10% or even less belief in the first.
To this end, I've been working on a proposal for equity for charities. It's still at a very early stage, but since you work as a fund manager, I'd love to hear your thoughts (especially your criticism!)
Finally (and to put my money where my mouth is): would you accept a $100 bounty for your comment, paid in Manifold Dollars aka a donation to the charity of your choice? If so, DM me!