Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my post is more “me trying to lay out my intuitions” and less “I know exactly how we should change EA on account of these intuitions”. I had just not seen many statements from EAs, and even fewer from my non-EA acquaintances, defending the importance of (1), (2), or (3) (great breakdown, btw). I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!
I actually do have some amount of confidence in this view, and do think we should try to fulfill past preferences; but I totally agree that I did not lay out the counterpoints, alternatives, or further questions. Part of this is that I still just don’t know (to that end, your review is very enlightening!), and part is the tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to hard-to-digest lengths as authors try to anticipate every possible counterargument; I’d push for a return to shorter, Sequences-style chunks.
> I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can’t change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary.
I still believe in (2), though I’m not confident I can articulate why (and I might be wrong!). Once again, I’d draw on the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading, or being tricked into staying in a simulation machine, is wrong, because the utility provided is not true utility: the person would not actually realize that utility if they knew it rested on a lie. So too would the conservationist laboring to preserve biodiversity feel deceived, and not gain utility, if they were aware that the future would supplant their wishes.
Can we change the past? I feel the answer is not 100% obviously “no”; I think Joe Carlsmith’s post “Can you control the past?” lays out some arguments for why:
> Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of “acausal control,” leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.
(But it’s also super technical, and I’m at risk of having misunderstood his post in service of my own arguments.)
In terms of one specific claim: large EA funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties, more “this was a good idea, here’s your prize”, and fewer “here’s some money to go do X” grants.
I’m not entirely sure what share of my belief in this comes from “this is a morally just way of paying out to the past” versus “this will be effective at producing better future outcomes”; maybe 20% versus 80%? But I suspect many people would put the first at 10% or even less.
To this end, I’ve been working on a proposal for equity for charities. It’s still at a very early stage, but since you work as a fund manager, I’d love to hear your thoughts (especially your criticism!).
Finally (and to put my money where my mouth is): would you accept a $100 bounty for your comment, paid in Manifold Dollars aka a donation to the charity of your choice? If so, DM me!