A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future.
I feel that EA shouldn’t spend all or nearly all of its resources on the far future, but I’m uncomfortable with incorporating a moral discount rate for future humans as part of “regular longtermism”, since it’s very intuitive to me that future lives should matter just as much as present ones.
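(To make concrete why even a modest moral discount rate clashes with that intuition, here is a rough illustration; the 1% rate and the 1,000-year horizon are assumptions chosen for the example, not taken from anyone’s stated view. A constant annual discount rate $\rho$ shrinks the weight on a life $t$ years in the future to
\[
  w(t) = (1 + \rho)^{-t}, \qquad w(1000) = (1.01)^{-1000} \approx 5 \times 10^{-5},
\]
so a life a millennium from now would count roughly 20,000 times less than a present life.)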
I prefer objections from the epistemic challenge, which I’m uncertain enough about that various factors (e.g. personal fit, flow-through effects, gaining experience in several domains) mean it doesn’t make sense for EA to go “all-in”. An important aspect of personal fit is being comfortable working on very-low-probability bets.
I’m curious how common this feeling is, vs. feeling okay with a moral discount rate as part of one’s view. There’s some relevant discussion under the comment linked in the post.
Yeah. I have this idea that the EA movement should start with short-term interventions and work its way up to interventions that operate over longer and longer timescales, as we get more comfortable understanding their long-term effects.