Other views on decision theory and credences matter, too: how you deal with cluelessness and what you're clueless about, whether you maximize expected value at all, whether you think you should do things with only a 1-in-a-million chance of having the intended impact, and the shape of your utility function, if you have one (both bounded and unbounded utility functions face serious problems).
I should write a post on cluelessness, but over the very long-term future (thousands of years and beyond), longtermism suffers less from cluelessness considerations than you might naively think.
Can you say a bit more about how each of those considerations affects whether one should be longtermist or short-termist?