That does rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)
It seems quite plausible to me (based on intuitions from algorithmic complexity theory) that spatiotemporal discounting is the best solution to problems of infinite ethics. (See Anatomy of Multiversal Utility Functions: Tegmark Level IV for a specific proposal in this vein.)
I think the kinds of discounting suggested by algorithmic information theory are mild enough in practice to be compatible with our intuitive notions of longtermism (e.g., the discount factors for current spacetime and a billion years from now are almost the same), and would prefer a definition that doesn’t rule them out, in case we later determine that the correct solution to infinite ethics does indeed lie in that direction.
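To make the “almost the same” claim concrete, here is a toy sketch (my own illustrative assumptions, not the specific proposal from the linked post): suppose the weight of a spacetime region scales like 2^(-K(t)), where K(t) is roughly the description length of the time coordinate, so the weight decays roughly like 1/t in cosmic time. Then the discount between today (~13.8 billion years after the Big Bang) and a billion years from now is a modest constant factor, whereas even a tiny pure exponential discount rate collapses to essentially zero over the same span.

```python
import math

# Toy model (illustrative assumption, not the actual proposal from the linked post):
# weight(t) ~ 2^(-K(t)) with K(t) ~ log2(t), i.e. weight decays roughly like 1/t
# in cosmic time measured in years since the Big Bang.
def complexity_weight(t_years):
    return 1.0 / t_years  # proportional to 2^(-log2(t))

now = 13.8e9       # rough current age of the universe, in years
later = now + 1e9  # a billion years from now

ratio_complexity = complexity_weight(later) / complexity_weight(now)
print(f"complexity-style discount over 1 Gyr: {ratio_complexity:.3f}")   # ~0.93

# Contrast: pure exponential time-discounting at a very mild 0.1%/year.
rate = 0.001
ratio_exponential = math.exp(-rate * 1e9)
print(f"exponential discount over 1 Gyr:      {ratio_exponential:.3e}")  # ~0 (underflows)
```

The point is just that complexity-style discounting is nearly flat on the timescales longtermists care about, so it doesn’t obviously conflict with longtermist intuitions the way pure time-discounting does.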