Point 1 in favour reads very much like “focus on near-future benefits because this will (most likely) bring far-future benefits”, which is in practice indistinguishable from just “focus on near-future benefits”. Moreover, the assumption that improving the near future will most likely improve the far future (which I also tend to hold) is far from certain, as you acknowledge. With this reasoning, the underlying reason to do X is, technically, improving the far future. But the actual effect of X is improving the near future with much higher certainty than improving the far future; everything is aligned. This is not, as far as I understand, the point of the question. Who wouldn’t do X? Answering under this scenario gives no information.
Consider the following different scenario: doing X will make the near future worse with much higher certainty than it makes the far future better (assume the same value magnitudes as before, just make the near-future value negative). Would you then advocate doing X? I think this gives the information the question is asking for.
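To make that concrete, here is a toy expected-value calculation; the numbers are purely illustrative assumptions of mine, not taken from the question. Suppose doing X destroys near-future value $v_n = 100$ with probability $p_n = 0.9$ and creates far-future value $v_f = 10{,}000$ with probability $p_f = 0.01$. Then

$$\mathbb{E}[\text{X}] = p_n(-v_n) + p_f\,v_f = 0.9 \times (-100) + 0.01 \times 10{,}000 = -90 + 100 = +10,$$

so a pure expected-value longtermist would still do X, despite the near-certain near-future harm.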
I would overwhelmingly most likely not do X in that scenario, because I know how dumb we are (I am) and how complex reality is, so the longer the time frame, the less I trust any reasoning (cluelessness). [It is difficult enough to find out whether an action taken has actually been net positive for the immediate future!] Would you? If your answer tends towards No, then, for you, far-future effects are not the most important determinant of what we ought to do.
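The cluelessness point can be put in the same toy model, again with purely illustrative numbers: if the length of the time frame shrinks my trust in the far-future forecast from $p_f = 0.01$ to $p_f = 0.001$, the sign flips:

$$0.9 \times (-100) + 0.001 \times 10{,}000 = -90 + 10 = -80,$$

and not doing X becomes the expected-value answer.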
It depends on the case, but there are definitely cases where I would.
Also, while you make a good point that these can sometimes converge, I think the priority of concerns is extremely different under short-termism vs longtermism, which I see as the important part of “most important determinant of what we ought to do”. (Setting aside mugging and risk aversion/robustness,) some very small or even purely directional shift could make a cause hold the vast majority of your moral weight, whereas before its impact might not have been that big, or would have been outweighed by lack of neglectedness or tractability.
P.S. If one (including myself) failed to do X, accepting that it would shift one’s priorities but still not acting on it in light of the short-term damage, I think that would say less about one’s actual beliefs and more about one’s intuitive disgust towards means-end reasoning. But this is just a hunch, somewhat based on my own introspection (to be fair, sometimes this reluctance comes from moral uncertainty or reputational concerns that should be part of the reasoning, which is to your point).
It depends on the case, but there are definitely cases where I would.
Then it definitely fits with your vote. I just meant that the fact that you (and I) tend to think that making the near future better will also make the far future better shouldn’t influence the answer to this question.
We just disagree on how confident we are in our assessments of how our actions will affect the far future. And probably this is because of our age ;-)