So in order to justify longtermism in particular, you have to point out proposed policies that seem a lot less sensible and rely on a lot less certainty.
If you’re referring to the first point, I would reword this to:
In order to justify longtermism in particular, you have to point out proposed policies that can’t be justified by drawing on the near future.
In the article, the authors are somewhat ambiguous about the meaning of ‘near future’. They do at one point refer to the present and a few generations as their potential time frame. But your point raises an interesting question for the longtermists: How long does the future need to be in order for future people to have moral weight?
That said, we might want to qualify this slightly: the element of interest is not necessarily the number of years into the future, but rather how many people (or beings) will exist in the future. The question then becomes: How many people need to be alive in the future in order for their lives to have moral weight?
If we knew a black hole would swallow humanity in 200 years, then on some estimates there could still be ~15 billion human lives to come. If we knew that the future held only 15 billion more lives, would that justify not focusing on existential risks?
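For a rough sense of where a figure like that could come from (the per-year birth figure below is an illustrative assumption, not a number from the article): if average global births over those two centuries fell to around 75 million per year, the total would be on that order:

$$75 \text{ million births/year} \times 200 \text{ years} = 15 \text{ billion lives.}$$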