Metaculus is an EA project worth a mention in the “improving foresight” area. I’m also excited by what the Less Wrong 2 team is doing. And Clearer Thinking is cool.
I think steering capacity is valuable, but there has to be a balance between building steering capacity and taking object-level action. In many cases, object-level actions are likely to be time-sensitive. Delaying object-level action only makes sense insofar as we can usefully resolve our cluelessness. (But as you say, we tend to become less clueless about things as they move from the far future to the near future. So object-level actions which destroy option value can be bad.)
Remember also that acting in the world is sometimes the best way to gather information (which can help resolve cluelessness).