Given that longtermism seems to have turned out to be a crucial consideration which a priori might have been considered counterintuitive or even absurd, should we be on the lookout for similarly important but wild & out-there options? How far should the EA community be willing to ride the train to crazy town (or rather, how much variance should there be within the EA community on this? Normal or log-normal?)
For example, one could consider things like multiverse-wide cooperation, acausal trade, and options for creating infinite amounts of value and how to compare those (although I guess this has already been thought about in the area of infinite ethics), and try to actively search for such considerations & figure out their implications (which doesn’t appear to have much prominence in EA at the moment). (Other examples listed here)
I remember a post by Tomasik (can’t find it right now) where he argues that the expected size of a new crucial consideration should be the average of all past such instances; if we apply this here, the possible value seems high.
What about future crucial considerations that Andrew hasn’t yet discovered? Can he make any statements about them? One way to do so would be to model unknown unknowns (UUs) as being sampled from some probability distribution P: UUi ~ P for all i. The distribution of UUs so far was {3, −5, −2, 10, −1}. The sample mean is 1, and the standard error is 2.6. The standard error is big enough that Andrew can’t have much confidence about future UUs, though the sample mean very weakly suggests future UUs are more likely on average to be positive than negative.
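To make the arithmetic concrete, here is a minimal Python sketch reproducing the numbers in the example above. The list name and the five toy values are just the illustrative "unknown unknowns" quoted there, not real data; the sketch simply computes the sample mean (1) and the standard error of the mean (about 2.6).

```python
# Minimal sketch: treat the observed unknown unknowns (UUs) as i.i.d. draws
# from some distribution P and estimate the mean of P with its standard error.
import math
import statistics

observed_uus = [3, -5, -2, 10, -1]  # the five past UUs from the example above

n = len(observed_uus)
sample_mean = statistics.mean(observed_uus)        # 1.0
sample_sd = statistics.stdev(observed_uus)         # ~5.79 (Bessel-corrected)
standard_error = sample_sd / math.sqrt(n)          # ~2.59, i.e. the ~2.6 quoted above

print(f"mean = {sample_mean:.2f}, standard error = {standard_error:.2f}")
```

With only five draws the standard error swamps the mean, which is exactly the point: the sample mean weakly favours positive future UUs, but the uncertainty is large.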
A bit late but it might be this post:
Thanks! Definitely not too late; I’m often looking for this particular cite.