I’m broadly in your camp, i.e. starting with a 50-50 prior.
I think a useful intuition pump is to ask oneself whether some candidate good near-term effect or action X is net positive for total insect welfare or total nematode welfare over the next 0–5 years (assuming these are a thing at all, i.e. that insects or nematodes are sentient). I actually suspect the correlation between the intended near-term benefit and these welfare totals is even smaller than for the short-term vs. long-term impact variables we typically consider, but I think it can be a good intuition pump because it’s so tangible.
I agree with “first-order effects are usually bigger than second-order effects”, but my model here is roughly that we have heavy-tailed uncertainty over ‘leverage’, i.e. which variable matters how much for the long-term future (and that usually includes sign uncertainty).
We can imagine some literal levers that are connected to each other through some absurdly complicated Rube Goldberg machine, such that we can’t trace the effect that pulling on one lever will have on the others. Then I think our epistemic position is roughly like “from a bunch of experience with pulling levers we know it’s a reasonable prior that if we pull on one of these levers, the force exerted on all the other levers is much smaller, though sometimes there are weird exceptions; unfortunately we don’t really know what the levers are doing, i.e. for all we know even a minuscule force exerted on one of the levers – or a failure to exert a minuscule force – destroys the world”.
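To make the lever picture a bit more concrete, here’s a toy Monte Carlo sketch (my own illustration, not anything rigorous; the distributions and numbers below are arbitrary assumptions, and “leverage” just means a heavy-tailed weight whose sign we don’t know):

```python
# Toy model: each simulated action pulls one lever by a clearly positive
# near-term amount and incidentally nudges many other levers by small amounts.
# Long-term value weights every lever by a heavy-tailed "leverage" coefficient
# with an unknown (random) sign. All parameter choices are made up.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 100_000  # simulated actions
n_levers = 50        # levers; index 0 is the one we pull on purpose

# Heavy-tailed leverage per lever (classical Pareto, shape 1.5), random sign.
leverage = rng.choice([-1.0, 1.0], size=(n_actions, n_levers)) * (
    rng.pareto(a=1.5, size=(n_actions, n_levers)) + 1.0
)

# Effects of the action on each lever: a sizable intended push on lever 0,
# much smaller incidental pushes on the others ("first-order > second-order").
effects = rng.normal(loc=0.0, scale=0.05, size=(n_actions, n_levers))
effects[:, 0] = np.abs(rng.normal(loc=1.0, scale=0.2, size=n_actions))

near_term_value = effects[:, 0]                # the tangible near-term good
long_term_value = (leverage * effects).sum(axis=1)

print("P(long-term value > 0):          ",
      (long_term_value > 0).mean())
print("corr(near-term, long-term):      ",
      np.corrcoef(near_term_value, long_term_value)[0, 1])
print("P(target lever dominates total): ",
      (np.abs(leverage[:, 0] * effects[:, 0])
       > np.abs((leverage[:, 1:] * effects[:, 1:]).sum(axis=1))).mean())
```

With these made-up numbers, the long-run total comes out positive only about half the time (by the sign symmetry of the leverage weights), the sample correlation between near-term and long-term value is near zero (and noisy run to run, given the heavy tails), and the targeted lever’s direct contribution often dominates the long-run total but gets swamped by the incidental effects a sizable fraction of the time – which is the sense in which I agree that first-order effects are usually bigger.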