I had left this for a day and came back to write a response to this post, but fortunately you've already made a number of the points I was planning to make.
I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.
OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions, but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main disagreement lay. I wasn't sure whether OP was disagreeing with:
That there is a theoretical case to be made for orienting our actions with a view to the long-term future/placing a high value on future human potential
High-profile longtermists' subsequent inferences based on longtermist values and/or the likelihoods they assign to achieving 'high human flourishing'/transhumanist outcomes (i.e. we should place a much lower probability on realising these high-utility futures, and therefore many longtermist arguments are weakened)
The idea that longtermism can work as a practical guide in reality (i.e. that longtermism may correctly identify the 'best' actions to take, but due to misinterpretation and 'slippery slope' factors it acts as an information hazard and should therefore be avoided)
Re your response to the 'Genocide' section, Alex: I think Phil's argument was that longtermism/transhumanist potential leads to a Pascal's mugging in this situation, where very low probabilities of existential catastrophe can be weighted as so undesirable that they justify extraordinary behaviour (in this case killing large numbers of individuals in order to reduce existential risk by a very small amount). This doesn't seem to me an entirely ridiculous concern, but I believe it paints a slightly absurd picture in which longtermists do not see the value in international laws/human rights and would happily support their violation in aid of very small reductions in existential risk.
In the same way that consequentialists see the value in having a legal system based on generalised common laws, I think very few longtermists would argue for a wholesale abandonment of human rights.
As a separate point: I do think the use of 'white supremacist' is misleading, and is probably more likely to alienate than clarify. I think it could risk becoming a focus and detracting from some of the more substantial points being raised in the book.
I thought the book was an interesting critique, though, and it forced me to clarify my thinking on a number of points. I'd be interested to hear further.