I’m not sure how your first point relates to what I was saying in this post, but I’ll take a guess.
Sorry, what I said wasn’t very clear. To rephrase: I was thinking more along the lines of what the possible future of AI might look like if there were no EA interventions in the AI space. I haven’t seen much discussion of the possible downsides there (for example, prioritizing alignment might slow down AI research, delaying AI advancement and the good things it would bring). But this was a less-than-half-baked idea; thinking about it some more, I’m having trouble coming up with scenarios where that would produce a lower expected utility.
It doesn’t matter which outcome you assign zero value to, as long as the relative values are the same: if one utility function is a positive affine transformation of another (u′ = a·u + b with a > 0), the two produce equivalent decisions under expected-utility maximization.
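A toy sketch of what I mean (the lotteries and transformation here are made-up numbers, just for illustration): rescaling and shifting a utility function changes the expected-utility numbers but not which option comes out on top.

```python
# Expected-utility rankings are invariant under positive affine
# transformations u' = a*u + b with a > 0. Hypothetical example.

lotteries = {
    # lottery -> list of (probability, outcome_utility) pairs
    "A": [(0.5, 0.0), (0.5, 10.0)],  # zero value assigned to the worst outcome
    "B": [(1.0, 4.0)],
}

def expected_utility(lottery, a=1.0, b=0.0):
    """Expected value of the transformed utility a*u + b."""
    return sum(p * (a * u + b) for p, u in lottery)

original = {name: expected_utility(lot) for name, lot in lotteries.items()}
shifted = {name: expected_utility(lot, a=2.0, b=-7.0) for name, lot in lotteries.items()}

print(original)  # {'A': 5.0, 'B': 4.0}
print(shifted)   # {'A': 3.0, 'B': 1.0}

# The numbers differ, but the preferred lottery is the same.
assert max(original, key=original.get) == max(shifted, key=shifted.get)
```

(The a > 0 condition matters: a negative scale factor would reverse the ranking.)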
Thanks, I follow this now and see what you mean.