I think I’ll stick with the current statement—partly because it has now been announced for a while, so people may be relying on its specific implications for their essays, but also because this new formulation doesn’t seem (to me) to avoid the problem you raise: it isn’t clear what your vote should be if you think the same type of work is recommended for both. Perhaps the solution to that issue is in footnote 3 on the current banner—if you think the value of working on AI takeover comes mostly from avoiding extinction, you should vote agree. If you think it comes from increasing the value of the future by another means (such as more democratic control of the future by humans), you should vote disagree.