Thanks for giving me permission, I guess I can use this if I ever need the opinion of “the EA community” ;)
However, I don’t think I’m ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.
That is also very reasonable! I think the important part is not to feel too bad about the possibility of never having a view (there is a vast sea of things I don’t have a view on), not least because I think that actually increases the chance of arriving at the right view if more effort is spent on it.
(I would offer to chat directly, as I’m very much part of the subset of safety work closest to more conventional ML, but am sadly over capacity at the moment.)