Thanks for writing this, it was fascinating to hear about your journey here. I also fell into the cognitive block of “I can’t possibly contribute to this problem, so I’m not going to learn or think more about it.” I think this block was quite bad in that it got in the way of me having true beliefs, or even trying to, for quite a few months. This wasn’t something I explicitly believed, but I think it implicitly affected how much energy I put into understanding or trying to be convinced by AI safety arguments.
I wouldn’t have realized it without your post, but my guess is that this trap is one of the most likely ways 80k could be counterproductive. By framing issues as “you need a PhD from a top-10 university to work on this cause,” they give an (implicit, unintentional) license to everybody else to not care about said cause.
As somebody who studied psychology, I think the way we talk about AI safety turned me off of even thinking about its importance. There seems to have been a shift recently toward “we need good ops and governance people too,” which seems better but maybe has the same problem to a lesser degree.
For whatever it’s worth, my current belief is something like “AI safety is so important that it is worth it for me to work on it even if I don’t currently know how I can help” (the exception being if I were counterproductive). I believe this quite strongly, and am willing (and privileged enough to be able) to sacrifice things like job security in order to try to help with alignment (though it’s unclear if this is the right decision).
I would love to chat more about my and your beliefs if you’re interested. You can message me or find me on Facebook or something.