What’s the expected value of working in AI safety?
I’m not certain about longtermism or the value of reducing x-risks, I’m not optimistic that we can really affect the long-term future, and I suspect the future of humanity may be bad. Many EAs feel the same way; that’s presumably why only about 15% of respondents rank AI safety as the top cause area (per a Rethink Priorities survey).
However, even on a “near-termist” view, AI safety research still seems valuable, because it might avert a catastrophe (not only extinction) that would cause suffering for 8 billion people, and perhaps for animals too. But work like global health research or pandemic prevention seems to have a much more certain expected value (maybe on the order of 100 QALYs per extra person helped), because we have historical experience and a feedback loop. AI safety may be the most difficult problem on Earth, and its expected value feels like “???”: it may be very high, or it may be zero.

We don’t know how serious the suffering would be (would it cause extinction in a minute while we’re sleeping, or torture us for years?). We don’t know whether we’re on the way to finding a solution, or whether we’re all making the wrong predictions about how an AGI would think. Will governments control the power of AGI? All of the work on AI safety is a kind of guessing, so I’m confused about why 80,000 Hours estimates the tractability at 1%. (See the rough sketch below for how much these guesses matter.)

I know AI safety is highly neglected, and that unaligned AI could cause unpredictably huge suffering for humans and animals. But if I work on AI safety, I’d feel a little lost, because I wouldn’t know whether I’d really done something meaningful; and if I don’t work on it, I’d feel guilty. Could someone give me (and others who hesitate to work on AI safety) some recommendations?
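To make the comparison concrete, here is a minimal expected-value sketch in Python. Every number in it is an illustrative guess I made up (the catastrophe probability, the field’s tractability, one career’s share of the field’s effort, the QALYs at stake per person); none comes from a real estimate:

```python
# Toy expected-value comparison between two careers.
# All numbers below are made-up illustrative guesses, not estimates from any source.

GLOBAL_HEALTH_QALY_PER_CAREER = 100 * 40  # ~100 QALYs/year over a 40-year career (guess)
WORLD_POPULATION = 8e9                    # people affected by a global catastrophe
QALY_PER_PERSON_AT_STAKE = 30             # guessed remaining quality life-years per person

p_catastrophe = 0.1        # chance of an AI catastrophe this century (pure guess)
p_field_succeeds = 0.01    # tractability of the whole field, 80,000 Hours-style (guess)
my_share_of_field = 1e-5   # one career's fraction of the field's total effort (guess)

# Expected QALYs from one AI safety career = P(catastrophe) * P(field averts it)
# * (my marginal share) * (people affected) * (QALYs at stake per person)
ev_ai_safety = (p_catastrophe * p_field_succeeds * my_share_of_field
                * WORLD_POPULATION * QALY_PER_PERSON_AT_STAKE)

print(f"Global health career (fairly certain): ~{GLOBAL_HEALTH_QALY_PER_CAREER:,.0f} QALYs")
print(f"AI safety career (very uncertain):     ~{ev_ai_safety:,.0f} QALYs in expectation")
```

With these made-up numbers the two careers land within the same order of magnitude (about 4,000 vs. 2,400 QALYs), and nudging any single guess by 10x flips the comparison either way; that sensitivity is exactly the “???” I’m describing.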