Strong upvote. I found your perspective really fresh:
“The most likely case to me is that even if AI x-risk is solved or turns out not to be a serious issue, we just keep facing x-risks in proportion to how strong our technology gets, forever. Eventually we draw a black ball and all die.”
Lots of us are considering a career pivot into AI safety. Is it... actually tractable at all? How hopeful should we be about it? No idea.
Thank you! My perspective is: “figuring out if it’s tractable is itself tractable enough to be worth a lot more time/attention than it currently gets”, but not necessarily “working on it is far and away the best use of time/money/attention for altruistic purposes”, and almost certainly not “working on it is the best use of time/money/attention under a wide variety of ethical frameworks and it should dominate a healthy moral parliament”.
It’s hard to say. Considering that an estimated fewer than 300 people work on AI safety, and that the field is only just starting to gain traction, I wouldn’t expect us to know much about it yet.
Even in established fields, people usually take years or even decades before they can produce truly great research.
Psychology was still using lobotomies until about 55 years ago. We’ve learned a lot since then, and there’s still much more to learn. It took AI capabilities a similar amount of time to get to where they are now. AI safety is much newer and could look completely different in 10 years. Or, if nobody works on it, or the people who do are unable to make progress, it could look much the same.