I work on AI safety tools. I believe this might be the most important thing for someone like me to do, FWIW. I think AI doom is not likely, but likely enough to be my personal top priority. But when I give money away, I give it to GiveWell charities, for reasons involving epistemic humility, moral uncertainty, and my belief in the importance of a balanced set of EA priorities.
I’m interested in why you don’t think AI doom is likely, given that many people in the AI safety space seem to suggest it’s reasonably likely (>10% likelihood in the next 10 or 20 years).
I wonder what your thoughts are on delaying timelines instead of working on tooling, though I guess it might hinge on being more longtermist and on personal fit.
I very badly want to delay timelines, especially because doing so gives us more time to develop responses, governance strategies, and tools to handle rapid changes. I think this is underemphasized. And lately, I have been thinking that the most likely thing to shift my focus is the appeal of work that makes it harder to build risky AI or that improves our ability to respond to or endure threats. This contrasts with my current work, which is mostly about making alignment easier.
I edited the title; do you think this is good enough?
Could you share your estimate? At the end of the day, we do need to decide what to work on.
I believe this is a big improvement.
My guess is something like 5-10%.
Thank you for the pushback on the title!