I’ve started to worry that it might be important to get digital sentience work (e.g. legal protection for digital beings) done before we get transformative AI, and EAs seem like approximately the only people who could realistically do this in the next ~5 years.
I was interested to see you mention this, as this is something I think is very important.
The phrasing here got me thinking a bit about what it would look like if we were to try to make meaningful changes within 5 years specifically.
But I was wondering why you used the “~5 years” phrase here?
(Do you think transformative AI is likely within 5 years?)
Metaculus currently says 7% (for one definition of “transformative”).
But I chose five years more because it’s hard for me to predict things further in the future, rather than because most of my probability mass is in the <5 year range.