This year, I'm donating to MIRI, essentially because of the classic argument for MIRI. Here is a very short summary:
Artificial General Intelligence is possible and reasonably probable in the medium-term.
Such AI would be very powerful.
Without careful steps to avoid it, this AI is likely to be unfriendly. This would be very bad. Unfriendly AIs do not hate us, but we are made of atoms they can use for purposes not our own.
A friendly AI dedicated to promoting our values would be a very good thing.
Donating to MIRI is one of the best ways of doing this, as they are the only organisation fully focused on this one issue.
Even ignoring the risk of unfriendly AI, I think that friendly AI may be one of the best ways of preventing runaway value drift from destroying all value in the future.
cross-posted on my blog
edit: minor formatting