Brian Tomasik believes that AI alignment work may itself be dangerous, since a “near miss” in AI alignment could cause vastly more suffering than a paperclip maximizer. In his article on his donation recommendations, he estimates that organizations like MIRI may have a ~38% chance of doing active harm.