“King Emeric’s gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years.”
(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )
It seems to me that longtermists could learn something from people like this. Maintaining a point of view for over 800 years requires both keeping your values aligned enough that you still want to carry out the commitment, and surviving as an institution long enough to be able to.
(Also a short blog post by me occasioned by these monks about “being orthogonal to history” https://formulalessness.blogspot.com/2019/07/orthogonal-to-history.html )
If EA has a lot of extra money, could it be spent on incentivizing AI safety research? For instance, offer a really big bounty for solving some subproblem that is clearly worth solving, such as interpretability: being able to read and understand what a neural network is doing directly, instead of treating it as a black box.
Could EA (and fellow travelers) become the market for an AI safety industry?