This is truly an excellent article. I strongly agree that we need to maintain epistemic humility when predicting the arrival of AGI, but I also understand why people crave a precise prediction even when the evidence is insufficient; facing the unknown makes us uneasy. Still, even when the distribution over future states is unknown, we are not without reasonable strategies for decision-making. I was inspired by Alexander Turner’s “Optimal Policies Tend to Seek Power,” which argues that when the future reward function is drawn at random, policies that preserve states with more reachable options tend to be optimal. This has been genuinely helpful in my own decision-making: even when the future environment is unknown, choosing the option that keeps more branches open is usually a good default.
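The intuition behind option-preservation can be illustrated with a minimal simulation. This is my own toy sketch, not Turner’s formalism: assume a two-step MDP where branch A reaches three terminal states and branch B reaches one, with each terminal reward drawn i.i.d. uniform on [0, 1]. The branch with more reachable options should be the optimal choice more often.

```python
import random

def branch_a_win_rate(num_trials=100_000, seed=0):
    # Toy MDP: branch A keeps 3 terminal options open, branch B only 1.
    # Each trial draws a fresh random reward for every terminal state;
    # the branch whose best reachable reward is higher is "optimal".
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(num_trials):
        best_a = max(rng.random() for _ in range(3))  # best of A's 3 options
        best_b = rng.random()                         # B's single option
        if best_a > best_b:
            a_wins += 1
    return a_wins / num_trials

print(branch_a_win_rate())  # analytically, P(A is optimal) = 3/4
```

The analytic value follows because A loses only when B’s single draw is the maximum of all four i.i.d. draws, which happens with probability 1/4. The simulation lands close to 0.75, matching the claim that keeping more options open wins under a random future reward.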