Hey Trevor, it’s been a while! I just read Kuhan’s quick take, which referred to this one. Great to see you’re still active!
This is very interesting. I’ve been evaluating a cause area I think is very important and potentially urgent: the broader class of interventions that includes “the long reflection” and “coherent extrapolated volition,” essentially the question of how we make sure the future goes as well as possible, conditional on aligned advanced AI.
Anyway, I found it much easier to combine tractability and neglectedness into what I called “marginal tractability”: how easy it is to increase a cause area’s chance of success by, say, 1% at the current margin.
Trying to estimate tractability abstractly, independent of neglectedness, felt very awkward and doesn’t scale: tractability changes quite unpredictably over time, so it isn’t really a constant factor but something you need to keep reevaluating as conditions change.
Asking the standard tractability question, “If we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?”, isn’t a bad trick. But in an extremely neglected cause area it’s really hard to answer, because there are so few existing interventions, especially measurable ones. In that case, investigating some of the best potential interventions directly is really helpful.
I think you’re right that the same applies when investigating specific interventions. Neglectedness is still a factor, but it’s not separable from tractability; marginal tractability is what matters, and that’s easiest to investigate by actually looking at the interventions to see how effective they are at the current margin.
I feel like there’s a huge amount of nuance here, and some of the above comments raised good critiques…
But for now I’ve got to continue the research. The investigation is at about 30,000 words; I need to finish it, lightly edit it, and write some shorter explainer versions. Would love to get your feedback when it’s ready!