My personal take on the issue is that the better we understand how the updating works (including how to select the prior), the more seriously we should take the results. Currently we don’t seem to have a good understanding (e.g. see Dickens’ discussion: selecting the median based on GiveDirectly seems reasonable, but there doesn’t seem to be a principled way of selecting the variance, and his approach seems to be the best effort at it so far). So these updating exercises can be used as heuristics, but the results are not to be taken too seriously, and certainly not literally, especially since the input values are so speculative in some cases.
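To make concrete why the variance choice matters so much, here is a minimal sketch of this kind of update, under illustrative assumptions (not necessarily Dickens’ exact model): a lognormal prior over an intervention’s cost-effectiveness, with its median normalized to GiveDirectly’s (set to 1.0 here) and a variance chosen by judgment, updated on a noisy cost-effectiveness estimate using the standard normal-normal conjugate formula in log space. The function name and all parameter values are hypothetical.

```python
import math

def posterior_log_normal(prior_median, prior_sigma, estimate, estimate_sigma):
    """Update a lognormal prior with a lognormal likelihood (log-space conjugacy).

    prior_median    -- prior median cost-effectiveness (GiveDirectly-equivalents)
    prior_sigma     -- std dev of log cost-effectiveness under the prior
                       (the hard-to-justify choice discussed above)
    estimate        -- the model's point estimate of cost-effectiveness
    estimate_sigma  -- std dev of the log estimate (how noisy the model is)
    """
    mu0 = math.log(prior_median)
    x = math.log(estimate)
    # Posterior mean in log space is the precision-weighted average of the
    # prior mean and the observation.
    post_var = 1.0 / (1.0 / prior_sigma**2 + 1.0 / estimate_sigma**2)
    post_mu = post_var * (mu0 / prior_sigma**2 + x / estimate_sigma**2)
    return math.exp(post_mu)  # posterior median cost-effectiveness

# The same speculative "100x GiveDirectly" estimate lands very differently
# depending on the (unprincipled) prior variance:
print(posterior_log_normal(1.0, 0.5, 100.0, 2.0))  # ~1.3x with a narrow prior
print(posterior_log_normal(1.0, 2.0, 100.0, 2.0))  # ~10x with a wide prior
```

The posterior median swings by almost an order of magnitude purely from the choice of `prior_sigma`, which is why I don’t think the outputs should be read literally.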
This is just my personal view, and certainly many people disagree; e.g. my team decided to use the results of the Bayesian updating to decide on the grant recipient.
My experience with the project has left me doubtful that it’s worth investing heavily in improving this quantitative approach for the sake of decision making, if one could instead spend the time gathering qualitative information (or even quantitative information that doesn’t fit neatly into the framework of cost-effectiveness calculations or updating) that could be much more informative for decision making. This is along the lines of this post and also seems to fit the current approach of the Open Philanthropy Project (of utilizing qualitative evidence rather than relying on quantitative estimates). Of course, this is all based on the current state of such quantitative modeling, e.g. how little we understand about how updating works and how to select the speculative inputs for the quantitative models (and my judgment about how hard it would be to improve on these fronts). There could be a drastically better version of such quantitative prioritization that I haven’t been able to imagine.
It could still be very valuable to construct a quantitative model (or parts of one), think through the inputs and their values, etc., for reasons explained here. E.g. the MIRI model (in particular some inputs by Paul Christiano; see here) has really helped me realize the importance of AI safety, as has the “astronomical waste” argument, which gives one a sense of the scale even if one doesn’t take the numbers literally. Still, when deciding whether to donate to MIRI, I wouldn’t rely on a quantitative model (at least not one like the one I built) and would instead put a lot of weight on qualitative evidence that is likely impossible (for us, yet) to model quantitatively.