I like this question and think that questions that apply to most x-risks are generally good to think about. A few thoughts/questions:
I’m not sure this specific question is super well-defined.
What definition of “values” are we using?
Is there a cardinal ranking of values you are using? I assume you are treating utilitarian values as the best, with values getting worse the further you move from them. Or am I supposed to answer the question overlaying my own values?
Also not super relevant, but worth noting that depending on how you define values, utilitarian values ≠ utilitarian outcomes.
Then this is sort of a nit-picky point, but how big a basket is “similar”?
To take a toy example, let’s say values are measured on a scale from 0 to 100, with 100 being perfect values. Let’s further assume we are currently at 50. In that case I’d assume it makes sense to set similar = (33, 67) so as to evenly partition the groupings. If, say, similar = (49, 51), then it seems like you shouldn’t put much probability on similar.
But then if we are at 98/100, is similar (97, 99)? It’s less clear how we should basket the groups.
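To make the basket-width point concrete, here is a minimal sketch in Python (my own illustration, not anything from the question) that assumes a uniform prior over the 0–100 scale and shows how the probability mass in each basket depends on how wide we make similar:

```python
# A minimal sketch of the basket-width point above. It assumes, purely for
# illustration, a uniform prior over a 0-100 "values" scale; the function
# name and all numbers are hypothetical, not from the original question.

def basket_probs(current: float, half_width: float,
                 lo: float = 0.0, hi: float = 100.0) -> dict:
    """Probability mass of worse/similar/better under a uniform prior,
    where 'similar' is (current - half_width, current + half_width),
    clipped to the ends of the scale."""
    sim_lo = max(lo, current - half_width)
    sim_hi = min(hi, current + half_width)
    total = hi - lo
    return {
        "worse": (sim_lo - lo) / total,
        "similar": (sim_hi - sim_lo) / total,
        "better": (hi - sim_hi) / total,
    }

# Evenly partitioned baskets around 50: each gets roughly a third.
print(basket_probs(current=50, half_width=17))
# {'worse': 0.33, 'similar': 0.34, 'better': 0.33}

# A narrow basket like (49, 51): similar gets only 2% of the mass.
print(basket_probs(current=50, half_width=1))
# {'worse': 0.49, 'similar': 0.02, 'better': 0.49}

# Near the top of the scale (98/100), the baskets can't be even.
print(basket_probs(current=98, half_width=1))
# {'worse': 0.97, 'similar': 0.02, 'better': 0.01}
```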
Since you put similar at 20/100, I somewhat assumed you were giving similar a more or less even basket size relative to worse and better, but perhaps you put a lot of weight on the idea that we are in some sort of sapient cultural equilibrium.
For what it’s worth, if we sort of sweep some of these concerns aside and assume similar has about as much value space as better and worse, my estimates would be as follows:
better: 33/100
similar: 33/100
worse: 33/100
But I agree with Jack’s sense that we should drop similar and just go for better and worse, in which case:
better: 50/100
worse: 50/100
A cold take, but I truly feel like I have almost no idea right now. My intuition is that your forecast is too strong for the current level of evidence and research, but I have heard very smart people give almost exactly the same guess.