Hi Vasco, thanks for the comment and sorry for not seeing this and responding earlier.
I agree that the weights/​coefficients in the model could end up quite arbitrary, and I would expect them to if someone tried to set them precisely. My sense is that:
We may be able to give some arguments for some bounds on the weights, and some structural constraints on how the weights relate to each other (e.g. it would be odd if we had a function of some measure of complexity like neuron counts that looked very jagged, but this may be more supported by aesthetics or simplicity than evidence).
Within these constraints, the choices are very subjective and highly arbitrary. I think the situation is even worse than with gravity, because there may be no way to gather evidence one way or the other, and there may be no fact of the matter at all.
Thanks, Michael. Do not worry about not having replied earlier.
I agree that the weights/​coefficients in the model could end up quite arbitrary, and I would expect them to if someone tried to set them precisely.
I am still thinking that expected values should be precise, or at least practically precise. However, I think the weights of models should be modelled as distributions instead of constants, as in Bob’s book about comparing welfare across species, and Rethink Priorities’ (RP’s) digital consciousness model (DCM).
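As a minimal sketch of the distinction (the distribution family and all parameters below are illustrative, not taken from either model), treating a weight as a distribution rather than a constant just means propagating samples through the model and taking the mean at the end:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Constant weight: a single point estimate.
constant_weight = 0.05

# Distributional weight: e.g. a lognormal capturing uncertainty about
# the weight's order of magnitude (parameters are purely illustrative).
weight_samples = rng.lognormal(np.log(0.05), 1.0, size=n)

# Some downstream uncertain quantity the weight multiplies.
welfare_capacity = rng.normal(loc=1.0, scale=0.2, size=n)

ev_constant = constant_weight * welfare_capacity.mean()
ev_distribution = (weight_samples * welfare_capacity).mean()

print(ev_constant, ev_distribution)
```

Note the two EVs differ: a right-skewed distribution over the weight has a mean above its median, so replacing a point estimate with a distribution can shift the expected value, not just widen the uncertainty around it.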
We may be able to give some arguments for some bounds on the weights, and some structural constraints on how the weights relate to each other
I agree.
Within these constraints, the choices are very subjective and highly arbitrary.
How would you choose the distributions for the model weights in a way that’s not itself arbitrary? E.g. how do you choose their forms and parameters in a way that’s not arbitrary?
I do think imprecise credences have a similar problem of deciding which distributions to include in their representor. I think ultimately we need to make some arbitrary choices and should accept some arbitrariness, but we can be more or less arbitrary, or stop when it’s no longer decision-relevant. Maybe sometimes we can hit a fixed point or see some kind of convergence in the extra steps we’re taking.
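The “stop when it’s no longer decision-relevant” test can be sketched concretely (every distribution and the toy payoff below are illustrative assumptions, not anyone’s actual model): pick several candidate distributions for the contested weight, and check whether the decision they imply agrees:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# A crude representor: several candidate distributions for the same
# uncertain weight (all choices here are illustrative).
candidates = {
    "lognormal": rng.lognormal(np.log(0.1), 1.0, n),
    "uniform": rng.uniform(0.01, 0.5, n),
    "beta-scaled": 0.5 * rng.beta(2, 5, n),
}

# Toy payoff: benefit proportional to the weight, minus a fixed cost.
def expected_value(weight_samples, cost=0.03):
    return (weight_samples - cost).mean()

evs = {name: expected_value(s) for name, s in candidates.items()}
print(evs)

# If every distribution in the representor gives the same sign,
# the remaining arbitrariness is not decision-relevant here.
print(all(ev > 0 for ev in evs.values()))
```

When the sign (or ranking of options) agrees across the whole set of candidates, further effort on pinning down the exact distribution can stop; when it flips, that is where the arbitrariness actually bites.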
On there potentially being no fact of the matter, this may be helpful. It goes further than the issue of imprecise credences/​EVs.
How would you choose the distributions for the model weights in a way that’s not itself arbitrary? E.g. how do you choose their forms and parameters in a way that’s not arbitrary?
I agree the distributions for the model weights would be arbitrary to some extent. However, I think probability density functions (PDFs) should be precise at a fundamental level, which implies precise expected values (EVs). If 2 PDFs feel exactly as plausible as each other, I would simply use the mean between them.
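One way to make “use the mean between them” concrete (both candidate PDFs below are illustrative): the 50/50 mixture of two equally plausible PDFs has an EV equal to the mean of their EVs, by linearity of expectation, so the result is still a single precise number:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Two candidate PDFs for the same quantity, felt to be equally
# plausible (both choices are illustrative).
samples_a = rng.normal(loc=2.0, scale=1.0, size=n)
samples_b = rng.lognormal(0.0, 0.5, size=n)

# 50/50 mixture: flip a fair coin per draw to pick which PDF to use.
pick_a = rng.random(n) < 0.5
mixture = np.where(pick_a, samples_a, samples_b)

ev_a, ev_b = samples_a.mean(), samples_b.mean()
ev_mixture = mixture.mean()

# By linearity of expectation, ev_mixture ~= (ev_a + ev_b) / 2.
print(ev_mixture, (ev_a + ev_b) / 2)
```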
I am not sure it matters whether one endorses precise EVs or not. In practice, I still like to test different EVs when the underlying PDF is very arbitrary and uncertain, as is the case for PDFs of welfare ranges. In such cases, I suspect decreasing uncertainty to find the best options has higher EV than the supposedly imprecise EVs of going with the current best option.
On there potentially being no fact of the matter, this may be helpful. It goes further than the issue of imprecise credences/​EVs.
Here is a seemingly great summary from Gemini.