Nice! It strikes me that in figure 1, information is propagating upward, from indicator to feature to stance to overall probability, and so the arrows should also be pointing upward.

I think the view (stance?) I am most sympathetic to is that all our current theories of consciousness aren't much good, so we shouldn't update very far away from our prior, but that picking a prior is quite subjective, and so it is hard to make collective progress on this when different people might just have quite different priors for P(current AI consciousness).
Hi Oscar, thanks. Yes! Indicator evidence is inserted at the bottom and flows upward. But the prior flows down to set expectations, so it's a common convention to draw the arrows that way. Informally, you can read the diagram as each node splitting into subnodes; the downward arrows aren't meant to imply that information doesn't travel up.
On to your main point: there's no question that this is a preparadigmatic field where progress and consensus are difficult to find, and rightly so given the state of the evidence.
A few thoughts on why this research is worth pursuing now, despite the uncertainty:
First, we want a framework in place that can help transition towards a more paradigmatic science of digital minds as the field progresses. Even if you're sceptical about current reliability, we think that having a model that can modularly incorporate new evidence and judgements could serve as valuable infrastructure for organising future findings.
Second, whilst it remains uncertain which theory is true, we found that operationalising specific stances was often less fuzzy than expected. Many theories make fairly specific predictions about conscious versus non-conscious systems, giving us firmer ground for stance-by-stance analysis. (Though you might still disagree with any given stance, of course!)
Your concern about theory quality might itself be worth modelling as a stance (we thought of something like "Stance X", though it could be called "Theoretical Scepticism") under which all features provide only very weak support. That would yield small updates from the prior regardless of the system. You can already see this uncertainty playing out in the model: notice how wide the bands are for chickens vs humans (Figure 3). Several theories disagree substantially about animal consciousness, which arguably reflects concerns about theory quality.
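To make the mechanism concrete, here is a toy sketch (not the paper's actual model, and with made-up likelihood ratios): if a sceptical stance assigns each feature a likelihood ratio near 1, the posterior barely moves from the prior, whereas a stance with confident likelihoods moves it substantially.

```python
# Toy illustration of why a "Theoretical Scepticism" stance yields
# small updates: Bayesian updating on odds with per-feature
# likelihood ratios (all numbers here are hypothetical).

def posterior(prior, likelihood_ratios):
    """Update a prior probability by a list of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

confident_stance = [3.0, 2.0, 0.5]    # features carry real evidence
sceptical_stance = [1.1, 0.95, 1.05]  # ratios near 1: weak evidence

p0 = 0.2  # illustrative prior
print(posterior(p0, confident_stance))  # moves well away from 0.2
print(posterior(p0, sceptical_stance))  # stays close to 0.2
```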
That said, we're deliberately cautious about making pronouncements on consciousness probability or evidence strength. We see this as a promising way to start characterising these values rather than offering definitive answers.
(Oh, and regarding priors: setting them is hard, and robustness matters. Appendix E and Figures 9-11 may be helpful here. The key finding is that while absolute posteriors are highly prior-dependent (as you'd expect), the comparative results and the direction of updating are fairly robust across priors.)
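A minimal sketch of what that robustness claim amounts to (with hypothetical likelihood ratios, not the paper's numbers): as the prior varies, the absolute posteriors shift, but the ordering between systems and the direction of the update are preserved.

```python
# Toy sketch: comparative ordering is stable across priors even
# though absolute posteriors are not. Likelihood ratios are made up.

def posterior(prior, lr):
    """Bayesian update of a prior probability by a likelihood ratio."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

lr_strong, lr_weak = 8.0, 2.0  # hypothetical evidence for two systems

for prior in (0.05, 0.3, 0.7):
    p_strong = posterior(prior, lr_strong)
    p_weak = posterior(prior, lr_weak)
    # Absolute values depend heavily on the prior, but the ordering
    # and the upward direction of the update hold throughout.
    assert p_strong > p_weak > prior
```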
Yep, that all makes sense, and I think this work can still tell us something; it just doesn't update me much given the lack of compelling theories or much consensus in the scientific/philosophical community. This is harsher than what I actually think, but directionally, it has the feel of "cargo cult science": it has a fancy Bayesian model and lots of numbers and so forth, but if it is all built on top of philosophical stances I don't trust, then it doesn't move me much. That said, it is still interesting, e.g. how wide the range for chickens is.