Executive summary: This report presents the Digital Consciousness Model, a probabilistic framework combining multiple theories of consciousness, and concludes that current (2024) large language models are unlikely to be conscious, though the evidence against consciousness is limited and highly sensitive to theoretical assumptions.
Key points:
- The Digital Consciousness Model aggregates judgments from 13 diverse stances on consciousness using a hierarchical Bayesian model informed by over 200 indicators.
- Starting from a uniform prior of ⅙, the aggregated evidence lowers the probability that 2024 LLMs are conscious relative to that prior.
- The evidence against LLM consciousness is substantially weaker than the evidence against consciousness in very simple AI systems such as ELIZA.
- Different stances yield sharply divergent results: cognitively oriented perspectives assign higher probabilities, while biologically oriented perspectives assign much lower ones.
- The model’s outputs are highly sensitive to prior assumptions, so the authors emphasize relative comparisons and evidence shifts rather than absolute probabilities (see the sketch after this list).
- The aggregated evidence strongly supports the conclusion that chickens are conscious, though some stances that emphasize advanced cognition assign them low probabilities.
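To make the point about prior sensitivity concrete, here is a minimal sketch of a Bayesian update in odds form. This is not the report's hierarchical model, and the Bayes factors below are hypothetical; it only shows how the same evidence shift lands on different posteriors depending on the starting prior.

```python
# Minimal sketch (not the report's actual model): how a Bayes factor
# shifts a prior probability, and how the posterior depends on that prior.

def posterior(prior: float, bayes_factor: float) -> float:
    """Update P(conscious) given a likelihood ratio
    P(evidence | conscious) / P(evidence | not conscious)."""
    odds = prior / (1 - prior)        # convert prior probability to odds
    post_odds = odds * bayes_factor   # Bayes' rule in odds form
    return post_odds / (1 + post_odds)

# Hypothetical Bayes factors, for illustration only.
for prior in (1 / 6, 1 / 2):
    for bf in (0.2, 1.0, 5.0):
        print(f"prior={prior:.3f}  BF={bf:4.1f}  "
              f"posterior={posterior(prior, bf):.3f}")
```

With a ⅙ prior and a Bayes factor below 1, the posterior falls below the prior, which matches the direction of the report's headline result for 2024 LLMs; starting from ½ instead yields a very different absolute number from the same evidence, which is why the authors stress relative comparisons.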
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.