Executive summary: This exploratory post argues that a better understanding of how to elicit valid intuitions, especially in forecasting, moral reasoning, and preference formation, could advance both human rationality and AI alignment, and suggests that EA-aligned research in psychology and cognitive science could develop methods to systematically improve these processes.
Key points:
Eliciting intuitions may be a core cognitive skill in both moral reasoning and decision-making, blending cold analysis with emotional and metacognitive insight; understanding and improving this skill is underexplored within EA psychology.
Thought experiments and heuristics such as simulation and affect play a central role in how people generate intuitions, but these processes are subject to systematic and potentially predictable distortions rooted in how the brain constructs knowledge in the moment.
This research could support AI alignment efforts, especially in understanding how to extrapolate or interpret human preferences, by mapping analogies between human and AI reasoning processes (e.g., chain-of-thought reasoning in language models).
A preliminary forecasting study used self-reported strategy usefulness and betting markets to test whether prompting techniques (e.g., decomposition, reference classes, robustness checks) are associated with forecast accuracy, but found no statistically significant associations.
The study's limitations point to broader challenges in measuring intuition quality, including potential flaws in the taxonomy of techniques, self-report biases, and the influence of personality traits.
Future directions include experimental philosophy projects to differentiate between biases and genuine moral intuitions, and interdisciplinary efforts to operationalize and compare the quality of preference expression methods.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.