Presumptive Listening: sticking to familiar concepts and missing the outer reasoning paths



Context for posting link:

Sixteen months ago, I read a draft by a researcher whom few in AI Safety know about, Forrest Landry.

Forrest claimed something counter-intuitive and scary about AGI safety. He argued toward a stark conclusion, claiming he had nailed the coffin shut. I felt averse to the ambiguity of the prose and the (self-confirming?) confidence of the author.

There was no call to action – if the conclusion was right, were we not helpless to act?
Yet profound points were made, and they stuck. I could not dismiss it.


But busy as I was, running research programs and all that, the matter kept slipping aside. It took a mutual contact – who had passed on the draft, and had their own doubts – to encourage me to start summarising the arguments for LessWrong.

Just before that, I had tried to list where our like-minded community fails to “map the territory”. I found at least six blindspots where we tend to overlook aspects relevant to whether the work we scale up, including in AI safety, ends up having a massive negative impact. Yet if we could bridge the epistemic gap to different-minded outsiders, they could point those aspects out to us.

Forrest’s writings had a hippie holistic vibe that definitely marked him as a different-minded outsider. Drafting my first summary, I realised the arguments fell under all six blindspots.

Forrest wrote back feedback, which raised new questions for me. We set up a call.


Eleven months ago, Forrest called. It was late evening. I said I wanted to probe the arguments. Forrest said this would help me deal with common counter-arguments, so I would know how to convince others in the AI Safety community. I countered that my role was to find out whether his arguments made sense in the first place. We agreed that in practice, we were aligned.

Over three hours, Forrest answered my questions. Some answers made clear sense. Others slid past like a word salad of terms I could not grok (the terms seemed to be defined with respect to each other). This raised new questions, many of which Forrest dismissed as side-tangents. It felt like being forced blindly down a narrow valley of argumentation – by some unknown outsider.


That was my perspective as the listener. If you click the link, you will find Forrest’s perspective as the explainer. The text is laid out in his precise research note-taking format.


I have since probed at, nuanced, and cross-checked the arguments to understand them deeply. Forrest’s methods of defining concepts and their argumentative relations turned out to be sensible – they felt weird at first only because I was unfamiliar with them.

Now I can relate from the side of the explainer. I get on calls with technical researchers who are busy, impatient, disoriented, counter-argumentative, and straight-up averse to getting into this shit – just like I was!

The situation would be amusing, if it were not so grave.


If you want to probe at the arguments yourself, please be patient – perhaps start here.

If you want to cut to the chase instead – say, to obtain a short, precisely formalised, and intuitively followable summary of the arguments – that is not going to work.

Trust me, I tried: I wrote seven summaries.
Each needed much one-on-one clarification of the premises, term definitions, and reasoning steps before it became comprehensible even to the few people who were patient enough to ask clarifying questions, paraphrase the arguments back, and listen curiously.

Better to take months to dig further, whenever you have the time, as I did.



If you want to inquire further, there will be a project just for that at AI Safety Camp.

Crossposted from LessWrong (−14 points, 8 comments)