[P]eople also talk about ‘belief-mapping’ software as if it’s a well-established micro-genre. I’m still not exactly sure what this is, but here’s something that sounds to me like ‘belief-mapping’: I’d love a way to log my credences in different things, as well as richer kinds of beliefs like confidence intervals and distributional forecasts. A spreadsheet plus Foretold.io can do all of this so far. But it could also be neat to express how beliefs must relate. For instance, if I update my credence in (A), and I’ve already expressed my credence in (B|A), then the software can tell me what my new credence in (B) should be, and update it if that seems reasonable. I could also say things like: “my credence in the disjunction A or B or C is 80% — so when I change one of A or B or C, please adjust the other two to add back up to 80%”. Or suppose I give some probability distribution over time for a “when will…” question, and then time passes and the event doesn’t happen; the tool could then renormalize my forecast over the remaining time for me.
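To make the “beliefs must relate” idea above concrete, here is a minimal sketch in plain Python of the three propagation rules described — not the API of any existing tool, and all function names are hypothetical. Note that computing P(B) from P(A) and P(B|A) also requires a credence in P(B|¬A); the disjunction rule below additionally assumes the options are treated as mutually exclusive.

```python
# Hypothetical sketch of belief-propagation rules a 'belief-mapping' tool might apply.
# Not based on any existing tool's API.

def propagate_b(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)(1 - P(A)).
    Needs a credence in B given not-A as well, not just P(B|A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

def rebalance_disjunction(credences: dict[str, float], changed: str,
                          total: float = 0.80) -> dict[str, float]:
    """After one option is manually changed, rescale the others so the
    (assumed mutually exclusive) options still sum to `total`."""
    remaining = total - credences[changed]
    others = {k: v for k, v in credences.items() if k != changed}
    scale = remaining / sum(others.values())
    return {changed: credences[changed], **{k: v * scale for k, v in others.items()}}

def condition_on_no_event_yet(forecast: dict[int, float],
                              current_year: int) -> dict[int, float]:
    """'When will X happen?' forecast by year: drop the mass on years that
    have already passed without the event, then renormalize the rest."""
    future = {yr: p for yr, p in forecast.items() if yr >= current_year}
    z = sum(future.values())
    return {yr: p / z for yr, p in future.items()}

# Example: P(A)=0.6, P(B|A)=0.9, P(B|~A)=0.2  ->  P(B)=0.62
print(propagate_b(0.6, 0.9, 0.2))
# A raised to 0.4; B and C rescaled so the disjunction stays at 0.8
print(rebalance_disjunction({"A": 0.4, "B": 0.3, "C": 0.2}, changed="A", total=0.8))
# It's now 2026 and the event hasn't happened; mass on 2025 is redistributed
print(condition_on_no_event_yet({2025: 0.1, 2026: 0.3, 2027: 0.6}, current_year=2026))
```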
I wrote a forum post about what I call “epistemic mapping” as well as a related post on “Research graphing” to support AI policy research (and other endeavors). I’ve begun working on a new, more-focused demo (although I’m still skeptical that it will get any traction/interest). I also just wrote up a shortform response to Leverage’s report on “argument mapping” (which multiple people have directed me towards, with at least one of those people citing it as a reason to be skeptical about “argument mapping”). I’ve even published a relevant article on this topic (“Complexity Demands Adaptation...”).
In short, I’m a big advocate for trying new research and analysis methods under the umbrella of what you might broadly call “belief-mapping.”
Despite my enthusiasm for some of this, I am honestly quite skeptical of attempts to create end-to-end calculations for analyses which feature very interconnected, dynamic, hard-to-evaluate, and hard-to-explicate variables—which applies to most of the world beyond a few disciplines. At least, I’m skeptical of tools which try to do much more than Squiggle/Excel can already do (e.g., mapping the relationships between a lot of credences). In my view, the more important and/or fixable failure modes are “I can’t remember all the reasons—and sub-reasons—I had for believing X,” and “I don’t know other people’s arguments (or responses to my counterarguments) for believing X.”[1]
I’ve been meaning to write up my thoughts on this kind of proposal for a few weeks (perhaps as part of a “Why most prior attempts at ‘argument mapping’ have failed, and why that doesn’t spell doom for all such methods” post series), but am also skeptical that it will reach a sufficiently large audience to make it worthwhile. If someone were actually interested, I might try to organize my thoughts more quickly and neatly, but otherwise IDK.
While paragraphs and bullet points can and do mitigate this to some extent, they really struggle to deal with complex debates (for a variety of reasons I have listed on various Notion pages but have yet to write up in a published post or comment).