I knew a bit about misinformation and fact-checking in 2017. AMA, if you’re really desperate.

In 2017, I did my Honours research project on whether, and how much, fact-checking politicians' statements influenced people's attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, "Honours" meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: *Does truth matter to voters? The effects of correcting political misinformation in an Australian sample*. This was all within the domains of political psychology and cognitive science.

During that year, and in a unit I completed earlier, I learned a lot about:

  • how misinformation forms

  • how it can be "sticky"

  • how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions

  • ways of counteracting, or attempting to counteract, these issues

    • E.g., fact-checking, or warning people that they may be about to receive misinformation

  • various related topics in the broad buckets of political psychology and how people process information, such as the impacts of "falsely balanced" reporting

The research that's been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.

I’ve con­sid­ered writ­ing some­thing up about this (be­yond those brief com­ments), but my knowl­edge of these top­ics is too rusty for that to be some­thing I could smash out quickly and to a high stan­dard. So I’d like to in­stead just pub­li­cly say I’m happy to an­swer ques­tions re­lated to those top­ics.

I think it'd be ideal for questions to be asked publicly, so others might benefit, but I'm also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).

Disclaimers:

  • In 2017, I probably wasn't adequately concerned by the replication crisis, and many of the papers I was reading were from before psychology's attention was drawn to it. So we should assume some of my "knowledge" is based on papers that wouldn't replicate.

  • I was never a "proper expert" in those topics, and I haven't focused on them since 2017. (I ended up with First Class Honours, meaning that I could have done a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you're interested in.

    • If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.

    • If you think you have more, or more recent, expertise in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!

(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)