Building Cooperative Epistemology (Response to “EA has a Lying Problem”, among other things)

This is in response to Sarah Constantin's recent post about intellectual dishonesty within the EA community.

I roughly agree with Sarah's main object-level points, but I think this essay doesn't sufficiently embody the spirit of cooperative discourse it's trying to promote. I have a lot of thoughts here, but they build off a few existing essays. (There's been a recent revival over on Less Wrong attempting to make it a better locus for high-quality discussion. I don't know if it's especially succeeded, but I think the concepts behind that intended revival are very important.)

  1. Why Our Kind Can't Cooperate (Eliezer Yudkowsky)

  2. A Return to Discussion (Sarah Constantin)

  3. The Importance of [Less Wrong, OR another Single Conversational Locus] (Emphasis mine) (Anna Salamon)

  4. The Four Layers of Intellectual Conversation (Eliezer Yudkowsky)

    I think it's important to have all of those concepts in context before delving into:

  5. EA has a lying problem (Sarah Constantin)

I recommend reading all of those. But here's a rough summary of what I consider the important bits. (If you want to actually argue with these bits, please read the actual essays before doing so, so you're engaging with the full substance of the idea.)

  • Intellectuals and contrarians love to argue and nitpick. This is valuable: it produces novel insights, and keeps us honest. BUT it makes it harder to actually work together to achieve things. We need to understand how working-together works on a deep enough level that we can do so without turning into another random institution that's lost its purpose. (See Why Our Kind… for more)

  • Lately, people have tended to talk on social media (Facebook, Tumblr, etc.) rather than in formal blogs or forums that encourage longform discussion. This has a few effects. (See A Return to Discussion for more)

    1. FB discussion is fragmented: it's hard to find everything that's been said on a topic. (And Tumblr is even worse.)

    2. It's hard to know whether OTHER people have read a given thing on a topic.

    3. A related point (not necessarily in "A Return to Discussion") is that social media incentivizes some of the worst kinds of discussion. People share things quickly, without reflection. People read and respond to things in 5-10 minute bursts, without having time to fully digest them.

  • Having a single, longform discussion area that you can expect everyone in an intellectual community to have read makes it much easier to build knowledge. (And most of human progress is due not to humans being smart, but to being able to stand on the shoulders of giants.) Anna Salamon's "Importance of a Single Conversational Locus" is framed around x-risk, but I think it applies to all aspects of EA: the problems the world faces are so huge that solving them requires a higher caliber of thinking and knowledge-building than we currently have.

  • In order to make true intellectual progress, you need people to be able to make critiques. You also need those critics to expect their criticism to in turn be criticized, so that the criticism is high quality. If a critique turns out to be poorly thought out, we need shared, common knowledge of that so that people don't end up rehashing the same debates.

  • And finally, (one of) Sarah's points in "EA has a lying problem" is that, in order to be different from other movements and succeed where they failed, EA needs to hold itself to a higher standard than usual. There's been much criticism of, say, Intentional Insights for doing sketchy, truth-bendy things to gain prestige and power. But plenty of "high status" people within the EA community do similar things, even if to a different degree. We need to be aware of that.

    I would not argue as strongly as Sarah does that we shouldn't do it at all, but it's worth periodically calling each other out on it.

Cooperative Epistemology

So my biggest point here is that we need to be more proactive and mindful about how discussion and knowledge are built within the EA community.

To succeed at our goals:

  • EA needs to hold itself to a very high intellectual standard (probably higher than we currently have, in some sense anyway).

  • Factions within EA need to be able to cooperate and share knowledge. Both object-level knowledge (e.g. how cost-effective is AMF?) and meta/epistemic knowledge like:

    1. How do we evaluate messy studies?

    2. How do we discuss things online so that people actually put effort into reading and contributing to the discussion?

    3. What kinds of conversational/debate norms lead people to be more transparent?

  • We need to be able to apply all the knowledge to go out and accomplish things, which will probably involve messy political stuff.

I have specific concerns about Sarah's post, which I'll post in a comment when I have a bit more time.