Idea: statements on behalf of the general EA community

Normally we think of cause areas as problems where each of us picks one or two to approach with serious individual effort. But on social and political issues, if the EA community collectively praises or condemns something, that may have a substantial social effect. Gathering our views into a clear document can heighten that impact.

I envision something similar to the open letters we commonly see circulated by members of various professions and societies.

Here’s a simplified outline to show an example of what I mean:

----------------------------------------------------------

Example Statement

Democracy Assistance Efforts Must Be Continued

The president’s FY20XX budget request asks for a decrease in American funding for democracy assistance. This shortfall will reduce our ability to monitor elections for freedom and fairness, assist political parties, … etc.

Numerous foreign policy experts say that these efforts are effective at promoting democracy (cite) (cite). The value of democracy is in turn supported by research which indicates that it protects human rights (cite) and increases economic growth (cite). Democracy assistance funding seems to do more good than many alternative uses of the money, such as tax cuts or other forms of spending (cite).

The EA community spans many people with varied backgrounds and political perspectives, but we generally agree that cutting democracy assistance funding is a bad idea for America and for the world. We ask Congress to deny the president’s request.

Signed,

[EA organization]

[EA organization]

[Prominent EA person]

[Important person not normally in EA who wants to sign on anyway]

[EA layperson]

etc.

----------------------------------------------------------

Compared to how open letters are usually written, I expect we would be careful to include far more serious evidence and argument backing up our claims. These could be attached as annexes to a conventionally formatted open letter, which could add great persuasive force.

These letters could be used for other issues besides politics. I think they could help with just about anything that requires collective effort. For instance, we could make a public statement that we are abandoning the handshake because it spreads too much disease.

How could this go wrong?

The open letters could be released too rarely, or never. Then we forgo the opportunity to make progress on changing people and institutions, we forgo the positive attention we could get from other people who agree with us, and we forgo the opportunity for more basic publicity.

In times of great controversy, statements can be less a matter of achieving change than of avoiding looking bad. If organizations don’t make statements on pressing current events, people may criticize them for staying silent. I think this is a minor consideration (just let them complain; most people won’t care), but it is still worth noting.

Meanwhile, if we release statements excessively, there could be a variety of problems.

First, we could promote points of view that are actually harmful, especially if we proceed without proper vetting and debate.

Second, any EA who disagrees with the statement may (quite understandably) feel alienated from the EA community. We could mitigate this by putting credible EA disagreement on the open letter itself, though I’m not sure whether that is a good idea: we should generally avoid marginalizing the voices of other EAs, but including their dissent would weaken the letters’ impact. I don’t know whether there will ever be truly unanimous agreement among EAs, even when research and experts all point the same way; some vaguely involved people will always complain and demand to be included.

Third, a statement could hurt our reputation among outsiders who disagree with it.

Fourth, it could create a harsher expectation among people outside EA for everything that we don’t speak up about. E.g., “you released a statement on X, so why are you silent on Y?”

How should this process be handled?

The way open letters normally work, anyone can simply start one, and it circulates and gathers signatures depending on how popular it is. EAs could have made such open letters already, but it seems that people have either been reluctant to do so or have not thought of it.

However, I think this would be a bad model for EA and should not be normalized. The main reason is the unilateralist’s curse: it is too easy for any one person to make an unsubstantiated or controversial statement and gather signatures from a significant minority, or even a small majority, of EAs. If we normalize this behavior, we will end up making too many statements that do not properly represent the community consensus, and the associated controversy within EA will get people angry and waste our time.
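To see why the numbers work against us, here is a rough, purely illustrative calculation (the figures are assumptions, not estimates): if each of n community members independently has probability p of wrongly judging some statement to be worth launching, the chance that at least one of them launches it is 1 − (1 − p)^n. With n = 50 and p = 0.02, that is already about 64%, even though each individual is being fairly careful.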

Instead of letting someone like me (who already cares about democracy assistance) release such a statement, control what it says, and gather signatories, it is better for a fixed, neutral body to perform this service. Since that body is preselected, there won’t be bias from an author trying to promote their own cause. And since only one group is designated with this responsibility, there won’t be a unilateralist’s curse. Finally, it will have some standing and reputation within the EA community, so it will be more strongly pressed not to deviate from the rest of us, and will have more legitimacy.

EA organizations sometimes put out their own statements on general current events. For instance, I noticed one or two (does Twitter count?) EA organizations make official statements following the murder of George Floyd. These came out after a sustained period of public controversy, and I don’t think they will accomplish anything notable in terms of political reform. This also seems like a poor way to handle the situation: individual organizations can also fall victim to the unilateralist’s curse, and people can improperly take such statements to be representative of EA writ large. EA organizations should stick to their areas of expertise, be cautious before making statements about other issues, and not be pressured into doing so. And it is simply redundant labor for many organizations to separately investigate an issue that is outside their typical wheelhouses. It seems healthier and safer all round for them to just sign onto a statement that was produced more carefully to robustly represent EAs. The individual organization is then less subject to criticism for flaws in the statement, while the statement itself has a more potent impact because it gathers all our voices into an authoritative document.

There could even be a norm that only the fixed body makes such public statements, so that other organizations can say “we do not make our own public statements; instead we sign onto this procedure, which is inclusive of the rest of the EA community.” That would help them ward off pressure and could prevent them from making ill-advised statements that might reflect improperly on EAs in general.

An exception is when organizations promote issues which are squarely within their wheelhouse. For instance, FLI produced two open letters, one opposing autonomous weapons and another providing research priorities for robust and beneficial AI. Though I disagree with one of these letters, I tentatively feel that the process is OK, since EA organizations should be procedurally free to make progress within their cause areas however they see fit.

Regardless, my primary interest here is in amplifying a credible EA voice by adding a new statement procedure; changing what individual EA organizations do is of lesser and more dubious importance.

What would this fixed body look like?

It should represent the EA community fairly well. It could simply be the Center for Effective Altruism, or maybe some other community group.

Fixed doesn’t necessarily mean centralized. The process could be run by community voting, and the statement might be written as a collective Google Document, as long as there is a clear and accepted procedure for doing it.

We could elect someone who has the job of running this process.

What would we write for an issue where EAs truly disagree with one another?

We could stay silent, or we could write an inoffensive letter that emphasizes our few areas of common ground. The latter may also include explanations of our areas of internal disagreement and a general plea for other people to recognize that there are good points on both sides of the debate.