EA Forum Prize: Winners for March 2019

CEA is pleased to announce the winners of the March 2019 EA Forum Prize!

In first place (for a prize of $999): “SHOW: A framework for shaping your talent for direct work”, by Ryan Carey and Tegan McCaslin.

In second place (for a prize of $500): “The case for delaying solar geoengineering research”, by Halstead.

In third place (for a prize of $250): “The Case for the EA Hotel”, by Halffull.

We also awarded prizes in November, December, January, and February.

What is the EA Forum Prize?

Certain posts exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum’s users.

About the winning posts

“SHOW: A framework for shaping your talent for direct work” draws on the authors’ experiences working with EA organizations to produce useful career advice (which also resonated with several Prize voters). The post uses a highly readable blend of personal anecdotes and outside examples, while including enough strategic analysis that readers can understand the authors’ arguments (and decide whether they agree).

Expected-value calculations are a key part of EA, but humans still tend to think in heuristics, so heuristic-driven tools like the SHOW framework have a lot of practical use. Personally, I can imagine sharing the post with almost anyone who tells me they want to move into direct work.

While geoengineering isn’t one of the most popular EA cause areas, many people within EA think of it as a promising idea which merits further exploration. In 2016, Open Phil made a $2.5 million grant to support research on solar geoengineering technology, as well as the political and social implications of geoengineering.

In “The case for delaying solar geoengineering research”, Halstead pushes back on the technical side of this research, arguing that:

  1. Due to geopolitical tension, we probably won’t want to actually use geoengineering in the next few decades.

  2. Research into weather alteration could lead us to discover dangerous technology, which means that we should avoid or delay the research if we can.

  3. And in fact, we can avoid or delay; we have a lot of time to observe the results of today’s climate change mitigation efforts before conducting high-risk geoengineering research.

His article adds considerable detail beyond what appears in the Open Phil writeup, and represents a change from his thoughts on the issue in 2018. The quality of Halstead’s analysis and his willingness to reconsider his past positions make this piece an excellent example of what we’d like to see on the Forum.

“The Case for the EA Hotel” is another instance of strong analysis applied to a speculative cause area. Some of the post’s best features:

  • The author starts with a general theory about the needs of the EA community before explaining how the EA Hotel might address these needs more effectively than other options. Discussing the nature of a problem before suggesting solutions helps readers:

    • Decide whether they agree that the proposed problem is really a concern.

    • Understand whether the author’s suggestions actually address the problem.

  • The arguments are clear and well-formatted, making it easy to move back and forth through the piece and consider each point in turn.

  • The author uses an “epistemic status” note and biographical information to help readers understand the context of his argument. We’d love to see more epistemic statuses on Forum posts. (And more biographical information!)

    • His note that the post isn’t meant to be “balanced” is especially helpful; if you want to write something persuasive for the Forum, it’s good to acknowledge it as persuasion.

The voting process

All posts published in the month of March qualified for voting, save for those written by CEA staff and Prize judges.

Prizes were chosen by five people. Two of them are the Forum’s moderators (Aaron Gertler and Julia Wise; Denise Melchin took this month off).

The others were the three highest-karma users at the time the new Forum was launched (Peter Hurford, Joey Savoie, and Rob Wiblin).

Voters recused themselves from voting on posts written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.

Winners were chosen by an initial round of approval voting, followed by a runoff vote to resolve ties.

Note: This month, Tegan McCaslin’s article on brain architecture (posted by “AI Impacts”, the organization she worked for at the time) received enough votes to be entered into a runoff vote for a prize.

However, because Tegan was already set to be a recipient of the first-place prize, we elected to leave her second article out of the runoff process. Nevertheless, we wanted to acknowledge her outstanding contributions!

Feedback

If you think the Prize has changed the way you read or write on the Forum, or you have ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.