Habryka

Karma: 2,197
• Note: We (the Long Term Future Fund) will likely publish our writeups for the last round of grants within the next few days, which should give applicants some more data on what kind of grants we are likely to fund in the future.

Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal)

9 Sep 2019 4:14 UTC
27 points
• At least in Will’s model, we are among the earliest human generations, so I don’t think this argument holds much weight, unless you posit a very fast-diminishing prior (which so far nobody has done).

• I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off, at least in part because the definition of hingyness didn’t seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.

I also think a major issue with the current definition is that I don’t know of any technology or ability to reliably pass on resources to future centuries. This introduces a strong natural discount factor into the system, and seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely failing, as illustrated in Robin Hanson’s original “giving later” post).

• 5 Sep 2019 4:41 UTC
13 points
in reply to: Kit's comment

While I agree with you that it is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely’s suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

I do think that this focus is the part of the post that I am least satisfied by, and that makes it hardest to engage with, since I don’t really know why we care about the question of “are we in the most influential time in history?”. What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and because the answer to that question makes overall only a small difference to the total magnitude of the impact of any individual’s actions).

• I was mostly skeptical because the people involved did not seem to have any experience doing any kind of AI Alignment research, nor to themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.

To be clear, I have broadly positive impressions of Toon and think the project had promise; it’s just that the team didn’t actually have the skills to execute on it, which I think few people have.

• [Epistemic status: Talked to Geoff a month ago about the state of Leverage, trying to remember the details of what was said, but not super confident I am getting everything right]

My sense is that I would not classify Leverage Research as having been disbanded, though it did scale down quite significantly and does appear to have changed shape in major ways. Leverage Research continues to exist as an organization with about 5 staff, and continues to share a very close relationship with Paradigm Academy, though I do believe that those organizations have become more distinct recently (no longer having any shared staff, and no longer having any shared meetings, but still living in the same building and both being led by Geoff).

• Yes! Due to a bunch of other LTFF things taking up my time, I was planning to post my reply to this around the same time as the next round of grant announcements.

• In his email to us he only mentioned time constraints (in particular, I think his other commitments at Bellroy and helping with MIRI seemed to ramp up around that time, though I also think the fund took more time than he had initially expected).

• I think the Information security careers for GCR reduction post is a relatively bad choice for first place, and it made me update reasonably strongly downwards on the signal of the prize.

It’s not that the post is bad, but I didn’t perceive it to contribute much to intellectual progress in any major way, and to me it mostly parsed as an organizational announcement. The post obviously got a lot of upvotes, which is good because it was an important announcement, but I think a large part of that is because it was written by Open Phil (which is what makes it an important announcement) [Edit: I believe this less strongly now than I did at the time of writing this comment. See my short thread with Peter_Hurford]. I expect the same post written by someone else would not have received much prominence, and would have been very unlikely to be selected for a prize.

I think it’s particularly bad for posts to get prizes that would have been impossible to write without coming from an established organization. I am much less confident about this for posts that could have been written by someone else, but that happened to have been written by someone in a full-time role at an EA organization.

• Thanks for the response!

I think you misunderstood what I was saying at least a bit, in that I did read the post in reasonably close detail (about half an hour of reading in total) and was aware of most of the points in your comment.

I will try to find the time to write a longer response that explains my case in more detail, but can’t currently make any promises. I expect there are some larger inferential distances here that would take a while to cross for both of us.

• First of all, I think evaluations like this are quite important and a core part of what I think of as EA’s value proposition. I applaud the effort and dedication that went into this report, and would like to see more people trying similar things in the future.

Tee Barnett asked me for feedback in a private message. Here is a very slightly edited version of my response (hence why it is more off-the-cuff than what I would usually post on the forum):

-------

Hmm, I don’t know. I looked at the cost-effectiveness section and feel mostly that the post is overemphasizing formal models. Like, after reading the whole thing, and looking at the spreadsheet for 5 minutes, I am still unable to answer the following core questions:

• What is the basic argument for Donational?

• Does that argument hold up after looking into it in more detail?

• How does the quality of that argument compare against other things in the space?

• What has Donational done so far?

• What evidence do we have about its operations?

• If you do a naive, simple Fermi estimate of Donational’s effectiveness, what is the bottom line?
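To illustrate what I mean by a bottom-line Fermi estimate, here is a minimal sketch for a donation-pledge platform. Every input below is an invented placeholder for illustration only, not a claim or datum about Donational:

```python
# Purely illustrative Fermi estimate for a generic donation-pledge platform.
# All numbers are hypothetical placeholders, not data about Donational.
pledgers = 1_000              # assumed number of active pledgers
avg_annual_donation = 500     # assumed dollars moved per pledger per year
counterfactual_share = 0.2    # assumed fraction that wouldn't have been donated anyway
effectiveness_multiplier = 2  # assumed gain from redirecting to better charities

money_moved = pledgers * avg_annual_donation
counterfactual_value = money_moved * counterfactual_share * effectiveness_multiplier
print(f"Naive annual counterfactual value: ${counterfactual_value:,.0f}")
```

The point is that four explicit multiplications like these already surface the cruxes (how counterfactual are the donations? how large is the effectiveness gain?) that a more elaborate model can bury.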

I think I would have preferred just one individual writing a post titled “Why I am not excited about Donational” that just tries to explain clearly, like you would in a conversation, why they don’t think it’s a good idea, or how they have come to change their mind.

Obviously I am strongly in favor of people doing evaluations like this, though I don’t think I am a huge fan of the format that this one chose.

------- (end of quote)

On a broader level, I think there might be some philosophical assumptions in the way this post deals with modeling cause prioritization that I disagree with. I have a sense that the primary purpose of mathematical analysis in most contexts is to help someone build a deeper understanding of a problem by helping them make their assumptions explicit and clarifying the consequences of those assumptions, and that after writing down their formal models and truly understanding their consequences, most decision makers are well-advised to throw away the formal models and go with their updated gut-sense.

When I look at this post, I have a lot of trouble understanding the actual reasons why someone might think Donational is a good idea, and what arguments would convince (and maybe have convinced) them otherwise. Instead I see a large amount of rigor being poured into a single cost-effectiveness model, with a result that I am pretty confident could have been replaced by some pretty straightforward Fermi point-estimates.

I think there is nothing wrong with also doing sensitivity analyses and more complicated parameter estimation, but in this context it seems that all of that mostly obscures the core aspects of the underlying uncertainty, and makes it harder both for the reader to understand what the basic case for Donational is (and why it fails), and (in my model) for the people constructing the model to actually interface with the core questions at hand.

All of this doesn’t mean that the tools employed here are never the correct tools to use, but I do think that when trying to produce an evaluation that is primarily designed for external consumption, I would prefer much more emphasis on clear explanations of the basic idea behind the organization, and an explanation of the set of cruxes and observations that would change the evaluator’s mind, instead of this much emphasis on both the creation of detailed mathematical models and the explanation of those models.

Integrity and accountability are core parts of rationality [LW-Crosspost]

23 Jul 2019 0:14 UTC
52 points
About the room for funding question, here are my rough estimates (this is for money in addition to our expected donations of about $1.6M per year):

75% confidence threshold: ~$1M
50%: ~$1.5M
25%: ~$3M