Note: We (the Long Term Future Fund) will likely publish our writeups for the last round of grants within the next few days, which should give applicants some more data on what kind of grants we are likely to fund in the future.
At least in Will’s model, we are among the earliest human generations, so I don’t think this argument holds much weight, unless you posit a very rapidly diminishing prior (which so far nobody has done).
I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.
I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off, at least in part because the definition of hinginess didn’t seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.
I also think a major issue with the current definition is that I don’t know of any technology or reliable mechanism for passing resources on to future centuries. This introduces a strong natural discount factor into the system, and seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely failing, as illustrated in Robin Hanson’s original “giving later” post).
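As a toy illustration (the numbers here are my own assumptions, not anything from Hanson’s post): if resources survive each century-to-century handoff with probability $p$, then the expected fraction remaining after $n$ centuries is

$$p^n, \quad\text{e.g.}\quad 0.8^5 \approx 0.33,$$

so even a fairly optimistic 80% per-century success rate behaves like a steep implicit discount rate over a five-century horizon.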
While I agree with you that max_i(P(century i most leveraged)) is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely’s suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:
Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).
I do think that the focus on max_i(P(century i most leveraged)) is the part of the post that I am least satisfied by, and the part that makes it hardest to engage with, since I don’t really know why we care about the question “are we in the most influential time in history?”. What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future. Both of these are quite far removed from that question, because of the difficulties of sending resources to the future, and because the answer to that question makes overall only a small difference to the total magnitude of the impact of any individual’s actions.
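To make the distinction explicit (this formalization is mine, not from Will’s post): the post analyzes

$$\arg\max_i \; P(\text{century } i \text{ is most influential}),$$

whereas the decision-relevant comparison is closer to

$$v_{\text{now}} \;\;\text{vs.}\;\; p_{\text{transfer}} \cdot \mathbb{E}[v_{\text{future}}],$$

where $v$ is the marginal cost-effectiveness of our resources and $p_{\text{transfer}}$ the probability of successfully passing them on. The answer to the argmax question enters only as one input into the right-hand side.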
I was mostly skeptical because the people involved did not seem to have any experience doing AI Alignment research, nor did they themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.
To be clear, I have broadly positive impressions of Toon and think the project had promise; it’s just that the team didn’t actually have the skills to execute on it, skills which I think few people have.
[Epistemic status: Talked to Geoff a month ago about the state of Leverage, trying to remember the details of what was said, but not super confident I am getting everything right]
My sense is that I would not classify Leverage Research as having been disbanded, though it did scale down quite significantly and does appear to have changed shape in major ways. Leverage Research continues to exist as an organization with about 5 staff, and continues to share a very close relationship with Paradigm Academy, though I do believe that those organizations have become more distinct recently (no longer having any shared staff, and no longer having any shared meetings, but still living in the same building and both being led by Geoff).
A large fraction of “Minding Our Way” is about this: http://mindingourway.com/
Yes! Due to a bunch of other LTFF things taking up my time I was planning to post my reply to this around the same time as the next round of grant announcements.
In his email to us he only mentioned time constraints (in particular, I think his other commitments at Bellroy and helping MIRI seemed to ramp up around that time, though I also think the fund took more time than he had initially expected).
This updated me a bit, and I think I now at least partially retract that part of my comment.
I think the Information security careers for GCR reduction post was a relatively bad pick for first prize, and it made me update reasonably strongly downwards on the signal value of the prize.
It’s not that the post is bad, but I didn’t perceive it as contributing much to intellectual progress in any major way; to me it mostly parsed as an organizational announcement. The post obviously got a lot of upvotes, which is good because it was an important announcement, but I think a large part of that is because it was written by Open Phil (which is what makes it an important announcement) [Edit: I believe this less strongly now than I did at the time of writing this comment. See my short thread with Peter_Hurford]. I expect the same post written by someone else would not have received much prominence, and would have been very unlikely to be selected for a prize.
I think it’s particularly bad for posts to get prizes that would have been impossible to write without coming from an established organization. I am much less confident about posts that could have been written by someone else, but that happened to be written by someone in a full-time role at an EA organization.
Thanks for the response!
I think you misunderstood what I was saying at least a bit, in that I did read the post in reasonably close detail (about half an hour of reading in total) and was aware of most of the points in your comment.
I will try to find the time to write a longer response that tries to explain my case in more detail, but can’t currently make any promises. I expect there are some larger inferential distances here that would take a while to cross for both of us.
First of all, I think evaluations like this are quite important and a core part of what I think of as EA’s value proposition. I applaud the effort and dedication that went into this report, and would like to see more people trying similar things in the future.
Tee Barnett asked me for feedback in a private message. Here is a very slightly edited version of my response (hence why it is more off-the-cuff than what I would usually post on the forum):
Hmm, I don’t know. I looked at the cost-effectiveness section and mostly feel that the post is overemphasizing formal models. Like, after reading the whole thing, and looking at the spreadsheet for 5 minutes, I am still unable to answer the following core questions:
What is the basic argument for Donational?
Does that argument hold up after looking into it in more detail?
How does the quality of that argument compare against other things in the space?
What has Donational done so far?
What evidence do we have about its operations?
If you do a naive Fermi estimate of Donational’s effectiveness, what is the bottom line?
I think I would have preferred just one individual writing a post titled “Why I am not excited about Donational”, that just tries to explain clearly, like you would in a conversation, why they don’t think it’s a good idea, or how they have come to change their mind.
Obviously I am strongly in favor of people doing evaluations like this, though I don’t think I am a huge fan of the format that this one chose.
------- (end of quote)
On a broader level, I think there might be some philosophical assumptions behind the way this post approaches modeling cause prioritization that I disagree with. I have this sense that the primary purpose of mathematical analysis in most contexts is to help someone build a deeper understanding of a problem, by making their assumptions explicit and clarifying the consequences of those assumptions, and that after writing down their formal models and truly understanding their consequences, most decision-makers are well-advised to throw away the formal models and go with their updated gut sense.
When I look at this post, I have a lot of trouble understanding the actual reasons why someone might think Donational is a good idea, and what arguments would convince (or maybe have convinced) them otherwise. Instead I see a large amount of rigor being poured into a single cost-effectiveness model, with a result that I am pretty confident could have been replaced by some straightforward Fermi point-estimates.
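For concreteness, here is a minimal sketch of the kind of Fermi point-estimate I have in mind; every number below is a made-up placeholder of mine, not a figure from the report:

```python
# Toy Fermi estimate of a donation platform's annual impact.
# All inputs are hypothetical placeholders, not data from the report.

users_per_year = 500            # new donors the platform onboards per year
avg_annual_donation = 1_000     # average donation per user, in dollars
counterfactual_share = 0.3      # fraction of donations that wouldn't have happened anyway
effectiveness_multiplier = 2.0  # how much better the recommended charities are
platform_cost = 150_000         # annual operating cost, in dollars

value_moved = (users_per_year * avg_annual_donation
               * counterfactual_share * effectiveness_multiplier)
print(f"Counterfactually-adjusted value moved: ${value_moved:,.0f}")
print(f"Value moved per dollar of cost: {value_moved / platform_cost:.2f}")
```

The point is not the bottom line, but that a reader can see every assumption at a glance and substitute their own numbers.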
I think there is nothing wrong with also doing sensitivity analyses and more complicated parameter estimation, but in this context all of that mostly obscures the core aspects of the underlying uncertainty, and makes it harder both for the reader to understand what the basic case for Donational is (and why it fails), and (in my model) for the people constructing the model to actually interface with the core questions at hand.
All of this doesn’t mean that the tools employed here are never the correct tools to use. But when producing an evaluation that is primarily designed for external consumption, I would prefer much more emphasis on a clear explanation of the basic idea behind the organization, and on the set of cruxes and observations that would change the evaluator’s mind, instead of this much emphasis on the creation of detailed mathematical models and the explanation of those models.
Update: I was just wrong, Matt is indeed primarily HK
Stefan Torges from REG recently asked me about our room for funding, and I sent him the following response:
About the room for funding question, here are my rough estimates (this is for money in addition to our expected donations of about $1.6M per year):
75% confidence threshold: ~$1M
Happy to provide more details on what kind of funding I would expect in the different scenarios.
The value of these marginal grants doesn’t feel like it would be more than 20% lower than that of our current worst grants, since in every round there is a large number of grants that are highly competitive with the lowest-ranked grants we do make.
In other words, I think we have significant room for funding at about the quality level of grants we are currently making.
On LessWrong we intentionally chose not to encourage pictures in the comments, since they provide a way to hijack people’s attention that seemed too easy. You can still use markdown syntax to add pictures, in both the markdown editor and the WYSIWYG editor, as shown below.
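For reference, this is the standard markdown image syntax (the URL here is just a placeholder, not a real image):

```markdown
![alt text describing the image](https://example.com/image.png)
```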
Answer turned out to be closer to 3 months.
This seems reasonable. I changed it to say “ethical”.