I am Issa Rice. https://issarice.com/
I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn’t been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so, what do they tend to do?
I think LTFF is doing something valuable by giving people the freedom to not “sell out” to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I’m worried about a situation where receiving a grant from LTFF isn’t enough to be sustainable, so that people go back to doing more “safe” things like working in academia or at an established org.
Any thoughts on this topic?
Ok I see, thanks for the clarification! I didn’t notice the use of the phrase “the MIRI method”, which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).
MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .
The wording here makes it seem like MIRI/FHI created the model, but the link in the footnote indicates that the model was created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model but it looks like MIRI wasn’t involved in creating the model (although the post author seems to have sent it to MIRI before publishing the post). I wonder if I’m missing something though, or misinterpreting what you wrote.
Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn’t seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don’t have time to write the full post.
I think the forum software hides comments from new users by default. You can go here (and click the “play” button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are currently only visible via their user pages, not on the post itself.
Edit: The comments mentioned above are now visible on this post.
So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.
Can you say how you came up with the “moving from 1% to 0.8%” part? Everything else in your comment makes sense to me.
So you think the hazard rate might go from around 20% to around 1%?
I’m not attached to those specific numbers, but I think they are reasonable.
That’s still far from zero, and with enough centuries with 1% risk we’d expect to go extinct.
Right, maybe I shouldn’t have said “near zero”. But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.
What’s doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it’s not but that if growth stopped we’d keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we’d eventually be safe?
I think the first option (low probability of x-risk with current technology) is driving my intuition.
Just to take some reasonable-seeming numbers (since I don’t have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although “other” includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)
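To make the arithmetic explicit, here is a rough sketch in Python; the per-risk numbers are my reading of Table 6.1 of The Precipice and should be treated as approximations rather than exact quotes from the book:

```python
# Rough per-risk estimates of existential catastrophe within the next 100 years,
# from my reading of Table 6.1 of The Precipice (approximate, not exact quotes).
anthropogenic = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "unforeseen anthropogenic": 1 / 30,
    "other anthropogenic": 1 / 50,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "other environmental damage": 1 / 1000,
}

total = sum(anthropogenic.values())
print(f"total anthropogenic risk: {total:.1%}")  # ~19%

# If growth stopped now: take out unaligned AI and unforeseen/other entirely,
# and reduce engineered pandemics by anywhere from 0% to 100%.
removed = ["unaligned AI", "unforeseen anthropogenic", "other anthropogenic"]
remaining = total - sum(anthropogenic[k] for k in removed)

upper = remaining                                          # pandemics kept: ~3.6%
lower = remaining - anthropogenic["engineered pandemics"]  # pandemics removed: ~0.3%
print(f"range if growth stopped now: {lower:.1%} to {upper:.1%}")
```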
My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.
but what about e.g. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?
I’m curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.
Dustin Moskovitz has a relevant thread on Twitter: https://twitter.com/moskov/status/1254922931668279296
The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading it. For comparison, Stuart Russell’s new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don’t want to waste Toby’s time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?
I don’t think you can add the percentages for “top or near top priority” and “at least significant resources”. If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double counting some people.
Looking at the bar graph above the table, it looks like “at least significant resources” includes everyone in “significant resources”, “near-top priority”, and “top priority”. For mental health it looks like “significant resources” has 37%, and “near-top priority” and “top priority” combined have 21.5% (shown as 22% in the bar graph).
So your actual calculation would just be 0.585 (i.e. 0.37 + 0.215) * 0.25, which is about 15%.
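To spell out the non-double-counting version, a quick sketch (the 0.37 and 0.215 are read off the bar graph as described above; the 0.25 factor is just the one from the calculation being discussed):

```python
# Shares of respondents for mental health, read off the bar graph
# (not the table, which double counts across the overlapping categories).
significant_resources = 0.37   # "significant resources" only
near_top_or_top = 0.215        # "near-top priority" + "top priority" combined

# "At least significant resources" already includes the near-top/top
# respondents, so the two table columns should not be summed.
at_least_significant = significant_resources + near_top_or_top  # 0.585

print(f"{at_least_significant * 0.25:.1%}")  # ~14.6%, i.e. about 15%
```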
Stocking ~1 month of nonperishable food and other necessities
Can you say more about why 1 month, instead of 2 weeks or 3 months or some other length of time?
Also can you say something about how to decide when to start eating from stored food, instead of going out to buy new food or ordering food online?
I think that’s one of the common ways for a post to be interesting, but there are other ways (e.g. asking a question that generates interesting discussion in the comments).
This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.
How did you decide on “blog posts, cross-posted to EA Forum” as the main output format for your organization? How deliberate was this choice, and what were the reasons going into it? There are many other output formats that could have been chosen instead (e.g. papers, wiki pages, interactive/tool website, blog+standalone web pages, online book, timelines).
The file wikieahuborg_w-20180412-history.xml contains the dump, which can be imported to a MediaWiki instance.
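For anyone who wants to try this, here is a minimal sketch of the import; the local paths and the rebuild step afterwards are my own assumptions, not anything specific to this dump:

```python
# Minimal sketch: import the XML history dump into a local MediaWiki install.
# Assumes PHP is installed, a configured MediaWiki checkout lives in ./mediawiki
# (with LocalSettings.php set up), and the dump file sits one directory above it.
import subprocess

# importDump.php is MediaWiki's standard maintenance script for XML dumps.
subprocess.run(
    ["php", "maintenance/importDump.php", "../wikieahuborg_w-20180412-history.xml"],
    cwd="mediawiki",
    check=True,
)

# Imported revisions don't show up in listings like Recent Changes until rebuilt.
subprocess.run(
    ["php", "maintenance/rebuildrecentchanges.php"],
    cwd="mediawiki",
    check=True,
)
```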
Re: The old wiki on the EA Hub, I’m afraid the old wiki data got corrupted; it wasn’t backed up properly, and it was deemed too difficult to restore at the time :(. So it looks like the information in that wiki is now lost to the winds.
I think a dump of the wiki is available at https://archive.org/details/wiki-wikieahuborg_w.
The full metrics report gives a breakdown of the number of donors by donation size and year (for 2016-2018), both as an estimate and as the known number of donors.
Do you have any thoughts on Qualia Research Institute?
In the April 2020 payout report, Oliver Habryka wrote:
I’m curious to hear more about this (either from Oliver or any of the other fund managers).