I am Issa Rice. https://issarice.com/
riceissa
What’s doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it’s not, but that if growth stopped we’d keep working on safety (say developing clean energy, improving relationships between the US and China, etc.) so that we’d eventually be safe?
I think the first option (low probability of x-risk with current technology) is driving my intuition.
Just to take some reasonable-seeming numbers (since I don’t have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although “other” includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)
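To spell out the naive arithmetic (using my recollection of Ord’s per-risk estimates, which could be slightly off: roughly 1 in 10 for unaligned AI, 1 in 30 each for engineered pandemics and unforeseen risks, 1 in 50 for other anthropogenic risks, and 1 in 1,000 each for nuclear war, climate change, and other environmental damage): removing unaligned AI and unforeseen/other leaves

$$\underbrace{\tfrac{1}{30}}_{\text{engineered pandemics}} + \underbrace{\tfrac{1}{1000}}_{\text{nuclear war}} + \underbrace{\tfrac{1}{1000}}_{\text{climate change}} + \underbrace{\tfrac{1}{1000}}_{\text{other environmental}} \approx 3.6\%,$$

and reducing engineered pandemics to near zero leaves about 0.3%, which is where the 0.3% to 4% range comes from.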
My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.
but what about e.g. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?
I’m curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.
Dustin Moskovitz has a relevant thread on Twitter: https://twitter.com/moskov/status/1254922931668279296
The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell’s new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don’t want to waste Toby’s time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?
I don’t think you can add the percentages for “top or near top priority” and “at least significant resources”. If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double counting some people.
Looking at the bar graph above the table, it looks like “at least significant resources” includes everyone in “significant resources”, “near-top priority”, and “top priority”. For mental health it looks like “significant resources” has 37%, and “near-top priority” and “top priority” combined have 21.5% (shown as 22% in the bar graph).
So your actual calculation would just be 0.585 × 0.25 (where 0.585 = 0.37 + 0.215), which is about 15%.
Stocking ~1 month of nonperishable food and other necessities
Can you say more about why 1 month, instead of 2 weeks or 3 months or some other length of time?
Also can you say something about how to decide when to start eating from stored food, instead of going out to buy new food or ordering food online?
I think that’s one of the common ways for a post to be interesting, but there are other ways (e.g. asking a question that generates interesting discussion in the comments).
This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.
How did you decide on “blog posts, cross-posted to EA Forum” as the main output format for your organization? How deliberate was this choice, and what were the reasons going into it? There are many other output formats that could have been chosen instead (e.g. papers, wiki pages, interactive/tool website, blog+standalone web pages, online book, timelines).
The file wikieahuborg_w-20180412-history.xml contains the dump, which can be imported into a MediaWiki instance.
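For anyone who wants to try the import, here is a minimal sketch (assuming a working local MediaWiki installation with PHP available; the install path below is hypothetical, and importDump.php and rebuildrecentchanges.php are MediaWiki’s standard maintenance scripts for this):

```python
"""Minimal sketch: import the EA Hub wiki XML dump into a local MediaWiki instance.

Assumptions (not from the original comment): MediaWiki is installed and configured,
PHP is on the PATH, and MEDIAWIKI_DIR points at the installation root.
"""
import subprocess

MEDIAWIKI_DIR = "/var/www/mediawiki"  # hypothetical install location
DUMP_FILE = "wikieahuborg_w-20180412-history.xml"

# Import every page revision in the dump into the wiki's database.
subprocess.run(
    ["php", f"{MEDIAWIKI_DIR}/maintenance/importDump.php", DUMP_FILE],
    check=True,
)

# Rebuild the recent-changes tables so the imported pages show up in the wiki's indexes.
subprocess.run(
    ["php", f"{MEDIAWIKI_DIR}/maintenance/rebuildrecentchanges.php"],
    check=True,
)
```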
Re: The old wiki on the EA Hub, I’m afraid the old wiki data got corrupted; it wasn’t backed up properly, and it was deemed too difficult to restore at the time :(. So it looks like the information in that wiki is now lost to the winds.
I think a dump of the wiki is available at https://archive.org/details/wiki-wikieahuborg_w.
The full metrics report gives the breakdown of the number of donors by donation size and year (for 2016-2018), both as estimates and as known numbers of donors.
Do you have any thoughts on Qualia Research Institute?
Over the years, you have published several pieces on ways you’ve changed your mind (e.g. one about EA, another about EA, one about weird ideas, one about hedonic utilitarianism, and a bunch of other ideas). While I’ve enjoyed reading the posts and the selection of ideas, I’ve also found most of the posts frustrating (the hedonic utilitarianism one is an exception) because they mostly only give the direction of the update, without also giving the reasoning and additional evidence that caused the update* (e.g. in the EA post you write “I am erring on the side of writing this faster and including more of my conclusions, at the cost of not very clearly explaining why I’ve shifted positions”). Is there a reason you keep writing in this style (e.g. you don’t have time, or you don’t want to “give away the answers” to the reader), and if so, what is the reason?
*Why do I find this frustrating? My basic reasoning is something like this: I think this style of writing forces the reader to do a weird kind of Aumann reasoning where they have to guess what evidence/arguments Buck might have had at the start, and what evidence/arguments he subsequently saw, in order to try to reconstruct the update. When I encounter this kind of writing, I mostly just take it as social information about who believes what, without bothering to go through the Aumann reasoning (because it seems impossible or would take way too much effort). See also this comment by Wei Dai.
Do you think non-altruistic interventions for AI alignment (i.e. AI safety “prepping”) make sense? If so, do you have suggestions for concrete actions to take, and if not, why do you think they don’t make sense?
(Note: I previously asked a similar question addressed at someone else, but I am curious for Buck’s thoughts on this.)
How do you see success/an “existential win” playing out in short timeline scenarios (e.g. less than 10 years until AGI) where alignment is non-trivial/turns out to not solve itself “by default”? For example, in these scenarios do you see MIRI building an AGI, or assisting/advising another group to do so, or something else?
[Meta] During the AMA, are you planning to distinguish (e.g. by giving short replies) between the case where you can’t answer a question due to MIRI’s non-disclosure policy vs the case where you won’t answer a question simply because there isn’t enough time/it’s too much effort to answer?
The 2017 MIRI fundraiser post says “We plan to say more in the future about the criteria for strategically adequate projects in 7a” and also “A number of the points above require further explanation and motivation, and we’ll be providing more details on our view of the strategic landscape in the near future”. As far as I can tell, MIRI hasn’t published any further explanation of this strategic plan. Is MIRI still planning to say more about its strategic plan in the near future, and if so, is there a concrete timeframe (e.g. “in a few months”, “in a year”, “in two years”) for publishing such an explanation?
(Note: I asked this question a while ago on LessWrong.)
I asked a question on LessWrong recently that I’m curious for your thoughts on. If you don’t want to read the full text on LessWrong, the short version is: Do you think it has become harder recently (say 2013 vs 2019) to become a mathematician at MIRI? Why or why not?
In November 2018 you said “we want to hire as many engineers as possible; this would be dozens if we could, but it’s hard to hire, so we’ll more likely end up hiring more like ten over the next year”. As far as I can tell, MIRI has hired 2 engineers (Edward Kmett and James Payor) since you wrote that comment. Can you comment on the discrepancy? Did hiring turn out to be much more difficult than expected? Are there not enough good engineers looking to be hired? Are there a bunch of engineers who aren’t on the team page/haven’t been announced yet?
I’m not attached to those specific numbers, but I think they are reasonable.
Right, maybe I shouldn’t have said “near zero”. But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.