Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I’m sorry to see you go, and wish you the best of luck in whatever comes next.
Thanks, I think this is subtle and I don’t think I expressed this perfectly.
> If someone uses AI capabilities to create a synthetic virus (which they wouldn’t have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as “AGI being developed”?

No, I would not count this.
I’d probably count it if the AI a) somehow formed the intention to do this and then developed the pathogen and released it without human direction, but b) couldn’t yet produce as much economic output as full automation of labor.
No official rules on that. I do think that some back-and-forth in the comments is a way to make your case more convincing, so there’s some edge there.
Looking into it, thanks!
1 - counts for purposes of this question
2 - doesn’t count for purposes of this question (but would be a really big deal!)
Yes
Thanks for this post! Future Fund has removed this project from our projects page in response.
Thanks for the feedback! I think this is a reasonable comment, and the main things that prevented us from doing this are:
(i) I thought it would detract from the simplicity of the prize competition, and would be hard to communicate clearly and simply
(ii) I think the main thing that would make our views more robust is seeing what the best arguments are for having quite different views, and this seems like it is addressed by the competition as it stands.
For simplicity on our end, I’d appreciate it if you had one post at the end that was the “official” entry, which links to the other posts. That would be OK!
Plausibility, argumentation, and soundness will be inputs into how much our subjective probabilities change. We framed this in terms of subjective probabilities because it seemed like the easiest way to crisply point at ideas which could change our prioritization in significant ways.
Thanks! The part of the post that was supposed to be most responsive to this, on the size of AI x-risk, was this:
> For “Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI.” I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe’s estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low) and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower I’d probably update. There are a number of reviews of Joe Carlsmith’s work that were helpful to varying degrees but would not have won large prizes in this competition.
I think explanations of how Joe’s probabilities should be different would help. Alternatively, an explanation of why some other set of propositions was relevant (with probabilities attached and mapped to a conclusion) could help.
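To illustrate the kind of structure I mean by “probabilities attached and mapped to a conclusion”, here is a minimal sketch with made-up placeholder numbers (not Joe’s actual estimates), showing how shifting a single premise probability moves the bottom line:

```latex
% Purely illustrative placeholder numbers, not Joe Carlsmith's estimates.
% A chain of premises A, B, C mapped to a bottom-line probability:
\begin{align*}
P(\text{catastrophe}) &= P(A)\, P(B \mid A)\, P(C \mid A, B) \\
                      &= 0.6 \times 0.4 \times 0.3 = 0.072.
\end{align*}
% Arguing that P(B | A) should be 0.8 rather than 0.4 roughly doubles
% the bottom line: 0.6 x 0.8 x 0.3 = 0.144.
```

Arguments of that shape, for different premise probabilities or for a different decomposition entirely, are the kind of thing that could move our numbers.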
Do you believe that there is something already published that should have moved our subjective probabilities outside of the ranges noted in the post? If so, I’d love to know what it is! Please use this thread to collect potential examples, and include a link. Some info about why it should have done that (if not obvious) would also be welcome. (Only new posts are eligible for the prizes, though.)
Do you believe some statement of this form?
“FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]”

E.g., if only they had…
* Allowed people to publish somewhere other than the EA Forum / LessWrong / Alignment Forum
* Increased the prize schedule to X
* Increased the prize window to size Y
* Advertised the prize using method Z
* Chosen the following judges instead
* Explained X aspect of their views better
Even better would be a statement of the form:
“I personally would compete in this prize competition, but only if...”
If you think one of these statements, or some other statement like them, is true, please tell me what it is! I’d love to hear your pre-mortems, and fix the things I can (when sufficiently compelling and simple) so that we can learn as much as possible from this competition!
I also think predictions of this form will help with our learning, even if we don’t have time/energy to implement the changes in question.
There are better processes that could be used for smaller groups of high-trust people competing with each other, but I think we don’t really have a good process for this particular use case:
* Someone wants to publish something
* They are worried it might be an information hazard
* They want someone logical to look at it and assess that before they publish
I think it would be a useful service for someone to solve that problem. I am certainly feeling some pain from it right now, though I’m not sure how general it is. (I would think it’s pretty general, especially in biosecurity, and I don’t think there are good scalable processes in place right now.)
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously—or not at all—if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.