I’m a program officer on the AI governance team at Open Philanthropy.
Jason Schukraft
AMA: Six Open Philanthropy staffers discuss OP’s new GCR hiring round
Hi Chris,
Thanks for your question. Two quick points:
(1) I wouldn’t model Open Phil as having a single view on these sorts of questions. There’s a healthy diversity of opinions, and as stated in the “caveats” section, I think different Open Phil employees might have chosen different winners.
(2) Even for the subset of Open Phil employees who served as judges, I wouldn’t interpret these entries as collectively moving our views a ton. We were looking for the best challenges to our AI worldview in this contest, and as such I don’t think it should be too surprising that the winning entries are more skeptical of AI risks than we are.
Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest
Hi Paul, thanks for your question. I don’t have an intrinsic preference. We encourage public posting of the entries because we believe that this type of investigation is potentially valuable beyond the narrow halls of Open Philanthropy. If your target audience (aside from the contest panelists) is primarily researchers, then it makes sense to format your entry according to the norms of the research community. If you are aiming for a broader target audience, then it may make sense to structure your entry more informally.
When we grade the entries, we will be focused on the content. The style and referencing won’t (I hope) make much of a difference.
Reminder: AI Worldviews Contest Closes May 31
Hi Nicholas,
The details and execution probably matter a lot, but in general I’m fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.
Hi Nicholas,
Thanks for your question. It’s a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries. Another consideration is length. Per the contest guidelines, we’re advising entrants to shoot for a submission length around 5,000 words (though there are no formal word limits). All else equal, I’d prefer three 5,000-word entries to one 15,000-word entry, and I’d prefer one 5,000-word entry to ten 500-word entries.
Hope this helps.
Jason
Thanks both—I just added the announcement link to the top of this page.
Hi David,
Thanks for your comment. I am also concerned about groupthink within homogeneous communities. I hope this contest is one small push against groupthink at Open Phil. By default, I do, unfortunately, expect most of the submissions to come from people who share the same basic worldview as Open Phil staff. And for submissions that come from people with radically different worldviews, there is the danger that we fail to recognize an excellent point because we are less familiar with the stylistic and epistemic conventions within which it is embedded.
For these sorts of reasons, we did explicitly consider including non-Open Phil judges for the contest. Ultimately, we decided that didn’t make sense for this use case. We are, after all, hoping that submissions update our thinking, and it’s harder for an outside judge to represent our point of view.
But this contest is not the only way we are stress-testing our thinking. For example, I’m involved in another project in which we are engaging directly with smart people who disagree with us about AI risk. We hope that as a result of that adversarial collaboration, we can generate a consensus of cruxes so that we have a better handle on how new developments ought to change our credences. I hope to be able to share more details on that project over the summer.
If you want to chat more about groupthink concerns, shoot me a DM. I believe it’s a somewhat underappreciated worry within EA.
Hi Phil—just to clarify: the entries must be entirely the original work of the author(s). You can cite others and you can use AI-generated text as an example, but for everything that is not explicitly flagged as someone else’s work, we will assume it is original to the author.
Hi David,
Thanks for your questions. We’re interested in a wide range of considerations. It’s debatable whether human-originating civilization failing to make good use of its “cosmic endowment” constitutes an existential catastrophe. If you want to focus on more recognizable catastrophes (such as extinction, unrecoverable civilizational collapse, or dystopia) that would be fine.
In a similar vein, if you think there is an important scenario in which humanity suffers an existential catastrophe by collectively losing control over an ecosystem of AGIs, that would also be an acceptable topic.
Let me know if you have any other questions!
Announcing the Open Philanthropy AI Worldviews Contest
We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!
Thanks for your questions!
We plan to officially launch the contest sometime in Q1 2023, so end of March at the latest.
I asked our in-house counsel about the eligibility of essays submitted to other competitions/publications, and he said it depends on whether by submitting elsewhere you’ve forfeited your ability to grant Open Phil a license to use the essay. His full quote below:
Essays submitted to other competitions or for publication are eligible for submission, so long as the entrant is able to grant Open Phil a license to use the essay. Since we plan to use these essays to inform our future research and grantmaking, we need a license to be able to use the IP. Our contest rules will state that by submitting an entry, each entrant grants a license to Open Phil to use the entry to further our mission. If you had previously submitted an essay to another contest or for publication, you should check the terms and conditions of that contest/publication to confirm they do not now have exclusive rights to the work or in any way prohibit you from granting a license to someone else to use it.
Thanks Jason. I can now confirm that that is indeed the case!
Hi Zach, thanks for the question and apologies for the long delay in my response. I’m happy to confirm that work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prize. No need to save your work until the formal announcement.
Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest
Measuring Good Better
I think part of my confusion stems from the distinction between “X is a concern we’re noting” and “X is a parameter in the cost-effectiveness model”.
The distinction is largely pragmatic. Charter cities, like many complex interventions, are hard to model quantitatively. For the report, we replicated, adjusted, and extended a quantitative model that Charter Cities Institute originally proposed. If that’s your primary theory of change for charter cities, it seems like the numbers don’t quite work out. But there are many other possible theories of change, and we would love to see charter city advocates spend some time turning those theories of change into quantitative models.
I think PR risks are relevant to most theories of change that involve charter cities, but they are certainly not my main concern.
Hi Vaipan,
Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.
One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:
We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
We’re producing a slew of new BOTECs (back-of-the-envelope calculations) across different focus areas. If the exercise goes well, it will help us be more quantitative when evaluating and comparing future grantmaking opportunities.
As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in (see the toy sketch after this list). There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior GCR leaders on important questions so that we better understand the distribution of answers.
We’re also doing a bunch of work that is aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.
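To illustrate how sensitive these calculations are to worldview assumptions, here is a minimal, purely hypothetical sketch (the parameters and numbers are invented for illustration and are not drawn from any actual Open Phil model): the same one-line cost-effectiveness calculation swings by more than an order of magnitude depending on the p(doom) you plug in.

```python
# Toy BOTEC: expected lives saved per dollar for a hypothetical grant,
# evaluated under two different p(doom) assumptions. All numbers are invented.

def botec_lives_per_dollar(p_doom: float, risk_reduction: float,
                           lives_at_stake: float, cost_usd: float) -> float:
    """Expected lives saved per dollar under the stated assumptions."""
    expected_lives_saved = p_doom * risk_reduction * lives_at_stake
    return expected_lives_saved / cost_usd

# Hypothetical inputs: 8 billion lives at stake, a $10M grant that reduces
# relative risk by 0.01%, evaluated under a low and a high p(doom).
for p_doom in (0.01, 0.35):
    value = botec_lives_per_dollar(p_doom, 1e-4, 8e9, 1e7)
    print(f"p(doom) = {p_doom:.2f}: ~{value:.4f} expected lives saved per dollar")
```

The point isn’t the particular numbers; it’s that any ranking of grantmaking opportunities produced this way inherits whatever worldview assumptions are fed into it, which is why understanding the distribution of those assumptions matters.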
There is no particular background knowledge required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will in general do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.