I’m a program officer on the AI governance team at Open Philanthropy.
Jason Schukraft
Hi Chris,
Thanks for your question. Two quick points:
(1) I wouldn’t model Open Phil as having a single view on these sorts of questions. There’s a healthy diversity of opinions, and as stated in the “caveats” section, I think different Open Phil employees might have chosen different winners.
(2) Even for the subset of Open Phil employees who served as judges, I wouldn’t interpret these entries as collectively moving our views a ton. We were looking for the best challenges to our AI worldview in this contest, and as such I don’t think it should be too surprising that the winning entries are more skeptical of AI risks than we are.
Hi Paul, thanks for your question. I don’t have an intrinsic preference. We encourage public posting of the entries because we believe that this type of investigation is potentially valuable beyond the narrow halls of Open Philanthropy. If your target audience (aside from the contest panelists) is primarily researchers, then it makes sense to format your entry according to the norms of the research community. If you are aiming for a broader target audience, then it may make sense to structure your entry more informally.
When we grade the entries, we will be focused on the content. The style and referencing won’t (I hope) make much of a difference.
Hi Nicholas,
The details and execution probably matter a lot, but in general I’m fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.
Hi Nicholas,
Thanks for your question. It’s a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries. Another consideration is length. Per the contest guidelines, we’re advising entrants to shoot for a submission length of around 5,000 words (though there are no formal word limits). All else equal, I’d prefer three 5,000-word entries to one 15,000-word entry, and I’d prefer one 5,000-word entry to ten 500-word entries.
Hope this helps.
Jason
Thanks both—I just added the announcement link to the top of this page.
Hi David,
Thanks for your comment. I am also concerned about groupthink within homogenous communities. I hope this contest is one small push against groupthink at Open Phil. By default, I do, unfortunately, expect most of the submissions to come from people who share the same basic worldview as Open Phil staff. And for submissions that come from people with radically different worldviews, there is the danger that we fail to recognize an excellent point because we are less familiar with the stylistic and epistemic conventions within which it is embedded.
For these sorts of reasons, we did explicitly consider including non-Open Phil judges for the contest. Ultimately, we decided that didn’t make sense for this use case. We are, after all, hoping that submissions update our thinking, and it’s harder for an outside judge to represent our point of view.
But this contest is not the only way we are stress-testing our thinking. For example, I’m involved in another project in which we are engaging directly with smart people who disagree with us about AI risk. We hope that as a result of that adversarial collaboration, we can generate a consensus list of cruxes so that we have a better handle on how new developments ought to change our credences. I hope to be able to share more details on that project over the summer.
If you want to chat more about groupthink concerns, shoot me a DM. I believe it’s a somewhat underappreciated worry within EA.
Hi Phil—just to clarify: the entries must be entirely the original work of the author(s). You can cite others and you can use AI-generated text as an example, but for everything that is not explicitly flagged as someone else’s work, we will assume it is original to the author.
Hi David,
Thanks for your questions. We’re interested in a wide range of considerations. It’s debatable whether human-originating civilization failing to make good use of its “cosmic endowment” constitutes an existential catastrophe. If you want to focus on more recognizable catastrophes (such as extinction, unrecoverable civilizational collapse, or dystopia) that would be fine.
In a similar vein, if you think there is an important scenario in which humanity suffers an existential catastrophe by collectively losing control over an ecosystem of AGIs, that would also be an acceptable topic.
Let me know if you have any other questions!
We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!
Thanks for your questions!
We plan to officially launch the contest sometime in Q1 2023, so end of March at the latest.
I asked our in-house counsel about the eligibility of essays submitted to other competitions/publications, and he said it depends on whether by submitting elsewhere you’ve forfeited your ability to grant Open Phil a license to use the essay. His full quote below:
Essays submitted to other competitions or for publication are eligible for submission, so long as the entrant is able to grant Open Phil a license to use the essay. Since we plan to use these essays to inform our future research and grantmaking, we need a license to be able to use the IP. Our contest rules will state that by submitting an entry, each entrant grants a license to Open Phil to use the entry to further our mission. If you had previously submitted an essay to another contest or for publication, you should check the terms and conditions of that contest/publication to confirm they do not now have exclusive rights to the work or in any way prohibit you from granting a license to someone else to use it.
Thanks Jason. I can now confirm that that is indeed the case!
Hi Zach, thanks for the question and apologies for the long delay in my response. I’m happy to confirm that work posted after September 23, 2022 (and before whatever deadline we establish) will be eligible for the prize. No need to save your work until the formal announcement.
I think part of my confusion stems from the distinction between “X is a concern we’re noting” and “X is a parameter in the cost-effectiveness model.”
The distinction is largely pragmatic. Charter cities, like many complex interventions, are hard to model quantitatively. For the report, we replicated, adjusted, and extended a quantitative model that Charter Cities Institute originally proposed. If that’s your primary theory of change for charter cities, it seems like the numbers don’t quite work out. But there are many other possible theories of change, and we would love to see charter city advocates spend some time turning those theories of change into quantitative models.
I think PR risks are relevant to most theories of change that involve charter cities, but they are certainly not my main concern.
One of the authors of the charter cities report here. I’ll just add a few remarks to clarify how we intended the quoted passage. I’ll highlight three disagreements with the interpretation offered in the original post.
We should care if neocolonialism is real, if it’s bad, and if it’s induced by Charter Cities. If so, that should impact the cost-effectiveness estimate, not just factor in as a side-comment about PR-risk.
(1) We absolutely care whether neocolonialism is bad (or, if neocolonialism is inherently bad, whether charter cities would instantiate neocolonialism). However, we only had ~100 research hours to devote to this topic, so we bracketed that concern for the time being. These sorts of prioritization decisions are difficult but necessary in order to produce research outputs in a timely manner.
We should cite and engage with specific arguments, not imagine and then be haunted by some imagined spectre of Leftism. The authors mention the “neocolonialist critique” three times, never bothering to actually explain what it is, who advocates for it, how harmful it is, or how it could be avoided.
(2) The neocolonial critique of charter cities is well-known in the relevant circles, though it comes in many varieties. (See, among others, van de Sand 2019 and citations therein.) We probably should have included a footnote with examples. The fact that we didn’t engage with the critique more extensively (or really, at all) is some indication of how seriously we take the argument. We could have been more explicit about that.
The question of PR-risk is a purely logistical question that should be bracketed from discussions of cost-effectiveness. In the case that an intervention is found to have high cost-effectiveness and high PR-risk, we should think strategically about how to fund it, perhaps by privately recommending the intervention to individual donors as opposed to foundations.
(3) I’m not entirely sure why PR risk needs to be excluded from cost-effectiveness analysis (it’s just another downside), though in practice I’m not opposed to doing so. I agree that there are ways to mitigate PR risk. At no point in the report did we claim that PR risks ought to disqualify charter cities (or any other intervention) from funding.
The person who replaces me has all of my skills but also has many connections to policymakers, more management experience, and stronger quantitative abilities than I do.
Hi James, thanks for your question. The climate change work currently on our research calendar includes:
A look at how climate damages are accounted for in various integrated assessment models
A cost-effectiveness analysis of anti-deforestation interventions
A review of the landscape of climate change philanthropy
An analysis of how scalable different carbon offsetting programs are
This is also motivated by having a (still very young) kid; we’re thinking about how to eventually engage them with our giving.
I have a four-year-old and a six-year-old. We discuss our giving with them regularly. When my daughter turned five, we started giving her a weekly allowance with the strong expectation (though no outright requirement) that she would make her own charitable donation every December. During the giving process, we talk a lot about her values and offer guidance, but the ultimate amount and destination of the donation is up to her. Last year she donated $10 (about 10% of her total allowance) to The Nature Conservancy. It will be interesting to see how her decision making evolves over time. (Unfortunately, she seems to be quite swayed by the fact that The Nature Conservancy sent her a calendar!)
Hi tcelferact,
I have a PhD in philosophy, and I’m a senior research manager at Rethink Priorities. If you want to discuss PhD applications, shoot me a PM and we can set up a call. My main piece of advice is to optimize the writing sample for getting accepted to whatever programs you think are the best fit for you. Optimizing that metric might result in a very different writing sample than you’d get by trying to find a genuinely good idea and writing about that.
Despite the skepticism about charter cities that Dave and I express in the report, I would be comfortable recommending @effective_jobs retweet openings at Charter Cities Institute. There are plenty of folks in the EA community who would be a good fit for CCI, and it seems to me that an aggregator like @effective_jobs should lean toward casting a wider rather than narrower net.
Hi Vaipan,
Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.
One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:
We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
We’re producing a slew of new BOTECs across different focus areas. If it goes well, this exercise will help us be more quantitative when evaluating and comparing future grantmaking opportunities.
As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in (see the toy sketch at the end of this list). There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior GCR leaders on important questions so that we better understand the distribution of answers.
We’re also doing a bunch of work that is aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.
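To make the worldview-sensitivity point concrete, here’s a deliberately toy BOTEC in Python. Every number in it is invented purely for illustration; none are Open Phil estimates, and a real BOTEC would have many more inputs.

```python
# Toy BOTEC: expected value of a hypothetical AI-risk grant.
# All inputs are invented placeholders, not Open Phil figures.

def expected_lives_saved(p_doom, risk_reduction, lives_at_stake):
    """P(catastrophe) x fraction of that risk the grant removes
    x lives at stake."""
    return p_doom * risk_reduction * lives_at_stake

LIVES_AT_STAKE = 8e9   # roughly everyone alive today (placeholder)
RISK_REDUCTION = 1e-7  # fraction of total risk the grant removes (placeholder)

# The same grant looks very different under different worldviews:
for p_doom in (0.01, 0.10, 0.35):
    ev = expected_lives_saved(p_doom, RISK_REDUCTION, LIVES_AT_STAKE)
    print(f"p(doom) = {p_doom:.0%}: ~{ev:,.0f} expected lives saved")
```

Even in this stripped-down version, the output varies by more than an order of magnitude across plausible p(doom) values, which is exactly why understanding the distribution of views matters for interpreting any single BOTEC.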
There is no particular background knowledge required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will in general do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.