(Just to correct the record for people who might have been surprised to see this comment: all of these people work for Open Philanthropy, not for OpenAI.)
Is there a reason it’s impossible to find out who is involved with this project? Maybe it’s on purpose, but through the website I couldn’t find out who’s on the team, who supports it, or what kind of organisation you are legally (nonprofit? for-profit? etc.). If this was a deliberate and strategic choice against transparency because of the nature of the work you expect to be doing, I’d love to hear why you made it!
[My 2 cents: As an org focused on advocacy and campaigns, it might be especially important for you to be transparent in order to build trust. It’s projects like yours where I find myself MOST interested in who is behind them, so I can evaluate trustworthiness, conflicting incentives, etc. For all I know (from the website), you could be a competitor of the company you are targeting! I’m not saying you need Fish-Welfare-Project-level transparency with open budgets etc., and maybe I am just an overly suspicious website visitor, but I felt it was worth flagging.]
This is a great idea. It’s such a good idea that someone else (https://forum.effectivealtruism.org/users/aaronb50) has had it before and has already solved this problem for us:
https://podcasts.apple.com/us/podcast/eag-talks/id1689845820
Have you done any research on expected demand (e.g. surveying the organisers of the mentioned programs, community builders, maybe Wytham Abbey event organisers)? I can imagine the location and how long it takes to get there (unless you are already based in London, though even then it’s quite the trip) could be a deterrent, especially for events shorter than 3 days. (Another factor may be “fanciness”: I’ve worked with orgs and attended/organised events where fancy venues were eschewed, and others where they were deemed indispensable. If that building is anything like the EA Hotel (or the average Blackpool building), my expectation is it would rank low on this. It kinda depends on your target audience/users.)
“It’s not common” wouldn’t by itself suffice as a reason, though: conducting CEAs “isn’t common” in GHD, donating 10% “isn’t common” in the general population, etc. (cf. Hume, is-and-ought something something).
Obviously, something may be “common” because it reliably protects you from legal exposure, is too much work for too little benefit, etc. But then I’m much more interested in those underlying reasons.
Hey Charlotte! Welcome to the EA Forum. :) Your skillset and interest in consulting work in GHD seem like a near-perfect fit for working with one of the charities incubated by Ambitious Impact. As I understand it, they are often looking for people like you! Some even focus on the same problems you mention (STDs, nutritional deficiencies, etc.).
You can find them here: https://www.charityentrepreneurship.com/our-charities
Thanks for the detailed reply. I completely understand the felt need to seize on windows of opportunity to contribute to AI Safety—I myself have changed my focus somewhat radically over the past 12 months.
I remain skeptical about a few of the points you mention, in descending order of importance to your argument (correct me if I’m wrong):
“ERA’s ex-AI-Fellows have a stronger track record” I believe we are dealing with confounding factors here. Most importantly, AI Fellows were (if I recall correctly) significantly more senior on average than other fellows. Some had multiple years of work experience. Naturally, I would expect them to score higher on your metric of “engaging in AI Safety projects” (and we could separately debate how good a metric that is). [The problem here, I suspect, is the uneven recruitment across cause areas, which limits comparability.] There were also simply a lot more of them (since you mention absolute numbers). I would also think that a lot more AI opportunities have opened up in the last year compared to e.g. nuclear or climate, so it shouldn’t surprise us if more Fellows found work and/or funding more easily. (This is somewhat balanced out by the high influx of talent into the space.) Don’t get me wrong: I am incredibly proud of what the Fellows I managed have gone on to do, and helping some of them find roles after the Fellowship may easily have been the most impactful thing I did during my time at ERA. I just don’t think it’s a solid argument in the context in which you bring it up.
“The infrastructure is here” This strikes me as a weird argument, to say the least. First of all, the infrastructure (Leverhulme etc.) has long been there (and AFAIK, the Meridian Office has always been the home of CERI/ERA), so is this a realisation you only came to now? Also: if “the infrastructure is here” counts as an argument, would the conclusion “you should focus on a broader set of risks because CSER is a good partner nearby” seem right to you?
“It doesn’t diminish the importance of other x-risks or GCR research areas” It may not be what you intended, but there is something interesting about an organisation that used to be called the “Existential Risk Alliance” pivoting like this. Would I be right in assuming we can expect a new ToC alongside the change in scope? (https://forum.effectivealtruism.org/posts/9tG7daTLzyxArfQev/era-s-theory-of-change)
AI was—in your words—already “an increasingly capable emerging technology” in 2023. Can you share more information on what made you prioritize it to the exclusion of all other existential risk cause areas (bio, nuclear, etc.) this year?
[Disclaimer: I previously worked for ERA as the Research Manager for AI Governance and—briefly—as Associate Director.]
Interesting idea. Say we DO find that—what implications would this have?
It seems to me that this data point alone wouldn’t be sufficient to derive any actionable consequences, in the absence of the even more interesting but even harder-to-get data on WHY this is the case.
Or maybe you think that this is knowledge that is intrinsically rather than instrumentally valuable to have?
“An OpenAI program director, who has very little to actually do with this larger public debate, is suddenly subpoenaed to testify in a congressional hearing where they are forced to answer an ill-tempered congress member’s questions. It might go something like this”
This should be OP, not OpenAI, right?
Have you seen this? According to the job posting, it’s still possible to apply. https://cltc.berkeley.edu/2024/01/16/job-opening-ai-standards-development-researcher/
Hi Thomas! Have you looked at the charities incubated by Charity Entrepreneurship (CE)? They overlap quite a bit with the things you care about, and the policy-focused ones especially might benefit hugely from someone with your expertise! Some are hiring and others work with volunteers/interns, so this might be a good place to start.
You’ve probably seen this (or have already applied) but this role seems like a potentially good fit: https://www.forethought.org/ra-exa
Hi Luke! Looks like this role could be a good fit: https://www.forethought.org/ra-exa
Hi Chukwubuikem! I might be misunderstanding your point but if you are referring to the Effective Giving charities, this is by design! We should expect most prospective donors to be in countries with high income levels, and so a charity whose main objective is to grow the pool of donors and the amount of funds committed to charitable causes should focus on countries like France, Australia, the UK, or Germany!
If you look at the direct intervention charities that Joey is talking about here (and that have been incubated by CE so far), you can see that many of them operate in and focus on African countries! (e.g. LEEP or FEM)
Hi Diego!
Thanks for your reply. I don’t know your financial situation, so I don’t want to make assumptions, but I think saving for retirement or building some general runway is important, and I would never want you to think that you aren’t doing enough, especially if you are donating 63% (!) of your income. That’s fantastic! And there will always be someone who donates more than you 😉 It’s not a race! 63% might be what works for you, and that’s great.
I should also note that while I have always been frugal, I was only able to donate this much because for a large part of the year I didn’t have to pay anything for housing (and, sometimes, meals), and because I didn’t count “immediate” donations of the kind mentioned above as income, which perhaps explains the high percentage. In 2024 I will likely move, and other changes in my personal life will likely mean I get a lot closer to ~50% or less.
They have: https://www.charityentrepreneurship.com/center-for-alcohol-policy-solutions#:~:text=The%20Center%20for%20Alcohol%20Policy,thus%20saving%20millions%20of%20lives.
Yep, numbers ranged from 60% to 80% support for approving SB 1047, and it was impressively bipartisan, too.