GCR capacity-building grantmaking and projects at Open Phil.
Eli Rose
Nick Beckstead is leaving the Effective Ventures boards
(I’m a trustee on the EV US board.)
Thanks for checking in. As Linch pointed out, we added Lincoln Quirk to the EV UK board in July (though he didn't come through the open call). We also have several other candidates at various points in the recruitment pipeline, but we've put this somewhat on the back burner, both because we wanted to resolve some strategic questions before adding people to the board and because we've had less capacity than we expected.
Having said that, we were grateful for all the applications and nominations we received in response to that initial post, and we still intend to add more board members in the coming months.
Douglas Hofstadter concerned about AI x-risk
Yep, it’s still active.
I think we should keep "neglectedness" referring to the amount of resources invested in the problem, not P(success); P(success) seems a better fit for the "tractability" bucket.
(+1 to this approach for estimating neglectedness; I think dollars spent is a pretty reasonable place to start, even though quality adjustments might change the picture a lot. I also think it's reasonable to look at the number of people.)
Looks like the estimate in the 80k article is from 2020, though the callout in the biorisk article doesn’t mention it — and yeah, AIS spending has really taken off since then.
I think the OP amount should be higher because I think one should count X% of the spending on longtermist community-building as being AIS spending, for some X. [NB: I work on this team.]
I downloaded the 2022 data from the public OP grants database and put it here. For 2022, the sum of all grants tagged AIS and LTist community-building is ~$155m. I think a reasonable choice of X is between 50% and 100%, so taking 75% at a whim, that gives ~$115m for 2022.
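For anyone who wants to reproduce this, here's a minimal sketch of the calculation in Python. The filename, column names, and tag strings are my assumptions about the export format, not necessarily what the actual database download uses.

```python
# Minimal sketch of the estimate above, using the public OP grants database export.
# The filename, column names ("Focus Area", "Amount"), and tag strings are
# placeholders; adjust them to match the actual CSV headers.
import pandas as pd

grants = pd.read_csv("op_grants_2022.csv")

# Keep grants tagged as AI safety or longtermist community-building.
tags = ["AI Safety", "Longtermist Community-Building"]
subset = grants[grants["Focus Area"].isin(tags)]

total = subset["Amount"].sum()  # ~$155m for 2022, per the figure above

X = 0.75  # fraction counted as AIS spending; chosen "at a whim"
print(f"Estimated 2022 AIS spending: ${total * X / 1e6:.0f}m")  # ~$115m
```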
Made the front page of Hacker News. Here are the comments.
The most common pushback (and, as of now, the first two comments) is from people who think this is an attempt at regulatory capture by the AI labs, though there's a good deal of pushback on that view and (I thought) some surprisingly high-quality discussion.
Off topic: There’s a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: “was also credited with helping shift the Animal Rights movement to a more utilitarian focus including a focus on chicken.”
This is an amazing thing to learn.
“Can We Survive Technology?” by John von Neumann
FWIW, several people I spoke to during the pilot period just weren't aware that subforums existed.
This refers to the amount you were promised from FTXF.
This refers to the amount that was promised but hasn't been paid out.
(I work at Open Phil assisting with this effort.)
Thanks for pointing this out; it looks like a technical error excluded these from the email receipt, which we've now fixed. The information was still received on our end, so you don't need to take any further action.
(I work at Open Phil assisting with this effort.)
-
Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.
-
For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.
-
(I work at Open Phil assisting with this effort.)
We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.
Exactly when does the program begin? I couldn’t find this info above.
If you haven’t already, I’d recommend reading Richard Ngo’s AGI Safety From First Principles, which I think is an unusually rigorous treatment of the issue.
We’ve been paying people based on time spent, rather than by word. The amounts are based on our assessment of online market rates for high-quality freelance translators for the language in question, though my guess is this will be more attractive than typical freelance translation because it’s a source of steady work over a long period (e.g., 6 months).
Have you considered writing a letter to the editor? I think actual worked examples of naive consequentialism failing are kind of rare and cool for people to see.
Hmm yeah, I went East Coast --> Bay and I somewhat miss the irony.
There’s a Parfit thought experiment:
I go camping and leave a bunch of broken glass bottles in the woods. I realize that someone may step on this glass and hurt themselves, so perhaps I should bury it. I do not bury it. As it turns out, 20 years pass before anyone is hurt: then a young child steps on the glass and cuts their foot badly.
It seems like the contractualist principle above would say that there’s no moral value to burying the glass, because for any given individual, the probability that they’ll be the one to step on it is very low[1]. Is that right?
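To make my worry concrete, here's a toy calculation (the numbers are mine and purely illustrative): suppose exactly one of N people will eventually step on the glass, each equally likely.

```latex
% Toy model: exactly one of N people steps on the glass, each equally likely.
% Ex ante, each individual's complaint is tiny:
\[
  P(\text{person } i \text{ is harmed}) = \tfrac{1}{N}
  \quad (\approx 10^{-10} \text{ for } N \approx 8 \times 10^9)
\]
% But the expected harm from not burying the glass equals the full harm h
% of a badly cut foot, since the individual risks sum to certainty:
\[
  \sum_{i=1}^{N} \tfrac{1}{N} \, h = h
\]
```

So a principle that weighs each person's ex-ante claim individually seems to assign roughly zero weight to burying the glass, even though (in the toy model) it's certain that someone will be badly hurt.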
I think you can sidestep issues with population ethics here by just restricting this to people already alive today (so replace “young child” in the Parfit example with “adult” I guess). Though maybe the pop ethics issues are the crux?