That’s helpful, thanks!
Mathieu Putz
Minor nitpick:
I would’ve found it more helpful to see Haydn’s and Esben’s judgments listed separately.
Why?
Need is a very strong word so I’m voting no. Might sometimes be marginally advantageous though.
Why?
Thanks for writing this up! Was gonna apply anyway, but a post like this might have gotten me to apply last year (which I didn’t, but which would’ve been smart). It also contained some useful sections that I didn’t know about yet!
I’m not sure what my general take is on this, I think it’s quite plausible that keeping it exclusive is net good, maybe more likely good than not. But I want to add one anecdote of my own which pushes the other way.
Over the last two years, while I was a student, I made two career choices in part (though not only) to gain EA credibility:
I was a group organizer at EA Munich (~2 hours a week)
I did a part-time internship at an EA org (~10 hours a week)
Both of these were fun, but I think it’s unlikely that they were good for my career or impact in ways other than gaining EA credibility. I think one non-trivial reason EA credibility was important to me was that I wanted to keep being admitted to things like EAG (maybe more than I admitted to myself in my explicit reasoning at the time).
Having said that, I think EA credibility has also been important to my career in other ways, notably to receive grants, so it’s not clear that this was bad on net.
It might also be that these were unnecessary or ineffective ways of gaining EA credibility—I don’t know what the admissions team cares about. Regardless, I think it’s an update that this is part of what led me to make choices that I otherwise might not have made (though quite plausibly I would have made them anyway).
This is so useful! I love this kind of post and will buy many things from this one in particular.
Probably a very naive question, but why can’t you just take a lot of DHA **and** a lot of EPA to get both supplements’ benefits? Especially if your diet means you’re likely deficient in both (which is true of veganism? vegetarianism?).
Assuming the Reddit folk wisdom about DHA inducing depression was wrong (which it might not be, I don’t want to dismiss it), I don’t understand from the rest of what you wrote why this wouldn’t work. Why is there a trade-off?
This seems really exciting!
I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.
So I think conditional on thinking this is a good idea at all, this may be an unusually good funding opportunity for smaller earning-to-givers. Unfortunately, the flip side is that fundraising for this may be somewhat harder than for other EA projects.
Thanks for pointing this out, wasn’t aware of that, sorry for the mistake. I have retracted my comment.
Hey, interesting to hear your reaction, thanks.
I can’t respond to all of it now, but do want to point out one thing.
> And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.
I think this isn’t accurate.
Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small ones from individual Americans. Some of them may work for organizations that get a lot of funding from big EA donors, but it’s still their own salary which they are free to spend however they like. As an aside, probably in most cases the funding of these orgs will currently still come from OpenPhil (who give away Dustin Moskovitz’s and Cari Tuna’s wealth), rather than FTX Future Fund (who give away SBF’s wealth among others).
I think it’s important that for the most part, this is money that not-crazy-rich Americans could have spent on themselves, but chose to donate to this campaign instead.
If you’re wondering who you might know in Oregon, you can search your Facebook friends by location:
Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see “Friends of Friends”.
I assume that will miss a few, so it’s probably worth also actively thinking about your network, but this is probably a good low-effort first step.
Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here’s a map.
Very glad to hear this, thanks!!
Thanks for writing this!
I believe there’s a small typo here:
> The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous-combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most), it only takes a small difference (P_y−P_n) for this to be a very poor exchange.
Shouldn’t the difference be (P_y−P_n)M − N?
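Spelling out the algebra with some made-up numbers (illustrative only; M, N, P_n, P_y as defined in the quoted passage):

```python
# Expected deaths, per the quoted passage:
#   human combatants:      N + P_n * M
#   autonomous combatants: P_y * M
# The numbers below are arbitrary, chosen only to sanity-check the algebra.
M = 2_000_000_000        # population at risk (the post gives ~1-7 Bn)
N = 50_000               # combatant deaths (tens of thousands at most)
P_n, P_y = 0.010, 0.012  # escalation probabilities (hypothetical)

diff = P_y * M - (N + P_n * M)

# The difference simplifies to (P_y - P_n)*M - N, not (P_y - P_n)*(M - N):
assert abs(diff - ((P_y - P_n) * M - N)) < 1e-3
assert abs(diff - ((P_y - P_n) * (M - N))) > 1
```

Since N is tiny relative to M, the two expressions are numerically close anyway, so the post’s conclusion is unaffected.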
This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich, but spend ~2 months per year in Luxembourg. Many friends stayed much longer in Luxembourg. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them—I’d expect many of them to be in a similar situation.
Congrats on your success!
I thought this was a great article raising a bunch of points which I hadn’t previously come across, thanks for writing it!
Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror—with similar resources, it should be possible to inflict far more damage than they actually do. This has somewhat lowered my concern about bioterrorist attacks, especially considering that successfully causing a global pandemic worse than natural ones is not easy. (Lowered my concern in relative terms, that is—I still think this risk is unacceptably high and prevention measures should be taken. I don’t want to rely on terrorists being incompetent.) This suggests both that terrorist groups may not pursue bioterrorism even if it were the best way to achieve their goals, and that they may not be able to execute well on such a difficult task. Hence, without having thought about it too much, I think I might rate the risks from non-state actors somewhat lower than you do (though I’m not sure, especially since you don’t give numerical estimates—which is totally reasonable). For instance, I’m not sure whether we should expect risks of GCBRs caused by non-state actors to be higher than risks of GCBRs caused by state actors (as you suggest).
Fair, that makes sense! I agree that if it’s purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.
I would just note that speed-sensitive considerations, in the broad sense you use it, will be relevant to many (most?) people’s careers, including researchers to some extent (reputation helps with research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.
Can you say more about the 20% per year discount rate for community building?
In particular, is the figure meant to refer to time or money? I.e. does it mean that:
- you would trade at most 0.8 marginal hours spent on community building in 2024 for 1 marginal hour in 2023?
- you would trade at most 0.8 marginal dollars spent on community building in 2024 for 1 marginal dollar spent on community building in 2023?
- something else? (possibly not referring to marginal resources?)
(For money a 20% discount rate seems very high to me, barring very short timelines or something similar. It would presumably imply that you think Open Phil should be spending much more on community building until the marginal dollar doesn’t have such high returns anymore?)
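For concreteness, here is the compounding arithmetic behind either reading, under my own assumption (not stated in the post) that a 20% annual discount means a marginal unit one year later is worth 0.8x:

```python
# Hypothetical sketch: relative value of one marginal unit (an hour or a
# dollar of community building) in a given year, under a 20%/yr discount.
DISCOUNT = 0.80  # each year out, a marginal unit is worth 0.8x as much

def relative_value(year: int, base_year: int = 2023) -> float:
    """Value of one marginal unit in `year`, measured in base-year units."""
    return DISCOUNT ** (year - base_year)

assert relative_value(2024) == 0.8                   # the 0.8-for-1 trade
assert abs(relative_value(2028) - 0.32768) < 1e-12   # 0.8**5 after 5 years
```

The compounding is what makes 20% so aggressive for money: a dollar in 2028 would be worth only about a third of a dollar today.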