Forbes estimates that the seven co-founders are now worth $1-2 billion each.
I’m not proposing any sort of hard rule against concluding that some people’s lives are net negative/harmful. As a heuristic, you shouldn’t think it’s bad to save the lives of ordinary people who seem to be mostly reasonable, but who contribute to harmful animal agriculture.
The pluralism here is between human viewpoints in general. Very naively, if you think every human has equal insight into morality you should maximize the lifespan and resources that go to any and all humans without considering at all what they will do. That’s too much pluralism, of course, but I think refraining from cheaply saving human lives because they’ll eat meat is too far in the other direction.
I think if you put some weight on viewpoint pluralism, you should mostly not conclude that other people's lives aren't valuable because those people will make the wrong moral choices.
Toby Ord’s existential risk estimates in The Precipice were for risk this century (by 2100) IIRC. That book was very influential in x-risk circles around the time it came out, so I have a vague sense that people were accepting his framing and giving their own numbers, though I’m not sure quite how common that was. But these days most people talking about p(doom) probably haven’t read The Precipice, given how mainstream that phrase has become.
Also, in some classic hard-takeoff + decisive-strategic-advantage scenarios, p(doom) in the few years after AGI would be close to p(doom) in general, so these distinctions don’t matter that much. But nowadays I think people are worried about a much greater diversity of threat models.
I’m probably “on the clock” about 45 hours per week—I try to do about 8 hours a day but I go over more often than not. But maybe only about 25-35 hours of that is focused work, using a relatively loose sense of “focused” (not doing something blatantly non-work, like reading Twitter or walking around outside). I think my work output is constrained by energy levels, not clock time, so I don’t really worry about working longer hours or trying to stay more focused, but I do try to optimize work tasks and non-work errands to reduce their mental burdens.
I think you're overestimating how high EA-org salary spending is compared to (remaining) total EA funding per year (in the neighborhood of 10%?).
I think the benefits of living in a hub city (SF, NYC, Boston, or DC) are very large and are well worth the higher costs, assuming it’s financially feasible at all, especially if you currently have no personal network in any city. You’ll have easy access to interesting and like-minded people, which has many diffuse benefits, both for your impact and personally.
Also, those are probably the only American cities, besides maybe Chicago and Philly, where it's easy to live without a car (and arguably it's only NYC).
I loved this Wikitravel article about American culture for this same reason.
What makes someone good at AI safety work? How do they get feedback on whether their work is useful, makes sense, etc.?
see also
For the big-buck EtGers, what sort of donation percentages is this advice assuming? I imagine that if you’re making $1M and even considering direct work then you’re giving >>10% (>50%?) but I’m not sure.
“Neglectedness” is a potentially confusing simplification of true impact
I also actually have no idea how people do this, curious to see answers!
Also, the questions seem to assume that grantees don’t have another (permanent, if not full-time) job. I’m not sure how common that is.
Melatonin supplements can increase the vividness of dreams, which seems counterproductive here. But maybe there is a drug with the opposite effect?
Anyone trying to think about how to do the most good will be very quickly and deeply confused if they aren’t thinking at the margin. E.g. “if everyone buys bednets, what happens to the economy?”
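A toy sketch of what that marginal framing means in practice, with entirely made-up numbers (the diminishing-returns curve and spending figures below are hypothetical, purely to illustrate marginal vs. average reasoning):

```python
# Toy illustration of thinking at the margin (all numbers are hypothetical).
# The question that matters is "what does the *next* dollar do?", not
# "what if everyone bought bednets?" or "what has the average dollar done?"

def lives_saved(total_spend_m: float) -> float:
    """Hypothetical diminishing-returns curve: lives saved as a function
    of total bednet spending, in $ millions."""
    return 1000 * total_spend_m ** 0.5

current_spend_m = 100.0   # assumed spending to date ($M) -- made up
extra_m = 1.0             # your additional $1M donation

average_per_m = lives_saved(current_spend_m) / current_spend_m
marginal = lives_saved(current_spend_m + extra_m) - lives_saved(current_spend_m)

print(f"Average lives saved per $M so far: {average_per_m:.1f}")
print(f"Lives saved by the next $M:        {marginal:.1f}")
# The marginal number (~50 here) differs from the average (~100), and it's
# the marginal one that should guide where an additional donation goes.
```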
It might help to put some rough numbers on this. Most of the EA org non-technical job postings that I have seen recently have been in the $60-120k/year range or so. I don’t think those are too high, even at the higher end of that range. But value alignment concerns (and maybe PR and other reasons) seem like a good reason to not offer, say, $300k or more for non-executive and non-technical roles at EA orgs.
I think EA orgs generally pay higher salaries than other non-profits, but below-market for the EA labor market (many of whom have software, consulting, etc. as alternatives). I don’t think they’re anywhere close to “impact value” based on anecdotal reports of how much EA orgs value labor. I believe 80k did a survey on this (Edit: it’s here).
whoa I used to teach there back in the day. This is cool!
R1 is probably not 6x cheaper than o1-mini and 30x cheaper than o1 in terms of the actual, underlying cost (meaning that DeepSeek probably charges a much lower gross margin on its API than OpenAI does). R1 has 37B active parameters (though its 671B total parameters are also relevant). We don’t know how many parameters o1-mini or o1 have, but IMO they’re probably a lot less than ~200B and ~1T respectively, which is roughly what they’d need to be (6x and 30x R1’s 37B active parameters) for the price gap to track underlying cost.
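A quick back-of-the-envelope sketch of that arithmetic, assuming (very crudely) that per-token serving cost scales linearly with active parameter count; the price ratios are approximations and the implied o1/o1-mini sizes are inferences, not known figures:

```python
# Back-of-the-envelope: if per-token serving cost scaled linearly with
# active parameters, how big would o1-mini and o1 need to be for their
# API prices to track underlying cost? (Crude: ignores MoE overhead,
# hardware, batching, context length, margins, etc.)

R1_ACTIVE_PARAMS_B = 37        # DeepSeek-R1 active parameters, in billions
PRICE_RATIO_O1_MINI = 6        # approx. o1-mini price / R1 price
PRICE_RATIO_O1 = 30            # approx. o1 price / R1 price

implied_o1_mini_b = R1_ACTIVE_PARAMS_B * PRICE_RATIO_O1_MINI   # ~222B
implied_o1_b = R1_ACTIVE_PARAMS_B * PRICE_RATIO_O1             # ~1110B

print(f"Implied o1-mini size at cost parity: ~{implied_o1_mini_b}B active params")
print(f"Implied o1 size at cost parity:      ~{implied_o1_b}B active params")
# If the real models are much smaller than these figures, the price gap is
# mostly about margins rather than underlying cost.
```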