A couple of years ago, it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you’ve hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?
Longtermism isn’t just AI risk, but concern with AI risk is associated with an Elon Musk-technofuturist-technolibertarian-Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.
If we’re using these numbers to inform whether EA is funding constrained, it would be good if someone followed up and figured out how much these organizations actually ended up raising.
Fundraising is particularly effective in open primaries, such as this one. From the linked article:
But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.
Basically, said Darrell West, vice president and director of governance studies at the Brookings Institution, advertising is useful for making voters aware that a candidate or an issue exists at all. Once you’ve established that you’re real and that enough people are paying attention to you to give you a decent chunk of money, you reach a point of diminishing returns (i.e., Paul Ryan did not have to spend $13 million to earn his seat). But a congressperson running in a close race, with no incumbent — or someone running for small-potatoes local offices that voters often just skip on the ballot — is probably getting a lot more bang for their buck.
It seems that Nick has not been able to leverage his position as EA Funds manager to outperform his Open Phil grants (or at least to meaningfully distinguish his EA Funds grants from his Open Phil grants). This means we can think of donating to the Far Future and Community funds as having similar cost-effectiveness to individual donations to Open Phil earmarked for those causes. That seems like a problem, since the best individual donations should be able to outperform Open Phil, at least once you account for the benefits of not concentrating donations in too few decision-makers. And I don’t see anyone calling for Open Phil to accept/solicit money from small donors.
The case for finding another manager seems pretty strong. EA Funds is a fundamentally sound idea: we should be trying to consolidate donation decisions somewhat, to take advantage of different levels of expertise and save small donors’ time and mental energy. But this doesn’t seem like the best way to do it.
I wonder if the lack of tax deductibility and the non-conventional fundraising platform (GoFundMe) nudge people into not donating or donating less than they would to a more respectable-seeming charity.
(As a tangent, there’s a donation swap opportunity for the EA Hotel that most people are probably not aware of).
This post contains an extensive discussion on the difficulty of evaluating AI charities because they do not share all of their work due to info hazards (in the “Openness” section as well as the MIRI review). Will you have access to work that is not shared with the general public, and how will you approach evaluating research that is not shared with you or not shared with the public?
Under what conditions would you consider making a grant directed towards catastrophic risks other than artificial intelligence?
My description was based on Buck’s correction (I don’t have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don’t believe. I don’t mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical stand-in for ideologies I don’t like, but quite deliberately: they literally exist in substantial numbers in EA-adjacent online spaces, and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and its adjacency to rationalist networks is a clear example.
Open Phil would be a good candidate for this, though that’s a difficult proposition given its sheer size. It’s a somewhat odd situation that Open Phil moves huge amounts of money around, much of it without any comment from the EA community.
For the big-buck EtGers, what sort of donation percentages is this advice assuming? I imagine that if you’re making $1M and even considering direct work then you’re giving >>10% (>50%?) but I’m not sure.
How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?
We’ve already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online ones) that don’t moderate away or at least discourage such views tend to attract them; it’s not the pattern of activity you’d see if white nationalists were randomly bouncing around or if people were organically arriving at those views. I think this is quite dangerous for epistemic norms: white nationalist/supremacist views are deeply incorrect, they deter large swaths of potential participants, and people who hold them routinely argue in bad faith, hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It’s also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.
Seems like its mission sits somewhere between GiveWell’s and Charity Navigator’s. GiveWell studies a few charities to find the very highest impact ones according to its criteria. Charity Navigator attempts to rate every charity, but does so purely on procedural considerations like overhead. ImpactMatters is much broader and shallower than GiveWell but unlike Charity Navigator does try to tell you what actually happens as the result of your donation.
Sadly, Jiwoon passed away last year.
“Neglectedness” is a potentially confusing simplification of true impact
To me it feels easier to participate in discussions on Twitter than on (e.g.) the EA Forum, even though you’re allowed to post a forum comment with fewer than 280 characters. This makes me a little worried that people feel intimidated about offering “quick takes” here because most comments are pretty long. I think people should feel free to offer feedback more detailed than an upvote/downvote without investing a lot of time in a long comment.
It’s clear that climate change has at best a small probability (well under 10%) of causing human extinction, but many proponents of working on other x-risks like nuclear war and AI safety would probably also give low probabilities of human extinction for those risks. I think the positive feedback scenarios you mention (permafrost, wetlands, and ocean hydrates) deserve some attention from an x-risk perspective because they seem to be poorly understood, so the upper bound on how severe they might be may be very high. You cite one simulation in which burning all available fossil fuels increases temperatures by 10 °C, but that isn’t necessarily an upper bound, because there are non-fossil-fuel sources of carbon on Earth that could be released into the atmosphere. It would of course also be necessary to estimate how high the extinction risk would be conditional on various levels of extreme warming (8 °C, 10 °C, 15 °C, 20 °C?).
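To make that last step concrete, the overall estimate would decompose roughly as follows (this is just a sketch of the structure, not anyone’s actual numbers):

$$P(\text{extinction from climate}) \approx \sum_{T} P(\text{peak warming} = T) \cdot P(\text{extinction} \mid \text{peak warming} = T),$$

where $T$ ranges over coarse warming scenarios (e.g. 8 °C, 10 °C, 15 °C, 20 °C). On this framing, the poorly understood feedbacks matter mainly through how much probability mass they put on the extreme values of $T$.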
Regardless, it’s a good idea to have a clear view of how big the risk is. You’re right that the casual claims about extinction or planetary uninhabitability I hear from many people who are concerned about climate change are not justified, and they seem a bit irresponsible.
I think EA orgs generally pay higher salaries than other non-profits, but below market rates for the people they hire (many of whom have software, consulting, etc. as alternatives). I don’t think salaries are anywhere close to “impact value”, based on anecdotal reports of how much EA orgs value labor. I believe 80k did a survey on this (Edit: it’s here).