I think EA orgs generally pay higher salaries than other non-profits, but below market rate for the EA labor pool (many of whom have software, consulting, etc. as alternatives). I don’t think they’re anywhere close to “impact value,” based on anecdotal reports of how much EA orgs value labor. I believe 80k did a survey on this (Edit: it’s here).
whoa I used to teach there back in the day. This is cool!
Fundraising is particularly effective in open primaries, such as this one. From the linked article:
But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.
Basically, said Darrell West, vice president and director of governance studies at the Brookings Institution, advertising is useful for making voters aware that a candidate or an issue exists at all. Once you’ve established that you’re real and that enough people are paying attention to you to give you a decent chunk of money, you reach a point of diminishing returns (i.e., Paul Ryan did not have to spend $13 million to earn his seat). But a congressperson running in a close race, with no incumbent — or someone running for small-potatoes local offices that voters often just skip on the ballot — is probably getting a lot more bang for their buck.
Note that large funders such as SBF can and do support political candidates with large donations via PACs, which can advertise on behalf of a candidate but are not allowed to coordinate with them directly. But direct donations are probably substantially more cost-effective than PAC money, because campaigns have more options for how to spend the money (door-knocking, events, etc., not just ads), and it would look bad if a candidate were supported exclusively by PACs.
If you’re not planning to go to grad school (and maybe even if you are), getting straight As in college probably means a lot of unnecessary effort.
I gave most of my donations to the EA Funds Donor Lottery because I felt pretty uncertain about where to give. I am still undecided on which cause to prioritize, but I have become fairly concerned about existential risk from AI and I don’t think I know enough about the donation opportunities in that space. If I won the lottery, I would then take some more time to research and think about this decision.
I also donated to Wild Animal Initiative and Rethink Priorities because I still want to keep a regular habit of making donation decisions. I think they are the two best organizations working on wild-animal welfare, which is potentially a highly cost-effective cause area because of the very large number of wild animals in existence. I also donated to GiveWell’s Maximum Impact Fund.
I did Metaculus for a while, but I wasn’t quite sure how to assess how well I was doing and I lost interest. I know the Brier score isn’t the greatest metric. Just try to accumulate points?
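For anyone who wants a quick self-check: the Brier score is just the mean squared error between your probability forecasts and the binary outcomes (lower is better; always guessing 50% scores 0.25). Here is a minimal sketch in Python, assuming binary questions and your own exported predictions (this is not Metaculus’s actual points formula):

```python
# Minimal sketch of the Brier score for binary questions (not Metaculus's points formula).
# forecasts: predicted probabilities in [0, 1]; outcomes: 1 if the event happened, else 0.
def brier_score(forecasts, outcomes):
    """Mean squared error between forecasts and outcomes; lower is better, 0.25 = always guessing 50%."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example with three resolved questions:
print(brier_score([0.9, 0.2, 0.6], [1, 0, 0]))  # ~0.137
```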
What does “consequentialist” mean in this context?
A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you’ve hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?
Longtermism isn’t just AI risk, but concern with AI risk is associated with an Elon Musk/technofuturist/technolibertarian/Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.
How’s having two executive directors going?
How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?
My description was based on Buck’s correction (I don’t have any first-hand knowledge). My claim is that a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don’t believe. I don’t mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical stand-in for ideologies I don’t like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces, and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and its adjacency to rationalist networks is a clear example.
I also agree that it’s ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I’m not talking about conservatives, or the “IDW”, or people who don’t like the BLM movement or think racism is no big deal. I’d be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (like on par with the views in Jonah Bennett’s leaked emails. I don’t think his defense is credible at all, by the way).
I don’t think it’s accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they’re not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don’t think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.
We’ve already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online) that don’t moderate away or at least discourage such views tend to attract them; that’s not the pattern you’d see if white nationalists randomly bounced around or people organically arrived at those views. I think this is quite dangerous for epistemic norms: white nationalist/supremacist views are very incorrect, they deter large swaths of potential participants, and people who hold them routinely argue in bad faith by hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It’s also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.
This is essentially the premise of microfinance, right?
From what I understand, since Three Gorges is a gravity dam, meaning it uses the weight of the dam to hold back water rather than its tensile strength, a failure or collapse would not necessarily be catastrophic: if some portion fails, the rest will stay standing. That means there’s a distribution of severity within failures/collapses; it’s not just a binary outcome.
To me it feels easier to participate in discussions on Twitter than on (e.g.) the EA Forum, even though you’re allowed to post a forum comment with fewer than 280 characters. This makes me a little worried that people feel intimidated about offering “quick takes” here because most comments are pretty long. I think people should feel free to offer feedback more detailed than an upvote/downvote without investing a lot of time in a long comment.
Not from the podcast but here’s a talk Rob gave in 2015 about potential arguments against growing the EA community: https://www.youtube.com/watch?v=TH4_ikhAGz0
It might help to put some rough numbers on this. Most of the EA org non-technical job postings I have seen recently have been in the $60–120k/year range or so. I don’t think those are too high, even at the upper end of that range. But value-alignment concerns (and maybe PR and other reasons) seem like a good reason not to offer, say, $300k or more for non-executive, non-technical roles at EA orgs.