Earning to give, based in Minnesota. Board member at Wild Animal Initiative. Interested in catastrophic risks and wild animal suffering.
JoshYou
How’s having two executive directors going?
How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?
My description was based on Buck’s correction (I don’t have any first-hand knowledge). My claim is that a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don’t believe. I don’t mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical stand-in for ideologies I don’t like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage.) I think the neo-reactionary community and its adjacency to rationalist networks are a clear example.
I also agree that it’s ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I’m not talking about conservatives, or the “IDW”, or people who don’t like the BLM movement or think racism is no big deal. I’d be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (like on par with the views in Jonah Bennett’s leaked emails. I don’t think his defense is credible at all, by the way).
I don’t think it’s accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they’re not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don’t think that implies white nationalists will be virtually non-existent in a space just because most of its members are left of center.
We’ve already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online) that don’t moderate away or at least discourage such views will tend to attract them; it’s not the pattern of activity you’d expect if white nationalists randomly bounced around or people organically arrived at those views. I think this is quite dangerous for epistemic norms: white nationalist/supremacist views are very incorrect, they deter large swaths of potential participants, and people who hold them routinely argue in bad faith, hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It’s also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.
This is essentially the premise of microfinance, right?
From what I understand, since Three Gorges is a gravity dam, meaning it uses the weight of the dam to hold back water rather than its tensile strength, a failure or collapse would not necessarily be a catastrophic one: if some portion falls, the rest can stay standing. That means there’s a distribution of severity within failures/collapses rather than a binary outcome.
To me it feels easier to participate in discussions on Twitter than on (e.g.) the EA Forum, even though you’re allowed to post a forum comment with fewer than 280 characters. This makes me a little worried that people feel intimidated about offering “quick takes” here because most comments are pretty long. I think people should feel free to offer feedback more detailed than an upvote/downvote without investing a lot of time in a long comment.
Not from the podcast but here’s a talk Rob gave in 2015 about potential arguments against growing the EA community: https://www.youtube.com/watch?v=TH4_ikhAGz0
EAs are probably more likely than the general public to keep money they intend to donate invested in stocks, since that’s a pretty common bit of financial advice floating around the community. So the large drop in stock prices in the past few weeks (and possible future drops) may affect EA giving more than giving as a whole.
How far do you think we are from completely filling the need for malaria nets, and what are the barriers left to achieving that goal?
What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?
Seems like its mission sits somewhere between GiveWell’s and Charity Navigator’s. GiveWell studies a few charities to find the very highest impact ones according to its criteria. Charity Navigator attempts to rate every charity, but does so based purely on procedural considerations like overhead. ImpactMatters is much broader and shallower than GiveWell but, unlike Charity Navigator, does try to tell you what actually happens as a result of your donation.
I think I would be more likely to share my donations this way compared to sharing them myself, because it would feel easier and less braggadocious (I currently do not really advertise my donations).
Among other things, I feel a sense of pride and accomplishment when I do good, the way I imagine that someone who cares about, say, the size of their house feels when they think about how big their house is.
Absolutely, EAs shouldn’t be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I’ve seen a few people, such as Julia Galef and Kelsey Piper, communicate effectively about EA issues on Twitter at a level of fidelity and niceness far above the average for that website. On the other hand, they are briefer and more flippant there, and they spend more time responding to critics outside the community than they would on other platforms.
Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it’s infamous for.
There’s a bit of an extended universe of Vox writers, economists, and “neoliberals” who are interested in EA and sometimes tweet about it, and I think it would be potentially valuable to add some people who are more knowledgeable about EA into the mix.
On point 4, I wonder if more EAs should use Twitter. There are certainly many options to do more “ruthless” communication there, and it might be a good way to spread and popularize ideas. In any case it’s a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.
This all seems to assume that there is only one “observer” in the human mind, so that if you don’t feel or perceive a process, then that process is not felt or perceived by anyone. Have you ruled out the possibility of sentient subroutines within human minds?
Longtermism isn’t just AI risk, but concern with AI risk is associated with an Elon Musk/technofuturist/technolibertarian/Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.