Linch
The Grant Decision Boundary: Recent Cases from the Long-Term Future Fund
Less seriously, you might enjoy my 2022 April 1 post on Impact Island.
I think people take this into account, but not enough. I strongly suspect that when evaluating research, many people have a vague, not sufficiently precise, sense of both the numerator and the denominator, and their intuitions aren’t sufficiently linear. I know I do this myself unless it’s a grant I’m actively investigating.
This is easiest to notice in research because it’s both a) a large fraction of (non-global health and development) EA output and b) very gnarly. But I don’t think research is unusually gnarly among EA outputs; grantmaking, advocacy, comms, etc. have similar issues.
It might be too hard to envision an entire grand future, but it’s possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation passed; a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still delivering massive benefits (e.g. a cure for Alzheimer’s).
One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per “archetype” to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren’t judged by a real human, etc.
For more details and nuances, see Scott Aaronson and Julia Galef on “vote trading” in 2016: https://www.happyscribe.com/public/rationally-speaking-podcast/rationally-speaking-171-scott-aaronson-on-the-ethics-and-strategy-of-vote-trading
I’m really sorry to hear that. This sounds really stressful.
Appreciate the updated thoughts!
One thing I don’t understand is whether this approach is immune to fanaticism/takeover by moral theories that place very little (but nonzero) value on hedonism. Naively, a theory that (e.g.) values virtue at 10,000x hedonism will just be able to swamp hedonism-centric views in this approach, unless you additionally normalize in a different way.
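To make the swamping worry concrete, here’s a minimal toy sketch (my own illustration, not anything from the original discussion): under naive credence-weighted aggregation, a low-credence theory with enormous stakes dominates, whereas first rescaling each theory (here, by its standard deviation across options, i.e. variance-style normalization, which is just one possible choice) prevents that. The theory names, credences, and scores are all made up.

```python
import statistics

# Choice-worthiness of two options under two hypothetical theories.
# "hedonism" mildly prefers option A; "virtue_heavy" assigns huge value to B.
theories = {
    "hedonism":     {"credence": 0.90, "scores": {"A": 1.0, "B": 0.0}},
    "virtue_heavy": {"credence": 0.10, "scores": {"A": 0.0, "B": 10_000.0}},
}

def naive_aggregate(theories):
    """Credence-weighted sum of raw scores (vulnerable to swamping)."""
    options = next(iter(theories.values()))["scores"].keys()
    return {o: sum(t["credence"] * t["scores"][o] for t in theories.values())
            for o in options}

def normalized_aggregate(theories):
    """Rescale each theory's scores to unit standard deviation before weighting."""
    options = list(next(iter(theories.values()))["scores"].keys())
    totals = {o: 0.0 for o in options}
    for t in theories.values():
        sd = statistics.pstdev(t["scores"].values()) or 1.0
        for o in options:
            totals[o] += t["credence"] * t["scores"][o] / sd
    return totals

print(naive_aggregate(theories))       # {'A': 0.9, 'B': 1000.0} -> B swamps despite 10% credence
print(normalized_aggregate(theories))  # {'A': 1.8, 'B': 0.2}    -> hedonism's 90% credence matters
```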
Ah, interesting that you think many people put >50% on hedonism and similarly animal-friendly theories. 50% was intended to be generous; the last animal-welfare-friendly person I asked about this gave 20-40% IIRC. Pretty sure I am even lower.
One thing to be careful of re: question framing is to constrain the set of theories under consideration to altruism-relevant ones. E.g. many people will place nontrivial credence in nihilism, egoism, or commonsense morality, but most of those theories will not be particularly relevant to prioritizing the altruistic allocation of marginal donations.
You’d either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
I’m not sure I buy this disjunctive claim. Many people over humanity’s history have worked on reducing infant mortality (in technology, in policy, in direct aid, and in direct actions that prevent their own children/relatives’ children from dying). While some people worked on this because they primarily intrinsically value reducing infant mortality, I think many others were inspired by the indirect effects. And taking the long view, reducing infant mortality clearly had long-run benefits that are different from (and likely better than) equivalent levels of population growth while keeping infant mortality rates constant.
I guess I still don’t think “I would need to spend a lot of time as a representative of this position” amounts to being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues, and yet I’d consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than as something that makes me one.
I don’t think there is, or ought to be, an expectation to respond to every subpart of a comment in a reply.
To my eyes “be known as an anti-animal advocate” is a much lower bar than “be an anti-animal advocate.”
For example, I think some people will (still!) consider me an “anti-climate change advocate” (or “anti-anti-climate change advocate?”) due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (e.g. I’d be willing to defend my position if challenged, describe ways in which I’ve updated, etc.). Moreover, it is not implausible that, from their perspective, this is the most important thing I do (since they don’t interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader), this would be a bad characterization. I don’t view arguing against climate change interventions as an important aspect of my life, nor do I believe my views on the matter are particularly outside the academic consensus.
Hence the distinction between “known as” vs “become.”
This job posting seems related.
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to “I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)”.
I don’t think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it.
I don’t think this is what Jeff said.
This is why I’m now more convinced to divide EA into the orange, blue, yellow, green, and purple teams. Maybe the purple team is very concerned about maximising philanthropy and also very PR concerned. The red team is a little bit more rationalist influenced and takes up free speech as a core cause and things like that.
I’d love to know what he thinks the other colors should be!
Oh wow just read the whole pilot! It’s really cool! Definitely an angle on doing the most good that I did not expect.