(Probably Flawed) Solution for Casual EAs Trying to Maximize Impact Under Uncertainty and Confusion
The Plight of the Everyday Altruist
Allow me to articulate the plight that I, and (I assume) other EAs, go through in trying to do the most good we can:
At some point in your life, probably between the ages of 16 and 30, you begin considering your impact on the world. At first, the choices seem pretty obvious: go vegetarian or maybe vegan, give a sizable proportion of your wealth to charity, practice individual or collective environmentalism, and so on. Some people might stop here and go back to their lives pretty much as normal.
You, however, when looking into which charities to give to, likely have your first online encounter with EA and EA-adjacent databases and philosophy. The Against Malaria Foundation (AMF) is presented to you as the most cost-effective way to save human lives. In your research, you’re exposed to some new ideas: applying expected value to morality, QALYs, and the like.
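To give a toy sense of what that arithmetic looks like (the figures below are made up for illustration; they are not GiveWell’s or AMF’s actual estimates):

$$\frac{\$5{,}000 \text{ per life saved}}{\sim 35 \text{ QALYs gained per life}} \approx \$143 \text{ per QALY}$$

Comparisons like this are what let very different interventions be ranked against each other on a single scale.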
Merely learning these terms prompts your brain to generate questions: Should we really be so calculating and mathematical about morality? Is saving a life, all else being equal, really just as good as the quantity and quality of the years the person gets to experience following your intervention? Is creating those years via procreation also a good thing? Is “goodness” measured by what an agent wants, or just by the sensations their brain produces?
By no means are these easy questions, but they’re ones I can grapple with. My sense is that I, and most people within EA, tend to lean towards the more utilitarian, agent-neutral responses.
The Questions Too Complex for Many of Us to Satisfyingly Answer
But those aren’t the last of them. Further down the line come infinity and fanaticism, and there is a lot here I can’t quite get my head around. How do you compare different infinite sets, and can infinity “induce” paralysis in decision-making? The thought experiments alone are enough to give my head a wobble, but do they have any practical bearing on how we choose? What about religion, or manipulating an infinite multiverse/universe to create an infinite amount of utility? Is an arbitrarily high quality of experience the best you can get, or does combining it with an arbitrarily long time to experience it make things infinitely better? Is there a finite likelihood of creating infinite utility, or a greater likelihood of infinite disvalue? How can we distinguish an infinitesimal probability from a merely extremely small one, and what does an infinitesimal probability times an infinite value come out to, in terms of expected value?
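To make that last question concrete, here is a minimal expected-value comparison (the numbers are purely illustrative):

$$\mathbb{E}[\text{safe option}] = 1 \times 1{,}000 = 1{,}000, \qquad \mathbb{E}[\text{long shot}] = 10^{-20} \times \infty = \infty$$

Under naive expected-value maximization, the long shot dominates any finite-value option so long as its probability is any positive real number, however tiny; whether a genuinely infinitesimal probability changes that verdict depends on which non-standard number system you work in, which is exactly the sort of thing I can’t settle on my own.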
Many people might adopt an approach of anti-fanaticism (though I feel most arguments in its favor simply assume it is true rather than justifying it, and the resulting views usually end up intransitive and even more counter-intuitive). Others might just say it’s too complicated for a casual do-gooder with a normal profession and a busy life to understand. These people generally support widely accepted initiatives such as the AMF or animal welfare work.
But what about those of us who aren’t convinced by anti-fanaticism, who think that to do the most good by any metric we need to understand all of the questions I posed above? We don’t have the time, the intelligence, or the patience to come up with really satisfying answers, and any conclusion I reach on my own will have a high chance of being wrong.
The Idea
But there are people out there who are much smarter than me, who are able to devote a lot of time to answering these sorts of questions, and who also want to do the most good they can in the world. Presumably, their answers will be much more accurate than mine.
What if I could donate my money to one such philosopher directly, expecting that they will be much better at navigating the uncertainty than I will?
Many organizations already offer a fairly similar option (GiveWell, ACE, etc.), but these organizations are targeted at specific cause areas (saving lives, sparing farm animals, etc.) and are not considering the possibility that, in the far future, we could use the infinite energy from our universe’s collapse to create and design infinitely more universes of infinite bliss.
Potential Benefits
While much of what I’m talking about seems unique to EV fanatics, it still applies to many other people as well. Almost everyone has a fairly distinct set of values, which can make it difficult to figure out the best thing to do. One might be anti-fanatical, but how do they go about quantifying that? If one heavily incorporates non-consequentialist considerations into their altruism (such as valuing present good more than future good, people more than animals, or their own community slightly more than those on different continents), how should they go about finding the right charities to give to?
Essentially, my idea is that one could be matched with EA philosophers and leaders who share their values. They could then discuss the best-seeming options for how to use their money and career given those values, or, as I mentioned earlier, donate directly to the philosopher to allocate on their behalf.
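To give a very rough sketch of what “matching on values” could even mean in practice, here is a toy illustration (every name, value dimension, and number below is hypothetical, and a real matching service would need far more nuance than a similarity score):

```python
from math import sqrt

# Hypothetical value dimensions a donor or advisor might weight from 0 to 1.
VALUE_DIMENSIONS = [
    "expected_value_fanaticism",
    "animal_welfare_weight",
    "future_generations_weight",
    "community_partiality",
]

def cosine_similarity(a, b):
    """Similarity of two equal-length weight vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_match(donor, advisors):
    """Return the name of the advisor whose value profile best matches the donor's."""
    donor_vec = [donor[d] for d in VALUE_DIMENSIONS]
    return max(
        advisors,
        key=lambda name: cosine_similarity(
            donor_vec, [advisors[name][d] for d in VALUE_DIMENSIONS]
        ),
    )

# Made-up example profiles:
donor = {
    "expected_value_fanaticism": 0.2,
    "animal_welfare_weight": 0.9,
    "future_generations_weight": 0.5,
    "community_partiality": 0.3,
}
advisors = {
    "advisor_a": {"expected_value_fanaticism": 0.9, "animal_welfare_weight": 0.2,
                  "future_generations_weight": 0.9, "community_partiality": 0.1},
    "advisor_b": {"expected_value_fanaticism": 0.3, "animal_welfare_weight": 0.8,
                  "future_generations_weight": 0.4, "community_partiality": 0.4},
}

print(best_match(donor, advisors))  # -> advisor_b
```

The point is not the particular formula, but that a donor’s stated values could be compared against advisors’ stated values in some systematic way before any money or advice changes hands.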
If it works, it would be a great moral good for both parties: the casual EA knows that, in expectation, their money is going to a better cause that aligns with their values, and the philosopher/mentor can be essentially certain that, in expectation, they are diverting the money to a more effective cause than it would otherwise have gone to.
Ultimately, I don’t have much knowledge of the practicability of my idea (from either an administrative or a safety standpoint). This is not an idea I’ve seen discussed very much, and while it feels naïve to me, I think it is definitely worth discussing and potentially pursuing in some analogous form.
I’d appreciate any feedback you have on this!