So suppose you have a cause candidate, and some axes like the ones you mention:
Degree to which the area arguably runs into the cluelessness problem
Degree to which the area affects beings who don’t exist yet / Promisingness of the area under the totalist view.
But also some others, like
Promisingness given a weight of X humans to Y animals
Promisingness given that humans are ~infinitely more valuable than animals
Tractability of finding good people to get a project started
Units of resources needed
Probability that a trusted forecasting system gives to your project still existing in 2 years.
For simplicity, I’m going to just use three axes, but the below applies to more. Right now, the topmost vectors represent my own perspective on the promisingness of a cause candidate, across three axes, but they could eventually represent some more robust measure (e.g., the aggregate of respected elders, or some other measure you like more). The vectors at the bottom are the perspectives of people who disagree with me across some axis.
For example, suppose that the red vector was “ratio of the value of a human to a standard animal”, or “probability that a project in this cause area will successfully influence the long-term future”.
Then person number 2 can say “well, no, humans are worth much more than animals”. Or, “well, no, the probability of this project influencing the long-term future is much lower”. And person number 3 can say something like “well, overall I agree with you, but I value animals a little bit more, so my red axis is somewhat higher”, or “well, no, I think that the probability that this project has of influencing the long-term future is much higher”.
Crucially, they wouldn’t have to do this for every cause candidate. For example, if I value a given animal living a happier life the same as X humans, and someone else values that animal as 0.01X humans, or as 2X humans, they can just apply the transformation to my values.
Similarly, if someone is generally very pessimistic about the tractability of influencing the long-term future, they could transform my probabilities as to that happening. They could divide my probabilities by 10 (or, perhaps more naturally, subtract some fixed number of bits in log-odds space). Then the transformation might not be linear, but it would still be doable.
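As a minimal sketch of what I mean by the non-linear version (the specific probability and number of bits below are just placeholders), subtracting bits amounts to working in log-odds:

```python
import math

def subtract_bits(p, bits):
    """Shrink a probability by `bits` in log-odds space."""
    log_odds = math.log2(p / (1 - p))  # probability -> log-odds, measured in bits
    log_odds -= bits                   # apply the pessimistic shift
    return 1 / (1 + 2 ** -log_odds)    # back to a probability

# e.g., my 30% chance of influencing the long-term future, discounted by 2 bits
print(subtract_bits(0.30, 2))  # ~0.097
```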
Then, knowing the various axes, one could combine them to find out the expected impact. For example, one could multiply three axes to get the volume of the box, or add them as vectors and consider the length of the purple vector, or some other transformation which isn’t a toy example.
So, a difference in perspectives would be transformed into a change of basis.
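As a toy sketch of how the combination and the rescaling could be computed (all scores and weights below are made up):

```python
import numpy as np

# My scores for one cause candidate on the three axes (made-up numbers)
my_scores = np.array([4.0, 5.0, 2.0])

# Two toy ways of combining the axes into one "promisingness" number
volume = np.prod(my_scores)         # volume of the box
length = np.linalg.norm(my_scores)  # length of the summed (purple) vector

# Someone who weights the first axis 100x lower just rescales that axis,
# i.e., applies a diagonal matrix to all of my ratings at once
their_rescaling = np.diag([0.01, 1.0, 1.0])
their_volume = np.prod(their_rescaling @ my_scores)
```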
“Even in that case there’s a difficulty, in that anyone who disagrees with your stance on these foundational questions would then have the right to throw out all of your funneling work and just do their own.”
This doesn’t strike me as true. Granted, I haven’t done this yet, and I might never because other avenues might strike me as more interesting, but the possibility exists.
I think this comment is interesting. But I think you might be partially talking past jackmalde’s point (at least if I’m interpreting both of you correctly, which I might not be).
If jackmalde meant “If someone disagrees with your stance on these foundational questions, it would then make sense for them to disagree with or place no major credence in all of the rest of your reasoning regarding certain ideas”, then I think that would indeed be incorrect, for the reasons you suggest. Basically, as I think you suggest, caring more or less than you about nonhumans or creating new happy lives is not a reason why a person should disagree with your beliefs about various other factors that play into the promisingness of an idea (e.g., how much the intervention would cost, whether there’s strong reasoning that it would work for its intended objective).
But I’m guessing jackmalde meant something more like “If someone sufficiently strongly disagrees with your stance on these foundational questions, it would then make sense for them to not pay any attention to all of the rest of your reasoning regarding certain ideas.” That person might think something like:
“I have no particular reason to disagree with any of your reasoning about how promising this idea is for the population it intends to benefit, but I just think benefitting that population is [huge number] times less important than benefitting [this other population], so I really don’t care.”
Or “I have no particular reason to disagree with any of your reasoning about how promising this idea is if we aim to simply maximise expected utility even in Pascalian situations, and accept long chains of reasoning with limited empirical data behind them. But I’m just really firmly convinced that those are bad ways to make decisions and form beliefs, so I don’t really care how well the idea performs from that perspective.”
In other words, there may be single factors that are (a) not really evaluated by the activities mentioned in your model, and (b) sufficient to rule an idea out as worthy of consideration for some people, but not others.
Maybe a condensed version of all of that is: If there’s a class of ideas (e.g., things not at all focused on improving the long-term future) which one believes all have extremely small scores on one dimension relative to at least some members of another class of ideas (e.g., things focused on improving the long-term future), then it makes sense to not bother thinking about their scores on other dimensions, if any variation on those dimensions which one could reasonably expect would still be insufficient to make the total “volume” as high as that of the other class.
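With made-up numbers, that kind of screening might look something like:

```python
# Made-up numbers: if even the most optimistic plausible scores on the other
# dimensions can't make a class of ideas competitive, skip scoring it in detail.
best_volume_in_class_b = 9.0 * 8.0 * 7.0  # a strong idea focused on the long-term future
class_a_long_term_score = 0.001           # believed tiny for the whole other class
best_case_other_dims = 10.0 * 10.0        # most optimistic scores on the remaining axes
if class_a_long_term_score * best_case_other_dims < best_volume_in_class_b:
    print("don't bother scoring class A on the other dimensions")
```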
I’m not sure how often extreme versions of this would/should come up, partly because I think it’s relatively common and wise for people to be morally uncertain, decision-theoretically uncertain, uncertain about “what epistemologies they should have” (not sure that’s the right phrase), etc. But I think moderate versions come up a lot. E.g., I personally don’t usually engage deeply with reasoning for near-term human or near-term animal interventions anymore, due to my stance on relatively “foundational questions”.
(None of this would mean people wouldn’t pay attention to any results from the sort of process proposed in your model. It just might mean many people would only pay attention to the subset of results which are focused on the sort of ideas that pass each person’s initial “screening” based on some foundational questions.)
So suppose that the intervention was about cows, and I (the vectors in “1” in the image) gave them some moderate weight X, the length of the red arrow. Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller. I’m thinking that the volume represents promisingness. But they can just apply that division X → 0.0001X to all my ratings, and calculate their new volumes and ratings (which will be different from mine, because cause areas which only affect, say, humans, won’t be affected).
In the case where someone gives animals no weight at all, the red arrow would go completely to 0, and that person would just focus on the area of the square in which the blue and green arrows lie, across all cause candidates. Because I am looking at volume and they are looking at areas, our ratings will again differ.
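A minimal sketch of those two cases, with made-up ratings:

```python
import numpy as np

# My ratings for a cow-focused idea on the three axes (made-up numbers)
cow_idea = np.array([4.0, 5.0, 2.0])  # red = animal weight, then blue and green
print(np.prod(cow_idea))              # my volume: 40.0

# Someone who weights cows at 0.0001X shrinks the red arrow...
print(np.prod(np.diag([0.0001, 1.0, 1.0]) @ cow_idea))  # their volume: 0.004

# ...and someone who gives cows no weight drops the red axis entirely and
# ranks candidates by the area spanned by the blue and green arrows instead
print(np.prod(cow_idea[1:]))          # their area: 10.0
```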
This cube approach is interesting, but my instinctive response is to agree with MichaelA: if someone doesn’t think influencing the long-run future is tractable, then they will probably just want to entirely filter out longtermist cause areas from the very start and focus on shorttermist areas. I’m not sure comparing areas/volumes between shorttermist and longtermist areas will be something they will be that interested in doing. My feeling is the cube approach may be overcomplicating things.
If I were doing this myself, or starting an ’exploratory altruism’ organisation similar to the one Charity Entrepreneurship is thinking about starting, I would probably take one of the following two approaches:
Similar to 80,000 Hours, just decide what the most important class of cause areas is to focus on at the current margin and ignore everything else. 80K has decided to focus on longtermist cause areas and has outlined clearly why they are doing this (key ideas page has a decent overview). So people know what they are getting from 80K and 80K can freely assume totalism, vastness of the future etc. when they are carrying out their research. The drawback of this approach is it alienates a lot of people, as evidenced by the founding of a new careers org, ‘Probably Good’.
Try to please everyone by carrying out multiple distinct funnelling exercises, one for each class of cause area (say near-term human welfare, near-term animal welfare, x-risk, non-x-risk longtermist areas). Each funnelling exercise would make different foundational assumptions according to that cause area. People could then just choose which funnelling exercise to pay attention to and, in theory, everybody wins. The drawback to this approach, as 80K would put it, is that it probably means spending a lot of time focusing on cause areas that you don’t think are actually that high value, which may just be very inefficient.
I think this decision is tough, but on balance I would probably go for option 1 and would focus on longtermist cause areas, in part because shorttermist areas have historically been given much more thought so there is probably less meaningful progress that can be made there.
“Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller.”
Yeah, I agree with this.
What I’m saying is that, if they and you disagree sufficiently strongly on one factor in a way that they already know about before this process starts, they might justifiably be confident that this adjustment will mean any ideas in category A (e.g., things focused on helping cows) will be much less promising than some ideas in category B (e.g., things focused on helping humans in the near-term, or things focused on beings in the long-term).
And then they might justifiably be confident that your evaluations of ideas in category A won’t be very useful to them (and thus aren’t worth reading, aren’t worth funding you to make, etc.)
I think this is broadly the same sort of reasoning that leads GiveWell to rule out many ideas (e.g., those that focus on benefitting developed-world populations) before even doing shallow reviews. Those ideas could vary substantially on many dimensions GiveWell cares about, but they still predict that almost all of the ideas that are best by their lights would be found in a different category, one already known to typically score much higher on some other dimension (I guess neglectedness, in this case).
(I haven’t followed GiveWell’s work very closely for a while, so I may be misrepresenting things.)
(None of this will be the case if a person is more uncertain about e.g. which population group it’s best to benefit or which epistemic approaches should be used. So, e.g., the ratings for near-term animal welfare focused ideas should still be of interest to some portion of longtermism-leaning people.)
I also agree with this. Again, I’d just say that some ideas might only warrant attention if we do care about the red arrow—we might be able to predict in advance that almost all of the ideas with the largest “areas” (rather than “volumes”) would not be in that category. If so, then people might have reason to not pay attention to your other ratings for those ideas, because their time is limited and they should look elsewhere if they just want high-area ideas.
Another way to frame this would be in terms of crucial considerations: “a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority.”
A quick example: If Alice currently thinks that a 1 percentage point reduction in existential risk is many orders of magnitude more important than a 1 percentage point increase in the average welfare of people in developing nations*, then I think looking at ratings from this sort of system for ideas focused on improving welfare of people in developing nations is not a good use of Alice’s time.
I think she’d use that time better by doing things like:
looking at ratings of ideas focused on reducing existential risk
looking at ideas focused on proxies that seem more connected to reducing existential risk
looking specifically at crucial-consideration-y things like “How does improving welfare of people in developing nations affect existential risk?” or “What are the strongest arguments for focusing on welfare in developing nations rather than on existential risk?”
This wouldn’t be aided much by answers to questions like “Has [idea X] been implemented yet? How costly would it be? What is the evidence that it indeed achieves its stated objective?”
See also Charity Entrepreneurship’s “supporting reports”, which “focus on meta and cross-cutting issues that affect a large number of ideas and would not get covered by our standard reports. Their goal is to support the consideration of different ideas.”
*I chose those proxies and numbers fairly randomly.
To be clear: I am not saying that your model, or the sort of work that’s sort-of proposed by the model, wouldn’t be valuable. I think it would be valuable. I’m just explaining why I think some portions of the work won’t be particularly valuable to some portion of EAs. (Just as most of GiveWell’s work or FHI’s work isn’t particularly valuable—at least on the object level—to some EAs.)
Makes sense, thanks
Nitpick: A change of basis might also be combined with a projection into a subspace. In the example, if one doesn’t care about animals, or about the long term future at all, then instead of the volume of the cuboid they’d just consider the area of one of its faces.
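A quick sketch of that composition, again with made-up numbers:

```python
import numpy as np

scores = np.array([4.0, 5.0, 2.0])  # made-up ratings: animals, long-term, tractability
rescale = np.diag([1.0, 0.1, 1.0])  # e.g., more pessimistic about the long-term axis
project = np.diag([0.0, 1.0, 1.0])  # and no weight at all on the animal axis
remaining = (project @ rescale @ scores)[1:]
area = remaining[0] * remaining[1]  # area of the remaining face of the cuboid: 1.0
```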
Another nitpick: The human-to-animal value ratio would depend on the specific animals. However, I sort of feel that the high level disagreements of the sort jackmalde is pointing to are probably about the ratio of the value of a happy human life to that of a happy cow life, not about the ratio of the value of a happy cow life to that of a happy pig, chicken, insect, etc.