Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller.
Yeah, I agree with this.
What I'm saying is that, if they and you disagree sufficiently on one factor, in a way they already know about before this process starts, they might justifiably be confident that this adjustment will mean any ideas in category A (e.g., things focused on helping cows) will be much less promising than some ideas in category B (e.g., things focused on helping humans in the near term, or things focused on beings in the long term).
And then they might justifiably be confident that your evaluations of ideas in category A won't be very useful to them (and thus aren't worth reading, aren't worth funding you to make, etc.).
I think this is broadly the same sort of reasoning that leads GiveWell to rule out many ideas (e.g., those focused on benefiting developed-world populations) before even doing shallow reviews. Those ideas could vary substantially on many dimensions GiveWell cares about, but GiveWell still predicts that almost all of the ideas that are best by its lights will be found in a different category, one already known to typically score much higher on some other dimension (neglectedness, I'd guess, in this case).
(I haven't followed GiveWell's work very closely for a while, so I may be misrepresenting things.)
(None of this holds if a person is more uncertain about, e.g., which population group it's best to benefit or which epistemic approaches should be used. So, e.g., the ratings for near-term animal-welfare-focused ideas should still be of interest to some portion of longtermism-leaning people.)
In this case, the red arrow would go completely to 0, and that person would just focus on the area of the square in which the blue and green arrows lie, across all cause candidates. Because I am looking at volume and they are looking at areas, our ratings will again differ.
I also agree with this. Again, I'd just say that some ideas might only warrant attention if we do care about the red arrow: we might be able to predict in advance that almost all of the ideas with the largest "areas" (rather than "volumes") would not be in that category. If so, then people might have reason not to pay attention to your other ratings for those ideas, because their time is limited and they should look elsewhere if they just want high-area ideas.
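To make that concrete, here's a minimal sketch in Python of how I'm picturing the volume-vs-area distinction. The factor scores, beneficiary weights, and the exact scoring rule (a simple product of arrow lengths) are my own assumptions for illustration, not something taken from your post:

```python
# A rough sketch of the volume-vs-area point above, with made-up numbers.
# Assumptions (mine): each idea has three factor scores (the "red", "blue",
# and "green" arrows), the red arrow is scaled by how much the evaluator
# values the idea's beneficiaries, and the overall rating is the product of
# all three arrows ("volume") or of blue and green only ("area").

ideas = {
    # name: (red, blue, green, beneficiary group)
    "cow-welfare idea (category A)":   (9.0, 4.0, 5.0, "cows"),
    "human-welfare idea (category B)": (6.0, 5.0, 5.0, "humans"),
}

def volume(red, blue, green, group, weights):
    """'Volume': product of all three arrows, with the red arrow scaled
    by this evaluator's weight on the beneficiary group."""
    return (red * weights[group]) * blue * green

def area(blue, green):
    """'Area': ignore the red arrow entirely (weight effectively zero)."""
    return blue * green

# Evaluator 1 weights cows and humans equally; evaluator 2 weights cows at 0.0001x.
for label, weights in [("equal weights", {"cows": 1.0, "humans": 1.0}),
                       ("cows at 0.0001x", {"cows": 0.0001, "humans": 1.0})]:
    print(label)
    for name, (red, blue, green, group) in ideas.items():
        print(f"  {name}: volume={volume(red, blue, green, group, weights):.4f}, "
              f"area={area(blue, green):.1f}")
```

With equal weights the cow-focused idea has the larger volume, but once cows are weighted at 0.0001x its volume collapses, and by area alone the human-focused idea wins either way. That's the sense in which someone who has already fixed that weight can skip the whole category without reading the detailed ratings.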
Another way to frame this would be in terms of crucial considerations: "a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority."
A quick example: if Alice currently thinks that a 1-percentage-point reduction in existential risk is many orders of magnitude more important than a 1-percentage-point increase in the average welfare of people in developing nations*, then I think looking at ratings from this sort of system for ideas focused on improving the welfare of people in developing nations is not a good use of Alice's time.
I think she'd use that time better by doing things like:
looking at ratings of ideas focused on reducing existential risk
looking at ideas focused on proxies that seem more connected to reducing existential risk
looking specifically at crucial-consideration-y things like "How does improving the welfare of people in developing nations affect existential risk?" or "What are the strongest arguments for focusing on welfare in developing nations rather than on existential risk?"
None of that would be aided much by answers to questions like "Has [idea X] been implemented yet? How costly would it be? What is the evidence that it indeed achieves its stated objective?"
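For concreteness, here's the rough arithmetic I have in mind for the Alice example. The 10^6 weight and the per-idea figures are purely illustrative assumptions of mine:

```python
# Back-of-the-envelope arithmetic for the Alice example, with invented numbers.
# Assume Alice weights "percentage points of existential risk reduced" a
# million times more heavily than "percentage points of average welfare
# gained in developing nations" ("many orders of magnitude"; 10**6 is just
# an illustrative choice).

XRISK_WEIGHT = 10**6
WELFARE_WEIGHT = 1

# Hypothetical per-idea effects, in percentage points of the relevant proxy.
best_welfare_idea = 5.0      # an unusually strong developing-nations idea
middling_xrisk_idea = 0.01   # a fairly unremarkable existential-risk idea

print("best welfare idea:   ", WELFARE_WEIGHT * best_welfare_idea)     # 5.0
print("middling x-risk idea:", XRISK_WEIGHT * middling_xrisk_idea)     # 10000.0
# Even the best welfare-focused idea scores roughly 2,000x lower, so detailed
# ratings within that category do little to change Alice's ranking.
```

Given a gap like that, finer-grained information within the down-weighted category can't realistically change her priorities, which is why the crucial-consideration-level questions are the ones worth her time.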
See also Charity Entrepreneurship's "supporting reports", which "focus on meta and cross-cutting issues that affect a large number of ideas and would not get covered by our standard reports. Their goal is to support the consideration of different ideas."
*I chose those proxies and numbers fairly randomly.
To be clear: I'm not saying that your model, or the sort of work it roughly proposes, wouldn't be valuable. I think it would be valuable. I'm just explaining why I think some portions of the work won't be particularly valuable to some portion of EAs. (Just as most of GiveWell's work or FHI's work isn't particularly valuable, at least on the object level, to some EAs.)
Makes sense, thanks