Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller.
Yeah, I agree with this.
What I'm saying is that, if they and you disagree sufficiently on one factor, in a way that they already know about before this process starts, they might justifiably be confident that this adjustment will mean any ideas in category A (e.g., things focused on helping cows) will be much less promising than some ideas in category B (e.g., things focused on helping humans in the near term, or things focused on beings in the long term).
And then they might justifiably be confident that your evaluations of ideas in category A won't be very useful to them (and thus aren't worth reading, aren't worth funding you to make, etc.).
I think this is broadly the same sort of reasoning that leads GiveWell to rule out many ideas (e.g., those focused on benefitting developed-world populations) before even doing shallow reviews. Those ideas could vary substantially on many dimensions GiveWell cares about, but GiveWell still predicts that almost all of the ideas that are best by its lights will be found in a different category, one already known to typically score much higher on some other dimension (neglectedness, I'd guess, in this case).
(I haven't followed GiveWell's work very closely for a while, so I may be misrepresenting things.)
(All of this will not be the case if a person is more uncertain about, e.g., which population group it's best to benefit or which epistemic approaches should be used. So, e.g., the ratings for near-term animal-welfare-focused ideas should still be of interest to some portion of longtermism-leaning people.)
In this case, the red arrow would go completely to 0, and that person would just focus on the area of the square in which the blue and green arrows lie, across all cause candidates. Because I am looking at volume and they are looking at areas, our ratings will again differ.
I also agree with this. Again, I'd just say that some ideas might only warrant attention if we do care about the red arrow: we might be able to predict in advance that almost all of the ideas with the largest "areas" (rather than "volumes") would not be in that category. If so, then people might have reason not to pay attention to your other ratings for those ideas, because their time is limited and they should look elsewhere if they just want high-area ideas.
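To make the "areas" vs. "volumes" point concrete, here is a toy sketch. The idea names and scores are made up, and it assumes the three arrows simply multiply together, which may be a simplification of the actual model: an idea can rank first when all three arrows count and last once the red arrow is effectively ignored.

```python
# Toy sketch with made-up numbers: each idea gets a score for the
# red, blue, and green arrows (e.g., value to animals, to near-term
# humans, and to the long-term future).
ideas = {
    "idea focused on helping cows (category A)": {"red": 9.0, "blue": 1.0, "green": 1.2},
    "near-term human-focused idea (category B)": {"red": 0.5, "blue": 6.0, "green": 3.0},
    "longtermist idea (category B)":             {"red": 0.2, "blue": 4.0, "green": 8.0},
}

def volume(s):
    # "Volume": all three arrows matter.
    return s["red"] * s["blue"] * s["green"]

def area(s):
    # "Area": the red arrow gets ~zero weight, so only blue and green matter.
    return s["blue"] * s["green"]

for name in sorted(ideas, key=lambda k: volume(ideas[k]), reverse=True):
    print(f"by volume: {name}  ({volume(ideas[name]):.1f})")
for name in sorted(ideas, key=lambda k: area(ideas[k]), reverse=True):
    print(f"by area:   {name}  ({area(ideas[name]):.1f})")
```

With these arbitrary numbers, the category-A idea ranks first by volume but last by a wide margin by area, which is why someone who already knows they give the red arrow near-zero weight may reasonably skip detailed ratings of that whole category.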
Another way to frame this would be in terms of crucial considerations: "a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority."
A quick example: If Alice currently thinks that a 1 percentage point reduction in existential risk is many orders of magnitude more important than a 1 percentage point increase in the average welfare of people in developing nations*, then I think looking at ratings from this sort of system for ideas focused on improving welfare of people in developing nations is not a good use of Alice's time.
I think she'd use that time better by doing things like:
looking at ratings of ideas focused on reducing existential risk
looking at ideas focused on proxies that seem more connected to reducing existential risk
looking specifically at crucial-consideration-y things like "How does improving welfare of people in developing nations affect existential risk?" or "What are the strongest arguments for focusing on welfare in developing nations rather than on existential risk?"
This wouldn't be aided much by answers to questions like "Has [idea X] been implemented yet? How costly would it be? What is the evidence that it indeed achieves its stated objective?"
See also Charity Entrepreneurship's "supporting reports", which "focus on meta and cross-cutting issues that affect a large number of ideas and would not get covered by our standard reports. Their goal is to support the consideration of different ideas."
*I chose those proxies and numbers fairly randomly.
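To put rough numbers on the Alice example above (these are arbitrary too, per the footnote, and not anything from the original discussion): if the existential-risk proxy gets, say, 10,000 times the weight of the developing-world-welfare proxy, then even an unusually strong welfare idea cannot close the gap, so finer-grained ratings within that category would not change Alice's bottom line.

```python
# Arbitrary illustrative weights and scores, not Alice's actual views.
weight_xrisk   = 10_000  # value per unit of the existential-risk proxy
weight_welfare = 1       # value per unit of the developing-world-welfare proxy

typical_xrisk_idea    = weight_xrisk * 1       # a middling idea on the x-risk proxy
standout_welfare_idea = weight_welfare * 100   # an idea 100x better than typical on its proxy

print(standout_welfare_idea / typical_xrisk_idea)  # 0.01, i.e. still only ~1% as valuable
```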
To be clear: I'm not saying that your model, or the sort of work it sort-of proposes, wouldn't be valuable. I think it would be valuable. I'm just explaining why I think some portions of the work won't be particularly valuable to some portion of EAs. (Just as most of GiveWell's work or FHI's work isn't particularly valuable, at least on the object level, to some EAs.)
Makes sense, thanks