Another way to frame this would be in terms of crucial considerations: "a consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority."
A quick example: If Alice currently thinks that a 1 percentage point reduction in existential risk is many orders of magnitude more important than a 1 percentage point increase in the average welfare of people in developing nations*, then I think looking at ratings from this sort of system for ideas focused on improving welfare of people in developing nations is not a good use of Alice's time.
I think she'd use that time better by doing things like:
looking at ratings of ideas focused on reducing existential risk
looking at ideas focused on proxies that seem more connected to reducing existential risk
looking specifically at crucial-consideration-y things like "How does improving welfare of people in developing nations affect existential risk?" or "What are the strongest arguments for focusing on welfare in developing nations rather than on existential risk?"
This wouldn't be aided much by answers to questions like "Has [idea X] been implemented yet? How costly would it be? What is the evidence that it indeed achieves its stated objective?"
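To make the orders-of-magnitude point concrete (a toy sketch with arbitrary numbers, in the spirit of the footnote below): suppose Alice values a 1 percentage point x-risk reduction at $10^6$ times a 1 percentage point welfare gain. Then even if the rating system showed a welfare idea to be, say, 100 times more cost-effective than she expected, the comparison still wouldn't be close:

\[
\frac{V_{\text{x-risk}}}{V_{\text{welfare}}} = 10^{6}, \qquad 100 \cdot V_{\text{welfare}} = 10^{2} \cdot V_{\text{welfare}} \;\ll\; 10^{6} \cdot V_{\text{welfare}} = V_{\text{x-risk}}.
\]

Only a crucial consideration that changes the $10^6$ ratio itself, rather than better ratings within the welfare category, could overturn her prioritization.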
See also Charity Entrepreneurship's "supporting reports", which "focus on meta and cross-cutting issues that affect a large number of ideas and would not get covered by our standard reports. Their goal is to support the consideration of different ideas."
*I chose those proxies and numbers fairly randomly.
To be clear: I am not saying that your model, or the sort of work that's sort-of proposed by the model, wouldn't be valuable. I think it would be valuable. I'm just explaining why I think some portions of the work won't be particularly valuable to some portion of EAs. (Just as most of GiveWell's work or FHI's work isn't particularly valuable, at least on the object level, to some EAs.)
Makes sense, thanks