I agree with a lot of this, although I'm not sure I see why standardized cost-benefit analysis would be necessary for legitimate epistemic progress to be made. There are many empirical questions that seem important from a wide range of ethical views, and people with a shared interest in these questions can work together to figure them out while drawing their own normative conclusions. (This seems to line up with what most organizations affiliated with this community actually do; my impression is that lots more research goes into empirical questions than into drawing ethical conclusions.)
And even if having a big community were not ideal for epistemic progress, it could be worth it on other grounds, e.g. community size being helpful for connecting people to employers, funders, and cofounders.
I think I overstated my case somewhat, or used the wrong wording. I don't think standardized CBAs are strictly necessary for epistemic progress. In fact, as long as a CBA is done in terms of outputs per dollar rather than outcomes per dollar (or at least includes the former in the analysis), it shouldn't be much of a problem, because, as you said, people can overlay their own normative concerns.
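To make that distinction concrete, here's a toy sketch (every number here is invented purely for illustration): outputs per dollar is the value-neutral empirical ratio, and outcomes per dollar is just that ratio multiplied by a normative weight that each reader can supply for themselves.

```python
# All figures below are made up for illustration.
cost = 100_000     # dollars spent on a hypothetical program
outputs = 50_000   # e.g. bednets distributed (empirical, value-neutral)

# Outputs per dollar: anyone can use this, regardless of ethical view.
outputs_per_dollar = outputs / cost  # 0.5 nets per dollar

# Outcomes per dollar requires a normative weight. Readers with
# different ethical views plug in different weights here.
my_weight = 0.02  # my (made-up) value per net, in arbitrary units
outcomes_per_dollar = outputs_per_dollar * my_weight
```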
I do think most posts here aren't prefaced with a normative framework. Sometimes that's completely unimportant (in the case of empirical work), and in other cases it matters more (how we approach funding research, how we should act as a community and as individuals within it). I think a big part of the reason this isn't more confusing is that, as the other commenter said, almost everyone here is a utilitarian.
I agree that there are reasons to have the EA umbrella beyond epistemic ones. So again, I used overly strong wording, or was maybe just plainly incorrect.
A lot of what was going on in my head with respect to cost-benefit analyses when I wrote this comment was about grantmaking. For instance, if a grantmaker says it funds projects that will help the long-term future of humanity, I feel like that leaves a lot on the table. Do you care about pain or pleasure? Humans or everyone?
Inevitably they will use some sort of rubric. If they haven't thought through what normative considerations the rubric is based on, the rubric may be incoherent under any specific value system, or, even worse, perfectly aligned with a specific one by accident. I could imagine this creating non-Bayesian value drift: while research CBAs allow us to overlay our own normative frameworks, grants are real-world decisions. I can't overlay my own framework on someone else's decision to give a grant.
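To illustrate what I mean by a rubric quietly encoding a value system, here's a toy sketch (every criterion, score, and weight below is invented):

```python
# Toy grant rubric: all names and numbers are hypothetical.
# An application gets scored on a few criteria...
scores = {"human_welfare": 8, "animal_welfare": 2, "suffering_reduction": 5}

# ...and the rubric combines them with weights. These weights *are*
# a normative framework: setting animal_welfare to 0.1 quietly encodes
# a near-human-only view, whether or not anyone chose that on purpose.
weights = {"human_welfare": 1.0, "animal_welfare": 0.1, "suffering_reduction": 0.5}

total = sum(scores[k] * weights[k] for k in scores)
print(total)  # -> 10.7; the funding decision bakes these weights in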
Also, I do feel a bit bad about my original comment. I meant it to be a jumping-off point for other anti-realists to express confusion about how to talk about their disillusionment, or whether there is even a place for that here, but I got sidetracked ranting, as I often do.