EDITS: I made substantial edits to the last section of this comment about 14 hours after posting.
Violet Hour, here are some thoughts on your interesting approach:
Maxims create tension, the same tension as between context and rules:

- Social movements and ethics-minded communities do have maxims, usually visible in their slogans.
- Contextualization contrasts with universalizability.
- Unique contexts can test the universalizability of maxims.
- Common contexts usually suggest applicable maxims to follow.
- Context matters, but so do rules (maxims); that is a well-known tension.
Community standards can decline, encouraging self-serving rationalization:

- Intersubjective verification can protect against self-serving rationalizations.
- Self-serving rationalizations include invalid contextualization and invalid maxim selection.
- Self-serving rationalization serves self-interest, not others' interests.
- Ethics usually conflicts with self-interest, another well-known tension.
- Intersubjective verification fails when community standards decline.
- Community standards decline when no one cares about, or everyone agrees with, the unethical/immoral behavior.
Positive virtues do not prove their worth by helping define the effectiveness of actions taken to benefit others:

- Positive virtues (e.g., forthrightness, discretion, integrity, loyalty) can conflict.
- Actual consequences, either believed or confirmed, are the final measure of an action's benefit to others.
- Benefit to others is independent of the intentions, expectations, luck, and personal rewards involved.
- Benefit to others is not, per se, a measure of the morality or ethicality of actions.
- Benefit to others must be measured somehow.
- Those measures have little to do with positive virtues.
Given a community intending to act ethically, there's a list of problems that can occur:

- rationalizations (for Kantians, invalid contextualization or invalid maxim selection)
- conflicts with self-interest
- decline of community standards
- conflict of positive virtues
- dissatisfaction with positive virtues' impact on efficacy
In looking at these problems yourself, you pick and choose a path that deals with them. I think you are suggesting that:

- "in the long run," some virtues support better outcomes for a community.
- if those virtues support the unique altruistic interests of the community, they should be adopted community-wide.
- those virtues should be treated as more important than, or independent of, marginal altruistic gains made by individuals.
As far as the FTX issues go, there's a difference between:

- describing events (what happened?)
- interpreting events (what does it mean?)
- evaluating events (how do I feel about it?)
People use hindsight to manifest virtues, but protecting virtues requires foresight:

- Evaluating events is where a lot of virtues manifest.
- Evaluating events happens in hindsight.
- Prioritizing a virtue requires foresight and the proactive development of expectations.
- Virtues like honesty and integrity require EAs to create models of context.
- EAs may differ in how they model the contexts (and relevant behaviors) of billionaires.
- Maxims for deciding whether EA virtues are manifest in selecting a donor therefore have conflicting contextualizations within the community.
In the case of FTX, I believe that indifference to the source of earnings predisposed the community to ignore the behavior of FTX in acquiring those earnings. Not because that's fair or moral or consistent, but because:

- the crypto industry is notoriously unethical, poorly regulated, and understood to be risky.
- rational, well-informed folks interested in acquiring charitable contributions have reason to ignore their source.
- big finance in general is well tolerated by the community as a source of funds.
In other words, community standards with regard to donors and their fundraising had already declined. Therefore, nothing was considered wrong with FTX providing funds. I don't necessarily object to that decline, if there was in fact a decline in the first place. I'll note that Silicon Valley ethics treat risky businesses and crypto as net positives, dismissing their corruption and harm as negative externalities not even worth the costs of regulation. Yet crypto is the most obviously corrupt "big thing" in big finance right now.
All this reveals a tension between:

- calculations of expected value: narrow-context calculations, with values taken from measures of the benefit to others of EA activity
- community virtue: wider-context rules guiding decisions about avoiding the negative consequences of donor business activities.
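To make that tension concrete, here is a minimal sketch, with every donor name and number invented for illustration: a narrow-context expected-value calculation can favor exactly the donor that a wider-context community rule would reject.

```python
from dataclasses import dataclass

@dataclass
class Donation:
    donor: str
    amount: float              # dollars offered
    p_benefit: float           # credence that the funded work benefits others
    benefit_per_dollar: float  # estimated benefit to others per dollar
    passes_screen: bool        # wider-context rule: does the donor's
                               # business conduct meet community standards?

def expected_value(d: Donation) -> float:
    """Narrow-context calculation: credence times measured benefit."""
    return d.p_benefit * d.amount * d.benefit_per_dollar

def acceptable(d: Donation) -> bool:
    """Wider-context rule: reject donors whose business conduct fails
    community standards, regardless of expected value."""
    return d.passes_screen

# Hypothetical donors; the figures are placeholders, not estimates.
risky = Donation("risky-exchange", 1e8, 0.9, 2.0, passes_screen=False)
modest = Donation("index-fund-earner", 1e6, 0.9, 2.0, passes_screen=True)

print(expected_value(risky) > expected_value(modest))  # True: EV favors the risky donor
print(acceptable(risky))                               # False: the rule rejects them anyway
```

The point of the sketch is only that the two decision procedures can disagree; which one the community defers to is exactly the choice under discussion.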
In another post (being edited right now), I proposed a four-factor model for calculating consequences, in terms of harm and help to others and harm and help to oneself, useful mainly for thought experiments. One point relevant to this discussion was that an action can cause both harm and help to others, although, actually, the whole model seems relevant from where I sit.

How EAs decide to maximize consequences (causing help but no harm, causing known help and unknown harm, causing known harm and unknown help, causing slightly more help than harm, etc.) is a community choice.
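Since that four-factor model is only described in prose, here is one hypothetical way it might be sketched for thought experiments; the factor names and the two example policies are my own illustration, not a fixed specification of the model.

```python
from dataclasses import dataclass

@dataclass
class Consequences:
    """Four-factor accounting of an action's consequences."""
    help_others: float
    harm_others: float
    help_self: float
    harm_self: float

# An action can score on several factors at once: e.g., earning-to-give
# through a harmful business both helps and harms others.
mixed = Consequences(help_others=5.0, harm_others=3.0,
                     help_self=2.0, harm_self=0.0)

# Two of the possible community choices about "maximizing consequences":
def no_harm_allowed(c: Consequences) -> bool:
    """Accept only actions causing help but no harm to others."""
    return c.help_others > 0 and c.harm_others == 0

def net_positive(c: Consequences) -> bool:
    """Accept actions causing (even slightly) more help than harm to others."""
    return c.help_others > c.harm_others

print(no_harm_allowed(mixed))  # False
print(net_positive(mixed))     # True
```

The same mixed action passes one policy and fails the other, which is why choosing among such policies is a community decision rather than a calculation.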
The breakdown of community standards is a subtle problem; it's sometimes a problem of interpretation, so I'm not sure what direction I can give about this myself. I would like to see:

- what maxims from a practical Kantian model you think really apply here, with their context developed in more detail
- how you propose to model contexts, particularly given your faith in Bayesian probabilities for credences and what I anticipate will be your reliance on expected-value calculations.
I really don’t think any model of context and consequences dependent on Bayesian probabilities will fit with virtue ethics well at all. You’re welcome to prove me wrong.
Ultimately, if a community decides to be self-serving and cynical in its claims of ethical rigor (i.e., to lie), there's no approach to ethics that will save the community from its ethical failure. On the other hand, a community of individuals interested in virtue or altruism will struggle with all the problems I listed above (rationalizations, decline of community standards, virtues in conflict, etc.).