effectivealtruism.org suggests that EA values include:
proper prioritization: appreciating scale of impact, and trying for larger scale impact (for example, helping more people)
impartial altruism: giving everyone’s interests equal weight
open truth-seeking: including willingness to make radical changes based on new evidence
collaborative spirit: involving honesty, integrity, and compassion, and paying attention to means, not just ends.
Cargill Corporation lists its values as:
Do the Right Thing
Put People First
Reach Higher
Lockheed-Martin Corporation lists its values as:
Do What’s Right
Respect Others
Perform with Excellence
Shell Global Corporation lists its values as:
Integrity
Honesty
Respect
Short lists seem to be a trend, but longer lists with a different label than “values” appear from other corporations (for example, from Google or General Motors). They all share the quality of being aspirational, but the longer lists differ: they seem better suited to the specifics of what each corporation actually does.
Consider Google’s values:
Focus on the user and all else will follow.
It’s best to do one thing really, really well.
Fast is better than slow.
Democracy on the web works.
You don’t need to be at your desk to need an answer.
You can make money without doing evil.
There’s always more information out there.
The need for information crosses all borders.
You can be serious without a suit.
Great just isn’t good enough.
Google’s values are specific. They do more than build the brand.
I would like to suggest that EA values can be lengthy, but they should be specific enough to:
identify your unique attributes.
focus your behavior.
reveal your preferred limitations[1].
Explicit values of that sort:
limit your appeal.
support your integrity.
encourage your honesty.
Values like these focus and narrow your work in addition to building your brand. Shell Global, Lockheed-Martin, and Cargill are just building their brands; the Google Philosophy says more and speaks to Google’s core business model.
All the values listed as part of Effective Altruism appear to overlap with the concerns that you raise. Obviously, you get into specifics.
You offer specific reforms in some areas. For example:
“A certain proportion of EA funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals*”
“More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent ‘independent researchers’.”
Neither of these seems obviously appropriate to me. I would want to find out what a longlisting process is, and why employees are a better approach than grant-dependent researchers; a little explanation would be helpful.
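For what it’s worth, here is how I currently read the “longlist, then lottery” idea, as a minimal sketch. The proposal names, the pass/fail screen, the budget, and the grant size are all my own assumptions for illustration, not anything taken from your post.

```python
import random

# Toy model of "longlist, then lottery" fund allocation.
# Every name, flag, and number below is invented for illustration.
proposals = [
    {"name": "Proposal A", "passes_screen": True},   # clears the bad-faith/quality screen
    {"name": "Proposal B", "passes_screen": True},
    {"name": "Proposal C", "passes_screen": False},  # filtered out at the longlisting stage
    {"name": "Proposal D", "passes_screen": True},
]

lottery_budget = 300_000  # the "certain proportion of EA funds" set aside for the lottery
grant_size = 100_000      # assumed fixed grant size per winner

# Stage 1: longlisting -- keep everything that clears the minimal screen.
longlist = [p for p in proposals if p["passes_screen"]]

# Stage 2: lottery -- fund winners at random from the longlist, up to the budget.
num_grants = min(len(longlist), lottery_budget // grant_size)
winners = random.sample(longlist, k=num_grants)

for winner in winners:
    print(f"Fund {winner['name']} with ${grant_size:,}")
```

If that is roughly the intended mechanism, the screen carries very little evaluative weight and the lottery carries the rest, which is why I’d want the longlisting criteria spelled out.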
However, other reforms do read more like statements of value or truisms to me. For example:
“Work should be judged on its quality...” [rather than its source].
“EAs should be wary of the potential for highly quantitative forms of reasoning to (comparatively easily) justify anything”
It’s a truism that statistics can justify anything, as in the saying popularized by Mark Twain: “There are three kinds of lies: lies, damned lies, and statistics.”
These reforms might inspire values like:
Judge work on its quality alone, not its source.
Use quantitative reasoning only when appropriate.
*You folks put a lot of work into writing this up for EAs. You’re smart and well-informed, and I think you’re right where you make specific claims or assert specific values. All I am thinking about here is how to clarify the idea of aligning with values, the values you have, and how to pursue them.*
You wrote that you started with a list of core principles before writing up your original long post. I would like to see that list, if it’s not too late and you still have it. If you don’t want to offer the list now, maybe later, as a refinement of what you offered here?
Something like the Google Philosophy, short and to the point, would make it clear that you’re being more than reactive to problems, and that you actually have either:
differences in values from orthodox EAs, or
differences in how well you think orthodox EAs are achieving EA values
Here are a few prompts to help define your version of EA values:
EAs emphasize quantitative approaches to charity as part of maximizing their impact cost-effectively. Quantitative approaches have pros and cons, so how should they be contextualized? They don’t work in all cases, but that’s not a bad thing. Maybe EA should only pay attention to contexts where quantitative approaches do work well. Maybe that limits EA’s flexibility and scope of operations, but it also preserves EA’s integrity, accords with EA beliefs, and focuses EA efforts. You have specific suggestions about IBT and what makes a claim of probabilistic knowledge feasible. Those can be incorporated into a value statement. (A toy example of the kind of quantitative comparison I have in mind appears after these prompts.) Will you help EA focus and limit its scope, or are you aiming to improve EA’s flexibility because flexibility is necessary in every context where EA operates?
EAs emphasize existential risk causes. The ConcernedEAs offer specific suggestions to improve EA research into existential risk. How would you shape EA values about research in general to reflect what you understand should be the EA approach to existential risk research? You also raise concerns about the evaluation of cascading and systemic risks. How would those specific concerns inform your values?
You have specific concerns about funding arrangements, nepotism, and revolving doors between organizations. How would those concerns inform your values about research quality or charity impact?
You have concerns about lack of diversity and its impact on group epistemics. What should the values be there?
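To make the quantitative-approaches prompt above concrete, here is the kind of comparison I have in mind, as a toy sketch. The intervention names and outcome figures are entirely invented assumptions of mine, not anything from your post or from any real evaluation.

```python
# Toy cost-effectiveness comparison with invented numbers.
# The calculation works well where outcomes are measurable and comparable,
# and breaks down where they are not, which is the scope question above.
interventions = {
    "bednets":        {"cost": 100_000, "outcome_units": 2_000},  # e.g. cases averted
    "cash_transfers": {"cost": 100_000, "outcome_units": 1_200},
    "advocacy":       {"cost": 100_000, "outcome_units": None},   # outcome hard to quantify
}

for name, data in interventions.items():
    if data["outcome_units"] is None:
        print(f"{name}: no defensible quantitative estimate; outside this tool's scope")
    else:
        cost_per_unit = data["cost"] / data["outcome_units"]
        print(f"{name}: ${cost_per_unit:.2f} per unit of outcome")
```

The third row is the interesting one: whether EA treats such cases as out of scope, or reaches for other tools, is exactly the focus-versus-flexibility question.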
You can see the difference between brand-building:
ethicality
impactfulness
truth-seeking
and getting specific:
research quality
existential, cascading, and systemic risks
scalable and impactful charity
quantitative and qualitative reasoning
multi-dimensional diversity
epistemic capability
democratized decision-making
That second list is more specific, plausibly hits the wrong notes for some people, and definitely demonstrates particular preferences and beliefs. As it should! Whatever your list looks like, would alignment with its values imply the ideal EA community for you? That’s something you could take another look at: articulating the values behind specific reforms if those are not yet stated, or incorporating specific reforms into the details of a value, like:
democratized decision-making: incorporating decision-making at multiple levels within the EA community, through employee polling, yearly community meetings, and engaging charity recipients.
I don’t know whether you like the specific value descriptors I chose there; perhaps I misinterpreted your values somewhat. You can make your own list. Making decisions in alignment with values is the point of having values. If you don’t like the decisions or the values, or if the decisions don’t reflect the values, the right course is to suggest alterations somewhere, but in the end, you still have a list of values, principles, or a philosophy that you want EA to follow.
[1] As I wrote in a few places in this post, and taking a cue from Google and the Linux philosophy, sometimes doing one thing and doing it well is preferable to offering loads of flexibility. If EA is supposed to be the Swiss Army knife of making change in the world, there are still plenty of organizations better suited to some purposes than EA is; as any user of a Swiss Army knife will attest, it is not ideal for every task. Also, your beliefs will inform you about what you do well. Does charity without quantitative metrics inevitably result in waste and corruption? Does the use of quantitative metrics limit the applicability of EA efforts to specific types of charity work (for example, outreach campaigns)? Do EA’s quantitative tools limit the value of its work on existential risk? Can they be expanded with better quantitative tools (or qualitative ones)? Maybe EA is self-limiting because of its preferred worldview, beliefs, and tools. Therefore it has preferred limitations. Which is OK, even good.