I think that people (especially smart people) are often pretty good at getting a vibe that someone is trying to steer them into something (even from relatively little data). …we want to shape the social incentive landscape so that people aren’t rewarded for trying to manipulate us.
I studied lobbying in Washington, DC, learning from US trade diplomats, and we learned that this $5 billion industry benefits decisionmakers by sharing research that is biased in various ways but from which they can make decisions that are unbiased with respect to their own values.[1] So 'smartness,' if it is interpreted as direct decisionmaking privilege, can be positively correlated with accepting what could be perceived as manipulation.
Also, people who are 'smart' in their ability to process, connect, or repeat a lot of information in order to give the 'right' answers,[2] but who do not think critically about the structures they thereby advance, may be relatively 'immune' to perceiving manipulation negatively, because of the norms of those structures. These people can even be more comfortable when they perceive 'steering' or manipulation, because they may be averse to 'submitting' to a relatively unaggressive entity. So, in this case, manipulation[3] can be positively correlated with (community builders') individual consideration in a relationship.
Optimizing for a 'specific' objective should be avoided only among people who are 'smart' emotionally and in their reasoning and who would not[4] engage in a dialogue.[5] These people would perceive manipulation negatively[6] and would not support community builders in developing (yet) better ways of engaging people with various viewpoints on doing good effectively.[7]
Still, many people in EA may not mind some manipulation,[8] because they are intrinsically motivated to do good effectively and there are few alternatives for doing so. This is not to say that 'specific' optimization should not be avoided where possible, but that developing this skill can be deprioritized relative to advancing community-building projects that attract intrinsically motivated individuals or that make changes[9] where the changemakers perceive some 'unfriendliness.'
I would like to ask whether you think that some EA materials which community builders would use, and which optimize for agreement with a specific thesis, should be edited, further explained, or discouraged.[10]
[1] See Allard (2008) for further discussion of the informational value of privately funded lobbying.
[2] Including factually right answers or those they assess as best for advancing their social or professional status.
[3] Ideally while its use is acknowledged and, possibly, the discussant is implicitly included in its critique.
[4] Or the discussion would be set up in a way that prevents dialogue.
[5] Of course, regardless of their decisionmaking influence.
[6] Also due to their limited ability to contribute.
[7] Or anything else relevant to EA or the friendship.
[8] For example, a fellow introductory EA fellowship participant pointed out that the comparison between the effectiveness of treating Kaposi sarcoma and of providing information to high-risk groups to prevent HIV/AIDS makes sense because a skin mark is much less serious than HIV/AIDS, but this did not discourage anyone from engagement.
[9] Such as introducing vegan lunches in a canteen because community builders optimize for the canteen managers agreeing that this should be done.
[10] For example, see my recent comment on the use of stylistic devices to attract attention and limit critical thinking.