I’m a researcher in psychology and philosophy.
Stefan_Schubert
How much of this is lost by compressing to something like: virtue ethics is an effective consequentialist heuristic?
It doesn’t just say that virtue ethics is an effective consequentialist heuristic (if it says that) but also has a specific theory about the importance of altruism (a virtue) and how to cultivate it.
There hasn’t been much systematic discussion of which specific virtues consequentialists or effective altruists should cultivate. I’d like to see more of it.
@Lucius Caviola and I have written a paper where we put forward a specific theory of which virtues utilitarians should cultivate. (I gave a talk along similar lines here.) We discuss altruism but also five other virtues.
Another factor is that recruitment to the EA community may be more difficult if it’s perceived as very demanding.
I’m also not convinced by the costly-signalling arguments discussed in the post. (This is from a series of posts on this topic.)
I think this discussion is a bit too abstract. It would be helpful to have concrete examples of non-academic EA research that you think should have been published in academic outlets. It would also help if you gave some details of what changes they would need to make to get their research past peer reviewers.
I’m saying that there are many cases where well-placed people do step up/have stepped up.
Assume by default that if something is missing in EA, nobody else is going to step up.
In many cases, it actually seems reasonable to believe that others will step up; e.g. because they are well-placed to do so, or because it falls within a domain in which they have unique competence.
One aspect is that we might expect people who believe unusually strongly in an idea to be more likely to publish on it (winner’s curse/unilateralist’s curse).
Yeah, the latter is good.
He does, but at the same time I think it matters that he uses that shorthand rather than some other expression (say CNGS), since it makes the EA connection more salient.
Yes, I think the title should be changed.
Some evidence that people tend to underuse social information, suggesting they’re not epistemically modest by default:
Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature use social information quite inefficiently: they do not take it sufficiently into account. A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste, and four studies showing social information being over-used. These studies cover ‘egocentric discounting’ phenomena as studied by social psychology, but also include experimental social learning studies. Social information waste means that human adults fail to give social information its optimal weight. Both proximal explanations and accounts derived from evolutionary theory leave crucial aspects of the phenomenon unaccounted for: egocentric discounting is a pervasive effect that no single unifying explanation fully captures. Cultural evolutionary theory’s insistence on the power and benefits of social influence is to be balanced against this phenomenon.

There is a discussion on “the producer-scrounger dilemma for information use” of potential interest:
Social information is only useful when others also gather information asocially. Cultural evolutionary models contain a possible explanation of egocentric discounting. Rogers’ influential model [81] showed that social learning may not provide any advantage over individual learning when the environment changes. The advantage of using social learning depends on the frequency of social learners in the population: if those are too numerous, social learning is useless. When there are mostly individual learners, copying is effective, because it saves the costs of individual exploration, and because the probability of copying a correct behaviour is high. However, when there are mostly social learners, the risk of copying an outdated behaviour increases and individual learners are advantaged. This means the advantages of social learning are inversely frequency-dependent: the more other people learn socially, the less efficient it is to learn from them. The same logic is reflected, on a smaller scale, in models of information cascades, where social learning can (with a small probability) become detrimental for an individual when too many other individuals resort to it. More generally, a broad range of models converge upon the view that social information use can be likened, in terms of evolutionary game theory, to a producer–scrounger dynamic [37,77,82]. At equilibrium, these games typically yield a mixed population of producers (individual learners) and scroungers (social learners), where neither type does better than the other [83,84]. Egocentric discounting might emerge from a producer–scrounger dilemma, as a response to the devaluation of social information which may occur when too many other agents rely on social learning.
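To make the inverse frequency dependence concrete, here is a minimal sketch of a Rogers-style model (my own simplified notation; the symbols $p$, $u$, $b$, $c$ and $w_0$ are not from the quoted paper). Let $p$ be the frequency of social learners, $u$ the probability that the environment changes each generation, $b$ the benefit of holding the currently correct behaviour, $c$ the cost of individual learning, and $w_0$ baseline fitness. Individual learners always acquire the correct behaviour; a social learner copies a random member of the previous generation, so the probability $q$ that a social learner holds the correct behaviour satisfies

$$w_I = w_0 + b - c, \qquad q = (1-u)\big[(1-p) + p\,q\big] \;\Rightarrow\; q = \frac{(1-u)(1-p)}{1-(1-u)\,p}, \qquad w_S = w_0 + b\,q.$$

Since $q$ falls as $p$ rises, copying pays less the more common it is. At the mixed equilibrium where $w_S = w_I$, i.e. $q^* = 1 - c/b$, neither type does better than the other, which is the producer–scrounger equilibrium the passage describes.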
Note that this seems to assume that people don’t use the “credence by my lights” vs. “credence all things considered” distinction discussed in the comments.
The post seems to confuse the postdoctoral fellowship and the PhD fellowship (assuming the text on the grant interface is correct). It’s the postdoc fellowship that has an $80,000 stipend, whereas the PhD fellowship stipend is $40,000.
I think “Changes in funding in the AI safety field” was published by the Centre for Effective Altruism.
The transcript can be found at this link as well.
You may want to have a look at the list of topics. Some of the terms above are listed there; e.g. Bayesian epistemology, counterfactual reasoning, and the unilateralist’s curse.
Nice comment; you make several good points. Fwiw, I don’t think our paper is in conflict with anything you say here.
On this theme: @Lucius Caviola and I have written a paper on virtues for real-world utilitarians. See also Lucius’s talk Against naive effective altruism.
I gave an argument for why I don’t think the cry-wolf effects in World A would be as large as one might think. Afaict, your comment doesn’t engage with that argument.
I’m not sure what you’re trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we’ve solved the problem. Whether some people will then be accused of having cried wolf seems far less important relative to that.
I also guess cry-wolf effects won’t be as large as one might think; e.g. I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.
If that’s so, one might wonder why that happens.
In these cases, it seems that there are three questions; for example:
1) Is consequentialism correct?
2) Does consequentialism entail Machiavellianism?
3) Ought we to be Machiavellian?
You claim that people get the answers to the first two questions wrong, but the answer to the third question right, since the two mistakes cancel each other out. In effect, two incorrect premises lead to a correct conclusion.
It’s possible that in the cases you discuss, people tend to have the firmest intuitions about question 3) (“the conclusion”). E.g. they are more convinced that we ought not to be Machiavellian than that consequentialism is correct/incorrect or that consequentialism entails/does not entail Machiavellianism.
If that’s the case, then it would be unsurprising that mistakes cancel each other out. E.g. someone who came to believe that consequentialism entails Machiavellianism would be inclined to reject consequentialism, since they would otherwise need to accept that we ought to be Machiavellian (which, by hypothesis, they don’t).
(Effectively, I’m saying that people reason holistically, reflective equilibrium-style; and not just from premises to conclusions.)
A corollary of this is that it’s maybe not as common as one might think for “a little knowledge” to be dangerous. Suppose that someone initially believes that consequentialism is wrong (Question 1), that consequentialism entails Machiavellianism (Question 2), and that we ought not to be Machiavellian (Question 3). They then change their view on Question 1, adopting consequentialism. That creates an inconsistency between their three beliefs. But if they have firmer beliefs about Question 3 (the conclusion) than about Question 2 (the other premise), they’ll resolve this inconsistency by rejecting the remaining incorrect premise (Question 2), not by endorsing the dangerous conclusion that we ought to be Machiavellian.
My argument is of course schematic, and how plausible it is will no doubt vary depending on which of the six cases you discuss we consider. I do think that “a little knowledge” is sometimes dangerous in the way you suggest. Nevertheless, I think the mechanism I discuss is worth remembering.
In general, I think a little knowledge is usually beneficial, meaning our prior that it’s harmful in an individual case should be reasonably low. However, priors can of course be overturned by evidence in specific cases.