I’m a researcher in psychology and philosophy.
Stefan_Schubert
Assume by default that if something is missing in EA, nobody else is going to step up.
In many cases, it actually seems reasonable to believe that others will step up; e.g. because they are well-placed to do so, or because it falls within a domain where they have unique competence.
One aspect is that we might expect people who believe unusually strongly in an idea to be more likely to publish on it (winner’s curse/unilateralist’s curse).
Yeah the latter is good.
He does, but at the same time I think it matters that he uses that shorthand rather than some other expression (say CNGS), since it makes the EA connection more salient.
Yes, I think the title should be changed.
Some evidence that people tend to underuse social information, suggesting they’re not by default epistemically modest:
Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature use social information quite inefficiently: they do not take it sufficiently into account. A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste, and four studies showing social information being over-used. These studies cover ‘egocentric discounting’ phenomena as studied by social psychology, but also include experimental social learning studies. Social information waste means that human adults fail to give social information its optimal weight. Both proximal explanations and accounts derived from evolutionary theory leave crucial aspects of the phenomenon unaccounted for: egocentric discounting is a pervasive effect that no single unifying explanation fully captures. Cultural evolutionary theory’s insistence on the power and benefits of social influence is to be balanced against this phenomenon.

There is a discussion of “the producer–scrounger dilemma for information use” that may be of interest:
Social information is only useful when others also gather information asocially. Cultural evolutionary models contain a possible explanation of egocentric discounting. Rogers’ influential model [81] showed that social learning may not provide any advantage over individual learning when the environment changes. The advantage of using social learning depends on the frequency of social learners in the population: if those are too numerous, social learning is useless. When there are mostly individual learners, copying is effective, because it saves the costs of individual exploration, and because the probability of copying a correct behaviour is high. However, when there are mostly social learners, the risk of copying an outdated behaviour increases and individual learners are advantaged. This means the advantages of social learning are inversely frequency-dependent: the more other people learn socially, the less efficient it is to learn from them. The same logic is reflected, on a smaller scale, in models of information cascades, where social learning can (with a small probability) become detrimental for an individual when too many other individuals resort to it. More generally, a broad range of models converge upon the view that social information use can be likened, in terms of evolutionary game theory, to a producer–scrounger dynamic [37,77,82]. At equilibrium, these games typically yield a mixed population of producers (individual learners) and scroungers (social learners), where neither type does better than the other [83,84]. Egocentric discounting might emerge from a producer–scrounger dilemma, as a response to the devaluation of social information which may occur when too many other agents rely on social learning.
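The frequency-dependence argument in the excerpt can be illustrated with a toy simulation (my own sketch, not from the paper; the setup and parameter names, e.g. u for the rate of environmental change, are assumptions made for illustration):

```python
import random

def rogers_sim(p_social, n=1000, u=0.1, generations=200, seed=0):
    """Crude illustration of Rogers-style frequency dependence.

    Each generation the environment may shift (probability u).
    Individual learners always track the current state (at an implicit
    cost not modelled here); social learners copy the behaviour of a
    random member of the previous generation. Returns the fraction of
    social learners holding the correct behaviour, averaged over the run.
    """
    rng = random.Random(seed)
    n_social = int(n * p_social)
    env = 0
    behaviour = [env] * n  # behaviour[i] = env state agent i's behaviour matches
    correct_social = []
    for _ in range(generations):
        if rng.random() < u:  # environment shifts
            env += 1
        prev = behaviour[:]
        new = []
        for i in range(n):
            if i < n_social:        # social learner: copy someone from last gen
                new.append(rng.choice(prev))
            else:                   # individual learner: learn the env directly
                new.append(env)
        behaviour = new
        if n_social:
            correct_social.append(
                sum(1 for b in behaviour[:n_social] if b == env) / n_social
            )
    return sum(correct_social) / len(correct_social) if correct_social else 0.0
```

Running this with a low vs. high share of social learners (e.g. `rogers_sim(0.1)` vs. `rogers_sim(0.9)`) shows the inverse frequency dependence described above: when social learners are rare they mostly copy up-to-date individual learners and do well; when they are common, outdated behaviour propagates and their accuracy drops.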
Note that this seems to assume that people don’t use the “credence by my lights” vs. “credence all things considered”-distinction discussed in the comments.
The post seems to confuse the postdoctoral fellowship and the PhD fellowship (assuming the text on the grant interface is correct). It’s the postdoc fellowship that has an $80,000 stipend, whereas the PhD fellowship stipend is $40,000.
I think “Changes in funding in the AI safety field” was published by the Centre for Effective Altruism.
The transcript can be found at this link as well.
Linkpost: Dwarkesh Patel interviewing Carl Shulman
You may want to have a look at the list of topics. Some of the terms above are listed there; e.g. Bayesian epistemology, counterfactual reasoning, and the unilateralist’s curse.
Nice comment, you make several good points. Fwiw, I don’t think our paper is in conflict with anything you say here.
On this theme: @Lucius Caviola and I have written a paper on virtues for real-world utilitarians. See also Lucius’s talk Against naive effective altruism.
I gave an argument for why I don’t think the cry-wolf effects in World A would be as large as one might think. Afaict your comment doesn’t engage with that argument.
I’m not sure what you’re trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we’ve solved the problem. Whether some people will then be accused of having cried wolf seems far less important by comparison.
I also guess cry-wolf effects won’t be as large as one might think; e.g. I think people will look more at how strong AI systems appear at a given point than at whether people have previously warned about AI risk.
Thanks, very interesting.
Regarding the political views, there are two graphs showing different numbers. Does the first include people who didn’t respond to the political views question, whereas the second excludes them? If so, it might be good to clarify that. You might also clarify that the numbers in the first graph don’t sum to 100%. Alternatively, you could just present the data that excludes non-responses, since that’s in my view the more interesting data.
Yes, I think that, e.g., his being interviewed by 80K didn’t make much of a difference. I think that EA’s reputation would inevitably be tied to his to some extent, given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.
Yes. The Life You Can Save and Doing Good Better are pretty old. I think it’s natural to write new content to clarify what EA is about.
I’m saying that there are many cases where well-placed people do step up/have stepped up.