I’m a researcher in psychology and philosophy.
Stefan_Schubert
Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.
Fwiw, anecdotally my impression is that a more common problem is that people engage in motivated reasoning to justify projects that aren’t very good, and that they just haven’t thought through their projects very carefully. In my experience, that’s more common than outright, deliberate fraud—but the latter may get more attention since it’s more emotionally salient (see my other comment). But this is just my impression, and it’s possible that it’s outdated. And I do of course think that EA should be on its guard against fraud.
Thanks, I think this post is thoughtfully written. I think that arguments for lower salaries are sometimes quite moralising or moral purity-based, as opposed to focused on impact. By contrast, you give clear and detached impact-based arguments.
I don’t quite agree with the analysis, however.
You seem to equate “value-alignment” with “willingness to work for a lower salary”. And you argue that it’s important to have value-aligned staff, since they will make better decisions in a range of situations:
A researcher will often decide which research questions to prioritise and tackle. A value-aligned one might seek to tackle questions around which interventions are the most impactful, whereas a less value-aligned researcher might choose to prioritise questions which are the most intellectually stimulating.
An operations manager might make decisions regarding hiring within organisations. Therefore, a less value-aligned operations manager might attract similarly less value-aligned candidates, leading to a gradual worsening in altruistic alignment over time. It’s a common bias to hire people who are like you which could lead to serious consequences over time e.g. a gradual erosion of altruistic motivations to the point where less-value aligned folks could become the majority within an organisation.
I rather think that value-alignment and willingness to work for a lower salary come apart. I think there are non-trivial numbers of highly committed effective altruists—who would make very careful decisions regarding what research questions to prioritise and tackle, and who would be very careful about hiring decisions—who would not be willing to work for a lower salary. Conversely, I think there are many people—e.g. people from the larger non-profit or do-gooding world—who would be willing to work for a lower salary, but who wouldn’t be very committed to effective altruist principles. So I don’t think we have any particular reason to expect that lower salaries would be the most effective way of ensuring that decisions about, e.g., research prioritisation or hiring are value-aligned. That is particularly so since, as you note in the introduction, lower salaries have other downsides.
As far as I understand, you are effectively saying that effective altruists should pay lower salaries because lower salaries are a costly signal of general value-alignment: value-aligned people would accept a lower salary, whereas people who are not value-aligned would not. This argument has been made multiple times lately in the context of EA salaries and EA demandingness, but I’m not convinced by it. For instance, in research on the general population led by Lucius Caviola, we found a relatively weak correlation between what we call “expansive altruism” (willingness to give resources to others, including distant others) and “effectiveness-focus” (willingness to choose the most effective ways of helping others). Expansive altruism isn’t precisely the same thing as willingness to work for a lower salary, and things may look a bit different among potential applicants to effective altruist jobs, but it nevertheless suggests that willingness to work for a lower salary need not be as useful a costly signal as it may seem.
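To illustrate that point with a minimal, purely hypothetical sketch (the correlation value and all other numbers below are my assumptions for illustration, not figures from our study): if the trait you select on correlates only weakly with the trait you actually care about, the selection buys you surprisingly little.

```python
# Purely illustrative simulation (hypothetical numbers, not study data):
# selecting on a weakly correlated proxy trait yields only a modest
# gain in the trait you actually care about.
import numpy as np

rng = np.random.default_rng(0)
r = 0.3  # assumed weak correlation, for illustration only
traits = rng.multivariate_normal(
    mean=[0, 0], cov=[[1, r], [r, 1]], size=200_000)
salary_willingness, effectiveness_focus = traits[:, 0], traits[:, 1]

# Hire only the 20% most willing to accept a lower salary ...
hired = salary_willingness > np.quantile(salary_willingness, 0.8)
# ... and see how much effectiveness-focus that selection buys
print(effectiveness_focus[hired].mean())  # ≈ 0.42 SD above average
```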
More generally, I think what underlies these ideas of using lower salaries as a costly signal of value-alignment is the tacit assumption that value-alignment is a relatively cohesive, unidimensional trait. But I think that assumption isn’t quite right—as stated, our factor analyses rather suggested there are two core psychological traits defining positive inclinations to effective altruism (expansive altruism and effectiveness-focus), which aren’t that strongly related. (And I wouldn’t be surprised if we found further sub-facets if we did more extensive research on this.)
For these reasons, I think it’s better for EA recruiters to try to gauge inclinations towards cause-neutrality, willingness to overcome motivated reasoning, and other important effective altruist traits directly, rather than to try to infer them from applicants’ willingness to accept a lower salary, since such inferences will typically not be very accurate.
[Slightly edited]
This post is great, thanks for writing it.
I’m not quite sure about the idea that we should have certain demanding norms because they are costly signals of altruism. It seems to me that the main reason to have demanding norms isn’t that they are costly signals, but rather that they are directly impactful. For instance, I think that the norm that we should admit that we’re wrong is a good one, but primarily because it’s directly impactful. If we don’t admit that we’re wrong, then there’s a risk we continue pursuing failed projects even as we get strong evidence that they have failed. So having a norm that counteracts our natural tendency not to want to admit when we’re wrong seems good.
Relatedly, and in line with your reasoning, I think that effective altruism should be more demanding in terms of epistemics than in terms of material resources. Again, that’s not because that’s a better costly signal, but rather because better epistemics likely makes a greater impact difference than extreme material sacrifices do. I developed these ideas here; see also our paper on real-world virtues for utilitarians.
I disagree with that. Downvotes are often valuable information, and requiring people to explain all downvotes would introduce too high a bar for downvoting.
I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form.
That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn’t be posted.
I find some of the comments here a bit implausible and unrealistic.
What people write online will often affect their reputation, positively or negatively. It may not necessarily mean that they, e.g., have no chance of getting an EA job, but there are many other reputational consequences.
I also don’t think that updating one’s views of someone based on what they write on the EA Forum is necessarily always wrong (even though there are no doubt many updates that are unfair or unwarranted).
One issue is that decentralised grant-making could increase the risk that projects that are net negative get funding, as per the logic of the unilateralist’s curse. The risk of that probably varies with cause area and type of project.
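To make that logic concrete, here is a minimal simulation sketch (all parameter values are hypothetical): each additional grantmaker who can independently greenlight a net-negative project increases the chance that at least one of them, through estimation error, funds it.

```python
# Minimal sketch of the unilateralist’s curse (hypothetical numbers).
# A project’s true value is negative; each grantmaker sees a noisy
# estimate and independently funds the project if the estimate looks
# positive. One mistaken “yes” is enough for the project to go ahead.
import random

def prob_funded(n_grantmakers, true_value=-1.0, noise=1.0, trials=100_000):
    funded = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise)
                     for _ in range(n_grantmakers))
        if any(e > 0 for e in estimates):  # one yes suffices
            funded += 1
    return funded / trials

for n in (1, 5, 20):
    print(n, round(prob_funded(n), 3))
# The chance the net-negative project gets funded rises with the
# number of independent funders: roughly 0.16, 0.58 and 0.97 here.
```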
My hunch is that many people have a bit of an intuitive bias against centralised funding; e.g. because it conjures up images of centralised bureaucracies (cf. the reference to the USSR) or appears elitist. I think that in the end it’s a tricky empirical question and that the hypothesis that relatively centralised funding is indeed best shouldn’t be discarded prematurely.
I should also say that how centralised or coordinated grant-makers are isn’t just a function of how many grant-makers there are, but also of how much they communicate with each other. There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.
The fact that some organizations cannot get funded does not seem like strong evidence that EA as a whole is funding-constrained. Given that other organizations can raise large funds, an alternative explanation is that donors think that the expected impact of the organizations that cannot get funding is low. I also don’t think it follows from your argument that earning to give is a great idea.
There might be a risk that some view the (very) long-run future as a “luxury problem”, and that focusing on that, rather than short-term problems in your own country, reveals your privilege. (That attitude may be particularly common concerning causes like AI risk.) My guess is that people are less likely to have such an attitude towards someone who is focusing on global poverty.
Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people’s judgements. Sometimes I think I’ve observed the following dynamic.
1. It’s argued, more or less explicitly, that EAs should switch career into one of a small number of causes.
2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.
3. The preceding point means that there is a painful tension between the desire to do the most good, and one’s personal career prospects. There is a strong desire to resolve that tension.
4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn’t need to engage in tough trade-offs to do the most good. One can stay on doing roughly what one currently does.
5. The EAs who believe in point 1 (that EAs should switch career to other causes) are often unwilling to criticise the reasoning described in point 4. That’s because these issues are rather emotional and personal, and because some may think it’s insensitive to criticise people’s personal career choices.
I think similar dynamics play out with regards to cause prioritisation more generally, decisions whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people are often quite emotional about their choice, and therefore reluctant to give up on it even in the face of better evidence, and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.
One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (e.g. obviously without snark); and to do so in some detail. That’s precisely what this post does.
EA’s CEO says Sam Bankman-Fried was never an effective altruist
I don’t think the piece says that.
The first paragraph is this:
“If the rise of Sam Bankman-Fried was a modern tale about cryptocurrency tokens and “effective altruism,” his fall seems to be as old as original sin. “This is really old-fashioned embezzlement,” John Ray, the caretaker CEO of the failed crypto exchange FTX, told the House on Tuesday. “This is just taking money from customers and using it for your own purpose, not sophisticated at all.””
I don’t think that amounts to depicting EA as banditry. The subject is Sam Bankman-Fried, not the effective altruism movement.
Meta-comment: the level of discussion here has been fantastic. It’s nice that these complex issues are discussed in this format, publicly and relatively informally (though other formats obviously have their advantages too). Thanks to all contributors.
My sense is that this post—as well as many other recent posts on the forum—focuses too much on PR/reputation relative to direct impact. Also, I think that insofar as we try to build a reputation, part of that reputation should be that we do things because we think they’re right for direct, non-reputational reasons. I think that gives a (correct) impression of greater integrity.
I think that in a relevant sense, there is an EA Leadership, even if EA isn’t an organisation. E.g. CEA/EV has been set up to have a central place in the community, and runs many coordinating functions, including the EA Forum, EA Global, the community health team, etc. Plus it publishes much of the key content. I think this comment overstates how decentralised the EA community is (for better or worse).
There are many considerations of relevance for these choices besides the risk of becoming or appearing like a cult. My sense is that this post may overestimate the importance of that risk relative to those other considerations.
I also think that in some cases, you could well argue that the sign is the opposite of the one suggested here. E.g. frugality could instead be seen as evidence of cultishness.
That will give me 3-6 times the strong voting power of a forum beginner, which seems like way too much.
Personally, I’d want the difference to be bigger, since I find what the best-informed users think much more informative.
Ideally, the karma system would also be more sensitive to the average quality of users’ comments and posts. Right now sheer quantity is rewarded more than ideal, in my view. But I realise it’s non-trivial to improve on the current system.
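One hedged sketch of what a more quality-sensitive scheme could look like (the formula and all parameters are my own illustration, not an actual forum proposal): base strong-vote weight on average karma per contribution, shrunk toward a prior so that users with very few contributions aren’t over-weighted.

```python
# Illustrative only: weight strong votes by average karma per
# contribution rather than by total karma. The prior terms damp the
# average for low-volume users (a “Bayesian average”).
def strong_vote_weight(total_karma: int, n_contributions: int,
                       prior_mean: float = 5.0,
                       prior_weight: int = 10) -> int:
    avg = (total_karma + prior_mean * prior_weight) / (
        n_contributions + prior_weight)
    # Scale the damped average to a small integer vote weight
    return max(1, round(avg / prior_mean))

# A prolific but average user vs. a less prolific high-quality user:
print(strong_vote_weight(2000, 400))  # -> 1
print(strong_vote_weight(900, 60))    # -> 3
```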
I agree with those who say that the analogy with the Cultural Revolution isn’t ideal.
Yes, there are some relevant similarities with the Cultural Revolution. But the fact that many millions were killed in the Cultural Revolution, and that the regime was a dictatorship, are extremely salient features. It doesn’t usually work to say “I mean that it’s like the Cultural Revolution in other respects, just not those respects”. Those features are so central and so salient that it’s difficult to dissociate them in that way.
Relatedly, I think that comparisons to the Cultural Revolution tend to function as motte-and-baileys (specifically, as hyperboles). They have rhetorical punch precisely because the Cultural Revolution was so brutal, and people find the analogy powerful because of the associations with that brutality.
But then when you get criticised, you can retreat and say “well, I didn’t mean those features of the Cultural Revolution—I just meant that there was ideological conformity, etc”—and it’s more defensible to say that parts of the US have those features today.
This post uses an alarmist tone to trigger emotions (“the vultures are circling”). I’d like to see more light and less heat. How common is this? What’s the evidence?
People have strong aversions to cheating and corruption, which is largely a good thing—but it can also lead to conversations on such issues getting overly emotional in a way that’s not helpful.