I like the idea here a great deal, but I expect there’s going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal vs. what’s idiosyncratic.
There’s an unanswered question here of why Good Ventures makes grants that OpenPhil doesn’t recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don’t find it that surprising that they do so. People like to do more than one thing?
Have you attempted to contact GV or OpenPhil directly about this?
I think this is only true with a very narrow conception of what the “EA things that we are doing” are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That, I think, is all that “EA things” means in your usage. Funding bednets, or policy reform, or AI risk research is contingent on combining those core EA ideas (which we take for granted) with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on. If the global research community on poverty interventions came to the consensus “actually, we think bednets are bad now”, then EA orgs would need to listen to that and change course.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.
Downvotes aren’t primarily there to help the person being downvoted. They help other readers, of whom there are, after all, many more than there are writers. Creating an expectation that every downvote should be explained significantly increases the burden on the downvoter, making downvotes less likely to be used and therefore less useful.
Just to remark on the “criminal law” point – I think it’s appropriate to apply a different, and laxer, standard here than we do for criminal law, because:
the penalties are not criminal penalties, and in particular do not deprive anyone of anything they have a right to, like their property or freedom – CEA are free to exclude from EAG anyone who, in their best judgement, would make it a worse event to attend,
we don’t have access to the kinds of evidence or evidence-gathering resources that criminal courts do, so realistically it’s pretty likely that in most cases of misconduct or abuse we won’t have criminal-standard evidence that it happened, and we’ll have to either act despite that or never act at all. Some would defend never acting at all, I’m sure (or acting in only the most clear-cut cases), but I don’t think it’s the mainstream view.
And this is a clear case in which I would have first-person authority on whether I did anything wrong.
I think this is the main point of disagreement here. When you make sexual or romantic advances on someone and those advances make them uncomfortable, you’re often not aware of the effect you’re having (and they may not feel safe telling you), so you’re not the authority on whether you did something wrong.
Which is not to say that you’re guilty just because they accused you! It’s possible to behave perfectly reasonably and for people around you to get upset, even to blame you for it. In that scenario you would not necessarily be guilty of doing anything wrong. But more often it looks like this:
someone does something inappropriate without realizing it,
impartial observers agree, having heard the facts, that it was inappropriate,
it seems clearly enough inappropriate that the offender had a moral duty to identify it as such in advance and not do it.
Then they need to apologize and do what it takes to prevent it happening again, including withdrawing from the community if necessary.
If I heard that a lot of people were feeling uncomfortable following interactions with me, I think it’s likely that I would apologize and back off before understanding why they felt that way, and perhaps without even understanding what behaviour was at issue.
I’d trust someone else’s judgement as much as, or more than, my own, particularly when there were multiple other someones, because I’m aware of many cases where people were oblivious to the harm their own behaviour was causing (and indeed, I don’t always know how other people feel about the way I interact with them, and I put a lot of effort into giving them opportunities to tell me). Obviously I’d apply some common sense to accusations that, e.g., I knew to be factually wrong.
In the abstract, which of these do you think happens more often?
Someone makes people uncomfortable without being aware that they are doing so. Other people inform them.
Someone doesn’t make anyone feel uncomfortable (above the base rate of awkward social interactions). People erroneously tell them that they are doing so.
Now, the second is probably somewhat more likely than I’ve made it sound, but the first just seems way more ordinary to me. So my outside view is that the most likely reason for people to tell you that you’re making others uncomfortable is that you actually are. You’re entitled to weigh this against what you know of the inside view, but I think it would be pretty weird to just dismiss it entirely.
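To make that base-rate reasoning explicit, here’s a rough Bayesian sketch – the numbers are purely illustrative assumptions, not estimates. Let $U$ be “you are actually making people uncomfortable” and $T$ be “people tell you that you are”. Then

$$P(U \mid T) = \frac{P(T \mid U)\,P(U)}{P(T \mid U)\,P(U) + P(T \mid \neg U)\,P(\neg U)}.$$

Even with a fairly low prior like $P(U) = 0.1$, if genuine discomfort gets reported ten times as readily as erroneous reports arise (say $P(T \mid U) = 0.5$ vs. $P(T \mid \neg U) = 0.05$), the posterior comes out around $0.53$ – on those assumptions, a report alone moves you from “probably not” to “more likely than not”.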
This is a relatively minor issue, perhaps, but the graph you show from the EggTrack report seems to have its “n=” numbers wrong. Looking at the report itself, the graph has the same values as (and immediately follows) another one which only includes the reported-against commitments, so I’m betting they just copied the numbers from that one accidentally.
(I haven’t yet tried to contact CIWF about this and probably won’t get around to it, but I’ll update this post if I do)
What was the largest amount that any individual got matched on GT? Given that this year the matching funds lasted only about 15 seconds, can one person get through enough forms in that time to give a lot?
I think 2-10x is the wrong average multiplier across lottery winners (though, in fairness, you didn’t explicitly claim it was an average). In order to make good grants to new, small, high-risk projects, you need to hear about them in the first place, and I suspect most lottery participants don’t have the necessary networks or any special access to significant private information – after all, private information doesn’t spread well.
Concretely I’m suggesting that the median lottery participant doesn’t get any benefit at all from the ability to use private information.
We can imagine three categories of grants:
A. Publicly justifiable
B. Privately justifiable
C. Unjustifiable :)
I agree reports like Adam’s will move people from B to A, but I think they will also move people from C to A, by forcing them to examine their choices more carefully and hold themselves to a higher standard.
This model prompts two possible sources of disagreement: you could disagree about the relative proportions of people moving from B vs. from C, or you could disagree about how bad it is to have a mix of B and C vs. more A.
To address the second: if you think that B is 2-10x more valuable than A, then even if donations in category C are worthless (leaving aside the chance that they’re net negative), an equal mix of B and C can be better than just A, and towards the 10x end of that spectrum you can justify up to 90% C and 10% B.
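To spell out that arithmetic (a minimal sketch, where $k$ is the assumed value multiplier of B over A and $p$ is the fraction of the lottery money that ends up in B rather than C, with C valued at zero):

$$\underbrace{p \cdot k + (1 - p) \cdot 0}_{\text{value of the B/C mix per dollar}} \;>\; \underbrace{1}_{\text{value of A per dollar}} \iff p > \frac{1}{k}.$$

At $k = 10$ the break-even point is 10% B / 90% C, which is where the figure above comes from; at $k = 2$ you’d need a majority of the money in B, so the equal-mix claim depends on the multiplier being comfortably above 2.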
But let’s return to that parenthetical – could more C donations be net negative, even aside from opportunity cost? I think this risk is underexamined. I suspect most projects won’t directly do harm, but well-funded blunders are more visible and reputationally damaging.
Or because their best granting opportunity can’t be justified with publicly available knowledge, or has other weird optics / reputational concerns.
So, I’m instinctively creeped out by any attempt to reduce the number of humans, and my initial reaction to this idea was basically “yikes”. Having taken time to reflect and read the report, I’ve come around a little, in that improving access to contraception seems hard to oppose even if you’re broadly in favour of more humans rather than fewer (though note that it is opposed by some religious groups).
That said, I still think there’s greater potential for extreme negative reactions to this idea than you appreciate. In particular, white wealthy people targeting low-income countries with the explicit aim of reducing their population has a chance of tripping people’s “eugenics sirens” and drawing comparisons with the long and racist history of compulsory sterilizations. I’m not saying I would agree with those comparisons – it seems very clear that your motivations are different, and the ethnicity of your target group is coincidental / irrelevant – but I don’t think that everyone would believe in your good faith as much as I do; some compulsory or semi-coercive sterilization was done covertly and in the guise of helping the recipients, so some may feel obliged to be especially wary of anything superficially similar.
You briefly addressed reputational risk in this passage:
The intervention is middling in terms of reputational and field building effects, because there is no significant risk of turning people off animal advocacy or vegetarianism if the organization wouldn’t be promoted as a directly animal-focused charity.
Bluntly, this comes across as dishonest. Aren’t you worried that people might discover your true motivations aren’t the same as your apparent ones, and distrust animal advocates in future?
In the UK, there is the All-Party Parliamentary Group for Future Generations, although I’m not sure how much they actually do.
Also, if you do this, please come back and tell us what you discovered :)
On what grounds do you expect EAs to have better personal ability?
Something I’ve been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equivalently competent people, and thus at a given level of ambition, EAs would be systematically less competent. I don’t have a huge amount of evidence for this being borne out in practice, but it’s one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.
I think this framing is a good one, but I don’t immediately agree with the conclusion you draw about which level to prioritize.
Firstly, consider the benefits we expect from a change in someone’s view at each level. Do most people stand to improve their impact more by choosing the best implementation within their cause area, or by switching to an average implementation in a more pressing cause area? I don’t think this is obvious, but I lean towards the latter.
Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.
Low-level comparisons tend to require domain-specific expertise, which we won’t be able to have across a wide range of domains.
I also think there’s just a much greater deficit of high-quality discussion of the higher levels. They’re virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that; I’m not sure a good discussion of the low-level question would have captured me as effectively.
I don’t think you should update too much on people being unkind on the internet :)
There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it’s worthy of particular attention.