I don’t think that “going silent” or failing to report donations is an indication that people are not meeting the pledge. Nowadays I don’t pay GWWC as an organisation much (or any) attention, but I’m still donating 10% a year (and then some).
To be honest I haven’t read closely enough to understand where you do and don’t account for “quiet pledge-keepers” in your analysis, but I at least think stuff like this is just plain wrong:
total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)
I couldn’t find The Clear Fund when I looked just now. Would be interested in someone confirming that it’s still there.
If you want to look up the maths elsewhere, it may be helpful to know that a constant, independent chance of death (or survival) per year is modelled by a geometric distribution (the single-event special case of the negative binomial).
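A minimal sketch of what that looks like, writing p for the assumed constant annual chance of death and T for the year in which death occurs:

```latex
% Geometric distribution: death first occurs in year t
P(T = t) = (1 - p)^{t-1} \, p, \qquad t = 1, 2, 3, \ldots
% Survival for at least n years, and expected lifetime
P(T > n) = (1 - p)^{n}, \qquad \mathbb{E}[T] = \frac{1}{p}
```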
Sounds like the fact that there was already substantial doubt over whether the program worked was a key part of the decision to shut it down. That suggests that if the same kind of scandal had affected a current top charity, they would have worked harder to continue the project.
I actually think even justifying yourself only to yourself, being accountable only to yourself, is probably still too low a standard. No-one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), it’s necessary to meet those individuals’ (or the community’s) standards of what’s acceptable.
When you say “you don’t need to justify your actions to EAs”, then I have sympathy with that, because EAs aren’t special, we’re no particular authority and don’t have internal consensus anyway. But you seem to be also arguing “you don’t need to justify your actions to yourself / at all”. I’m not confident that’s what you’re saying, but if it is I think you’re setting too low a standard. If people aren’t required to live in accordance with even their own values, what’s the point in having values?
It’s odd to call Boris an opponent of the government. He’s a sitting MP—he’s part of the state. To me this seems to be more about the courts being able to hold Parliament accountable.
I like the idea here a great deal, but I expect there’s going to be a lot of variation in what creates what effect in whom. I wonder if there are better ways to come up with aggregate recommendations, so we can find out what seems to be consistent in its EA appeal vs. what’s idiosyncratic.
There’s an unanswered question here of why Good Ventures makes grants that OpenPhil doesn’t recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don’t find it that surprising that they do so. People like to do more than one thing?
Have you attempted to contact GV or OpenPhil directly about this?
I think this is only true with a very narrow conception of what the “EA things that we are doing” are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That’s all I believe constitutes “EA things” in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on. If the global research community on poverty interventions came to the consensus “actually we think bednets are bad now” then EA orgs would need to listen to that and change course.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.
Downvotes aren’t primarily to help the person being downvoted. They help other readers, of whom there are, after all, many more than writers. Creating an expectation that every downvote should be explained significantly increases the burden on the downvoter, making downvotes less likely to be used and therefore less useful.
Just to remark on the “criminal law” point – I think it’s appropriate to apply a different, and laxer, standard here than we do for criminal law, because:
1. the penalties are not criminal penalties, and in particular do not deprive anyone of anything they have a right to, like their property or freedom – CEA are free to exclude anyone from EAG who in their best judgement would make it a worse event to attend,
2. we don’t have access to the kinds of evidence or evidence-gathering resources that criminal courts do, so realistically it’s pretty likely that in most cases of misconduct or abuse we won’t have criminal-standard evidence that it happened, and we’ll have to either act despite that or never act at all. Some would defend never acting at all, I’m sure (or acting in only the most clear-cut cases), but I don’t think it’s the mainstream view.
And this is a clear case in which I would have first-person authority on whether I did anything wrong.
I think this is the main point of disagreement here. When you make sexual or romantic advances on someone and those advances make them uncomfortable, you’re often not aware of the effect that you’re having (and they may not feel safe telling you), so you’re not the authority on whether you did something wrong.
Which is not to say that you’re guilty because they accused you! It’s possible to behave perfectly reasonably and for people around you to get upset, even to blame you for it. In that scenario you would not necessarily be guilty of doing anything wrong. But more often it looks like this:
1. someone does something inappropriate without realizing it,
2. impartial observers agree, having heard the facts, that it was inappropriate,
3. it seems clearly-enough inappropriate that the offender had a moral duty to identify it as such in advance and not do it.
Then they need to apologize and do what’s necessary to prevent it happening again, up to and including withdrawing from the community.
If I heard that a lot of people were feeling uncomfortable following interactions with me, I think it’s likely that I would apologize and back off before understanding why they felt that way, and perhaps without even understanding what behaviour was at issue.
I’d trust someone else’s judgement as much as or more than my own, particularly when there were multiple other someones, because I’m aware of many cases where people were oblivious to the harm their own behaviour was causing (and indeed, I don’t always know how other people feel about the way I interact with them, so I put a lot of effort into giving them opportunities to tell me). Obviously I’d apply some common sense to accusations that e.g. I knew to be factually wrong.
In the abstract, which of these do you think happens more often?
1. Someone makes people uncomfortable without being aware that they are doing so. Other people inform them.
2. Someone doesn’t make anyone feel uncomfortable (above the base rate of awkward social interactions). People erroneously tell them that they are doing so.
Now, the second is probably somewhat more likely than I’ve made it sound, but the first just seems way more ordinary to me. So my outside view is that the most likely reason for people to tell you that you’re making others uncomfortable is that you are actually doing that. You’re entitled to weigh this against what you know of the inside view, but I think it would be pretty weird to just dismiss it entirely.
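For what it’s worth, the outside-view argument can be put in odds form via Bayes’ rule. This is only a sketch, and the 5:1 prior is invented purely for illustration:

```latex
% Posterior odds = likelihood ratio (inside view) x prior odds (outside view)
\frac{P(\text{real} \mid \text{reported})}{P(\text{mistaken} \mid \text{reported})}
  = \frac{P(\text{reported} \mid \text{real})}{P(\text{reported} \mid \text{mistaken})}
    \times \frac{P(\text{real})}{P(\text{mistaken})}
% If case 1 is (hypothetically) five times as common as case 2, the prior odds
% are 5:1, and only strong inside-view evidence should overturn them.
```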
This is a relatively minor issue, perhaps, but the graph you show from the EggTrack report seems to have its “n=” numbers wrong. Looking at the report itself, the graph has the same values as (and immediately follows) another one which only includes the reported-against commitments, so I’m betting they just copied the numbers from that one accidentally.
(I haven’t yet tried to contact CIWF about this and probably won’t get around to it, but I’ll update this post if I do)
What was the largest amount that any individual got matched on GT? Given that this year the matching funds lasted only 15 seconds, can one person get through enough forms in that time to give a lot?
I think 2-10x is the wrong average multiplier across lottery winners (though, in fairness, you didn’t explicitly claim it was an average). In order to make good grants to new small high-risk things, you need to hear about them, and I suspect most lottery participants don’t have the necessary networks and don’t have special access to significant private information – after all, private information doesn’t spread well.
Concretely I’m suggesting that the median lottery participant doesn’t get any benefit at all from the ability to use private information.
We can imagine three categories of grants:
A. Publicly justifiable
B. Privately justifiable
C. Unjustifiable :)
I agree reports like Adam’s will move people from B to A, but I think they will also move people from C to A, by forcing them to examine their choices more carefully and hold themselves to a higher standard.
This model prompts two possible sources of disagreement: you could disagree about the relative proportions of people moving from B vs. from C, or you could disagree about how bad it is to have a mix of B and C vs. more A.
To address the second: if you think that B is 2-10x more valuable than A, then even if donations in category C are worthless (leaving aside the chance they are net negative), an equal mix of B and C is better than just A, and towards the 10x end of that spectrum you can justify up to 90% C and 10% B.
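To make that arithmetic explicit, a sketch under the stated assumptions (normalise an A-grant’s value to 1, write m for the B multiplier, and value C-grants at 0):

```latex
% Value per unit donated when a fraction b goes to B and 1 - b goes to C:
V(b) = m b + 0 \cdot (1 - b) = m b
% The mix matches or beats all-A whenever:
m b \ge 1 \iff b \ge \frac{1}{m}
% So at m = 2 an equal mix (b = 1/2) breaks even,
% and at m = 10 just 10% B (b = 1/10) justifies the other 90% C.
```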
But let’s return to that parenthetical – could more C donations be net negative, even aside from opportunity cost? I think this risk is underexamined. I suspect most projects won’t directly do harm, but well-funded blunders are more visible and reputationally damaging.
Or because their best granting opportunity can’t be justified with publicly-available knowledge, or has other weird optics / reputational concerns.