Sorry, I can’t provide more specific details about Malaria Consortium either. You might find some helpful material in the GiveWell reviews.
Nick Whitaker
How to start an advance market commitment
Why prediction markets aren’t popular
A perfect example of the dual and sometimes diametrically opposed meanings of “neutrality” in EA: to some it means being neutral between cause areas; to others it means being neutral in our approach to how to do the most good.
Hi there! Glad to hear you are taking an interest in these questions. I wanted to offer you a few general observations that might be helpful.
arguments against institutions like GiveWell that focus on giving away bednets, that talk about how it ends up making these communities dependent on donations and unable to produce their own bednets.
I think a few different questions might be getting linked together here. One question is the best way to get people an effective public health intervention, like malaria nets. Another is how we can ensure economic development, so that communities need not be dependent on foreign support.
To answer the first: from my perspective, there’s no reason any particular community needs to be able to make its own malaria nets. Rather, nets should be made by whoever has a comparative advantage in making them. It would be highly inefficient if every community threatened by malaria had to make its own nets; that’s why we trade. So I’m not sure about an argument that would require us to teach every malaria-stricken community to make nets itself.
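To make that concrete, here is a toy sketch with entirely made-up numbers (the regions, productivities, and terms of trade are all hypothetical) showing how both sides end up with more when each specializes in what it makes at the lowest opportunity cost:

```python
# Toy illustration of comparative advantage; all numbers are invented.
NETS_PER_DAY = {"A": 6, "B": 1}    # nets one worker-day produces in each region
GRAIN_PER_DAY = {"A": 3, "B": 4}   # grain one worker-day produces in each region
DAYS = 10                          # worker-days available to each region

def produce(region, days_on_nets):
    """(nets, grain) a region produces given how many days it spends on nets."""
    days_on_grain = DAYS - days_on_nets
    return (days_on_nets * NETS_PER_DAY[region],
            days_on_grain * GRAIN_PER_DAY[region])

# Self-sufficiency: each region splits its time evenly.
a_self, b_self = produce("A", 5), produce("B", 5)   # (30, 15) and (5, 20)

# Specialization: A makes only nets, B only grain, then they trade
# 15 nets for 17 grain (one of many mutually beneficial deals).
a_nets, _ = produce("A", DAYS)     # 60 nets
_, b_grain = produce("B", 0)       # 40 grain
a_trade = (a_nets - 15, 17)        # (45, 17) beats (30, 15)
b_trade = (15, b_grain - 17)       # (15, 23) beats (5, 20)

print("A: self-sufficient", a_self, "vs. specialize and trade", a_trade)
print("B: self-sufficient", b_self, "vs. specialize and trade", b_trade)
```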
But that does lead to the second question: how a community can become economically self-sufficient. This is a much more difficult question; in fact, it’s one of the big questions of economics, particularly development economics.
My understanding (noting this is a huge subject) is that we don’t know of any silver bullets, but it’s well accepted that better health, education, and institutions are a key part of the story. Because we don’t have a silver bullet, we can at least alleviate a preventable health problem like malaria. Hopefully, a healthier society will create the foundation for future prosperity and wellness, so that communities won’t be dependent on donations. In particular, studies of the long-term effects of malaria nets have found them highly effective at reducing the overall mosquito population.
This also means that mosquito nets have ‘positive externalities’. That is to say, they help people beyond the purchaser of the net. When goods have positive externalities, they tend to be undersupplied. That might help explain why communities aren’t already trading for more malaria nets, as well as the need for subsidy.
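As a purely illustrative sketch (the price and benefit figures below are made up, not real estimates), here is the undersupply logic in miniature: a self-interested buyer compares the price only to their own benefit, ignoring the benefit to neighbours, so a net that is worth more than it costs to society can still go unbought without a subsidy.

```python
# Made-up numbers illustrating undersupply under a positive externality.
NET_PRICE = 5.0           # hypothetical cost of one net
PRIVATE_BENEFIT = 4.0     # value to the household that sleeps under it
EXTERNAL_BENEFIT = 6.0    # value to neighbours from reduced transmission

def buys_net(price, benefit_to_buyer):
    """A self-interested buyer purchases only if their own benefit covers the price."""
    return benefit_to_buyer >= price

print("Household buys on its own?  ", buys_net(NET_PRICE, PRIVATE_BENEFIT))             # False
print("Socially worth providing?   ", PRIVATE_BENEFIT + EXTERNAL_BENEFIT >= NET_PRICE)  # True

# A subsidy covering the gap between price and private benefit restores the purchase.
subsidy = NET_PRICE - PRIVATE_BENEFIT
print("Household buys with subsidy?", buys_net(NET_PRICE - subsidy, PRIVATE_BENEFIT))   # True
```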
Also: I worry a bit about the word ‘sustainability’ in these contexts. One might have to, say, run away from a mugger at an unsustainable pace, but that’s all right because it’s an exigent circumstance; you aren’t going to be running forever.
I think when we say ‘unsustainable’ we usually mean something has negative externalities, like carbon emissions, so we can’t keep on the same path forever. But there are plenty of temporary measures that are unsustainable yet certainly worth doing. I agree with you that we should focus on which actions will have the best long-run consequences, but that doesn’t necessarily mean sustainable ones.
One last thing: Your point about weak currencies is very thoughtful. You might want to investigate the concept of purchasing power parity and Will MacAskill’s concept of the 100x multiplier.
Hopefully someone else can give further details about malaria nets or GiveWell, if you are curious to learn more!
This post leaves it a bit undertheorized why people are longtermists, and thus why longtermism now has such a large role in EA. You paraphrase a comment from Buck:
why these longtermists will not be receptive to conventional EA arguments
This suggests a misunderstanding to me. It was these conventional arguments that led EA funders and leaders to longtermism! If EA is a question of how to do the most good, longtermism is simply a widely agreed upon answer.
In fact, per the 2020 Rethink survey, more engagement in EA was associated with more support for longtermist causes. (Note this was before FTX became widely known, before the Future Fund existed, and before What We Owe the Future was released.)
I think there may be good reasons to create some distance between cause areas, but telling the most engaged EAs they need to start their own organization doesn’t seem very sensible.
(Note also that the EA community does not own its donor’s money.)
“Mom, can we have more EA whistleblowers”
“No, we have EA whistleblowers at home”
The EA whistleblowers at home:
See here. I’m not sure how EA Funds is pitched to donors. A new fund probably would be better regardless.
(Adding for context: I had heard EA Funds was being reorganized at one point, which suggested to me it might be looking for a new funding model)
Collin seems awesome. Really glad you thought to interview him.
I would hope that a majority of the EA community would agree that there aren’t good reasons for someone to claim ownership to billions of dollars. Perhaps there are those that disagree.
I would certainly disagree vehemently with this claim, and would hope the majority of EAs also disagree. I might clarify that this isn’t about arbitrarily claiming ownership of billions of dollars—it’s a question of whether you can earn billions of dollars through mutual exchange consistent with legal rules.
We might believe, as EAs, that it is either a duty or a supererogatory action to spend one’s money (especially as a billionaire) to do good, but this need not imply that one does not “own their money.”
(“EA community, in that some of it prioritizes helping the worst off, has a much stronger moral claim to donors money than do the donors” may also prove a bit too much in light of some recent events)
Sorry, it’s not clear to me what the screenshot even implies.
If the filmmaker asked a person at CEA, “Do you think X would be a good fit for the documentary?” and they said, “No, I don’t think so” and gave substantive reasons (“not really having a social impact beyond our circle”), that doesn’t necessarily imply even that the single person didn’t want them in the documentary (it could have been a casual judgement), much less that the institution of CEA didn’t want X in it. And given the filmmaker says they “might not include Zvi,” presumably his inclusion was still up to them!
Appreciate you engaging!
This is a really confusing post. I think if you are going to make an allegation, you should give a bit more context and be a bit more specific about what you are accusing CEA of. Do you mean Zvi Mowshowitz and Zeynep Tufekci? Both seem like well-respected people. Are you sure it wasn’t ultimately at the documentary maker’s discretion? Are these just offhand remarks by someone who works at CEA?
I understand the impulse to bring things to light, but reading this, I have no idea what is going on here.
Thanks for raising this. I haven’t been particularly persuaded by work in that vein but it’s certainly worth engaging with.
Central EA organizations should not make any major reforms for 6 months to allow for a period of reflection and avoid hasty decisions
Thank you for this. These are very interesting points. I have two (lightly held) qualms, though I’m not sure they obtain.
I suspect that, in the status quo, highly engaged, productive EAs who do work like yours do have a certain amount of influence over funding decisions. It certainly seems like most Future Fund regrantors fit into this pool. Obviously I don’t mean to imply everyone has the influence they deserve, but I do think this is meaningful when we compare the current state of EA with a potential new system.
I worry this attitude also plays into some potentially harmful dynamics, where each EA feels they have ownership over, and responsibility for, the entirety of EA. This may fuel things like a community organizer feeling the weight of every EA controversy on their own shoulders (I don’t know what was at play in that specific case), or an enthusiastic but naive 15-year-old feeling they deserve to make funding decisions because they have a forum account. Perhaps there could be some sort of demarcation separating people who are actually making this trade with a willing counterparty (or counterparties) from anyone who is currently working an EA job or otherwise associated.
Again, just the thoughts that came to mind, both tentative.
My apologies if this proves uncharitable. I interpreted Carla Zoe’s classification of this proposal as:
ideas I’m pretty sure about and thus believe we should now hire someone full time to work out different implementation options and implement one of them
as potentially endorsing grassroots attempts to democratize EA funding without funder buy-in. I do find the general ambiguity frustrating:
If you are going to make these proposals, please consider:
Who you are actually asking to change their behavior.
What actions you would be willing to take if they did not.
But if no one interested in reform would endorse a strategy like this, it’s simply my mistake.
I agree: creating an EA bureaucracy seems like the biggest problem lurking within these proposals.
I think it’s good that EA Funds are distributed in a technocratic way, rather than a democratic way, although I agree that more transparency would help people at least understand the decision processes behind granting decisions and allow for them to be criticized and improved.
I generally agree with this, though I don’t have a strong sense of how good EA Funds grants are. It just seems like a more reasonable grounds for debate than making demands of EA donors in general.
If they mostly care about AI timelines, subsidize some markets on that question. Funding platforms and research doesn’t seem particularly useful here (as opposed to much more direct research).
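For concreteness, one way a subsidized market could work is to fund an automated market maker such as Hanson’s logarithmic market scoring rule (LMSR), where the liquidity parameter b fixes the funder’s worst-case loss at b·ln(n) for n outcomes. The sketch below is only a rough illustration; the market outcomes and dollar figures are hypothetical.

```python
import math

# Minimal LMSR market maker: the subsidy is the funder's worst-case loss, b * ln(n).
class LMSRMarket:
    def __init__(self, outcomes, b):
        self.outcomes = list(outcomes)
        self.b = b                                # liquidity parameter chosen by the funder
        self.q = {o: 0.0 for o in self.outcomes}  # shares sold so far of each outcome

    def cost(self, q):
        return self.b * math.log(sum(math.exp(q[o] / self.b) for o in self.outcomes))

    def price(self, outcome):
        total = sum(math.exp(self.q[o] / self.b) for o in self.outcomes)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Amount a trader pays for `shares` of `outcome` (each pays $1 if it occurs)."""
        before = self.cost(self.q)
        self.q[outcome] += shares
        return self.cost(self.q) - before

    def max_subsidy(self):
        """Worst-case loss the funder commits to: b * ln(number of outcomes)."""
        return self.b * math.log(len(self.outcomes))

# Hypothetical market on a timelines question.
market = LMSRMarket(["transformative AI before 2040", "2040 or later"], b=1000.0)
print(f"Funder's maximum loss: ${market.max_subsidy():,.0f}")
print(f"Starting price of 'before 2040': {market.price('transformative AI before 2040'):.2f}")
paid = market.buy("transformative AI before 2040", 500)
print(f"Trader pays ${paid:,.0f}; new price {market.price('transformative AI before 2040'):.2f}")
```

The larger b is set, the more traders can earn by moving prices toward their beliefs, at the cost of a larger potential subsidy.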