I think there’s a decent chance that diving deeper into the farming of invertebrates could reveal tens of billions of additional farmed animals that the movement currently largely neglects.
Suppose this is the case: what would the implications be? It does seem that general anti-speciesism efforts become comparatively more effective the more widely animal suffering is dispersed.
Hey Holly! A while after reading this post, I found an accountability partner. The system works well and we’ve built it out according to our own needs, but I like that you’ve kept yours simple. A lot of systems and lifehacks are very specific, which makes them a pain to implement.
So thank you and good job! :)
If I were to add anything to your advice, it would probably be: plan a brief evaluation after x (3? 6?) months to take a bigger-picture look:
Has the system actually helped you accomplish things, rather than just feel productive?
How does each of you like the other’s role? Would you like them to be stricter? Would you like them to focus on specific questions or pitfalls? (E.g. I asked my buddy to guard me against taking on too many responsibilities—when I plan a new project she asks me “OK, so what are you going to drop to make room for it?”)
Thanks for posting this. Some comments and questions:
I echo Habryka’s reluctance about cause-partial research, and I would have appreciated it if you’d shared a little more context for EAs, given that this is the EA Forum (for example, why Founders Pledge decided to research this, how it is and isn’t relevant from an EA perspective, and which assumptions it makes).
Some specific questions:
1. Why haven’t you compared charities on a comparable metric, such as the DALY or life-satisfaction points? Is this because the report assumes that what is (intrinsically or robustly) important is empowerment, and that this is hard to measure?
2. Why do you recommend both Village Enterprise and Bandhan’s ‘Targeting the Hardcore Poor’ programme? The latter appears 3.5 times as cost-effective (in terms of increasing purchasing power), and the organisations appear to have similar strength, room for more funding, and wider benefits.
3. Do you have a publicly available cost-effectiveness model for the 15 charities considered? By how much did these charities differ on comparable measures?
We always point out that the fund is focused on reducing suffering in the long-term future.
Also, why should they donate to that other fund instead? E.g., the Long-Term Future Fund is also importantly motivated by “astronomical waste” type considerations which those donors don’t understand either, and might not agree with.
Yes, I’m not saying you’re misleading your donors, nor that they are less informed than donors to other funds. Just that there are many reasons why people donate to a particular fund, and I think properly naming a fund is a step in the right direction.
I wouldn’t call it a coordination problem in the game-theoretic sense
I see it as coordination between different fund managers, each of whom wants to maximize the amount of funds for their own fund. As such, there are some incentives not to fully inform one’s donors of other funding possibilities, or of the best arguments against donating to the fund they are fundraising for.
I’m not saying that this type of selfish behavior is very present in the EA community—I’ve heard that it is quite the opposite. But I do think the situation is not yet optimal: the current allocation of resources is largely not based on carefully weighing the relevant evidence and arguments. I also think we can move closer to this optimal allocation.
Anyway, I didn’t mean to turn this into a large debate :) I’m glad the fund exists, and I’d be happy if the name changed to something more distinguishable!
I still perceive it as suboptimal, although I understand you don’t like any of the potential names.
We seriously considered “S-risk Fund” but ultimately decided against it because it seems harder to fundraise for from people who are less familiar with advanced EA concepts (e.g., poker pros interested in improving the long-term future).
I think this touches on a serious worry with fundraising from people unfamiliar with EA concepts: why should they donate to your fund rather than another EA fund if they don’t understand the basic goal you are aiming for with your fund?
I can imagine that people donate to the EAF fund for social reasons (e.g. you happen to be well-connected to poker players) more than for intellectual reasons (i.e. because they prioritize s-risk reduction). If that is the case, I’d find it problematic that the fund is not clearly named: it makes it less likely that people donate to particular funds for the right reasons.
Of course, this is part of a larger coordination problem in which all kinds of non-intellectual reasons are driving donation decisions. I am not sure what the ideal solution is*, but I wanted to flag this issue.
*Perhaps it should become a best practice for EA fundraisers to recommend that all funders go through a (to-be-created) donation decision tool that takes them through some of the relevant questions. That tool would be a bit like the flowchart from the Global Priorities Project, but more user-friendly.
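For illustration, here’s a minimal sketch of what such a tool could look like, assuming a simple question tree. The questions, answer options, and fund routing below are hypothetical placeholders, not actual recommendations:

```python
# Hypothetical sketch of a donation decision tool as a question tree.
# All questions, branches, and routing here are illustrative only.

QUESTIONS = {
    "start": ("Do you want to focus on improving the long-term future?",
              {"yes": "longterm", "no": "nearterm"}),
    "longterm": ("Is your priority reducing risks of astronomical suffering?",
                 {"yes": "EAF Fund", "no": "Long-Term Future Fund"}),
    "nearterm": ("Do you prioritize helping humans or non-human animals?",
                 {"humans": "Global Health and Development Fund",
                  "animals": "Animal Welfare Fund"}),
}

def recommend(node: str = "start") -> str:
    """Walk the question tree interactively; leaves are fund suggestions."""
    while node in QUESTIONS:
        question, branches = QUESTIONS[node]
        answer = input(f"{question} ({'/'.join(branches)}): ").strip().lower()
        node = branches.get(answer, node)  # unrecognized input: ask again
    return node

print(f"Suggested fund: {recommend()}")
```

A real version would of course need many more questions and careful wording, but even a shallow tree like this would prompt donors to engage with the relevant distinctions before choosing a fund.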
Great to see this set up!
Small note on the name: it is not at all clear to people unfamiliar with EAF what the fund is about, and a name change would probably get you more attention. Something like “suffering prevention fund”, “suffering reduction fund”, “long-term suffering reduction fund”, or “suffering risk fund” would all be significantly clearer (even though they feel inadequate to describe the fund’s goals).
Okay, points taken. I should have been much more careful given the strength of the accusation, and the accusation that DGB was written “in bad faith” seems (far) too strong.
I guess I have a tendency to support efforts that challenge common beliefs that might not be held for the right reasons (in this case, “DGB is a rigorously written book and a good introduction to effective altruism”). This seemed to outweigh the costs of criticism, likely because my intuition often underestimates those costs. However, the OP challenged a much stronger common belief (“Will MacAskill is not an untrustworthy person”) and I should have distinguished these better (both in my mind and in writing).
When I was writing it, I was very doubtful about whether I was phrasing it correctly, and I don’t think I succeeded. I meant “written in bad faith” less strongly than it reads, but somewhat more strongly than ‘written with less attention to detail than it could have been’: i.e. that less attention was given to details that wouldn’t pan out in favour of EA. More along the lines of this:
“sloppy, and perhaps with a subconscious finger on the scale tilting the errors to be favourable to the thesis of the book” rather than deceit, malice, or other ‘bad faith’.
I also have a lower credence in this now. I should add that my use of “convincing” was also too strong, as it might be interpreted as >95% credence rather than the >60% credence I actually held at the time of writing.
[EDIT: this was not a very careful comment and multiple claims were stated more strongly than I believed them, as well that my beliefs might have been not so well-supported]
I admire the amount of effort that has gone into this post and its level of rigor. I think it’s very important for an epistemically healthy movement that high-status people can be criticised successfully.
I think your premises do not fully support the conclusion that MacAskill is completely untrustworthy. However, I agree that the book systematically misrepresents sources, and this is a convincing sign it was written in bad faith.
I hope that MacAskill has already realized the book was not up to the standards he now promotes. Writing an introduction to effective altruism was and remains a very difficult task, and at the time there was still a mindset of “push EA even if it’s at the cost of some epistemic honesty”. I think the community has been moving away from this mindset since, and this post is a good addition to that.
We need a better introductory book. (Also because it’s outdated.)
But we might oppose efforts like the Nuclear Threat Initiative, which disproportionately save violent-psychology worlds.
Does it? It seems a lot of the risk comes from accidental catastrophes rather than intentional ones, and accidental catastrophes don’t seem to me like evidence that the future will be violent.
Also, I think we should treat our efforts to reduce the risk of intentional catastrophe or inflicted suffering as evidence. Why wouldn’t the fact that we choose to reduce the impact of malicious actors be evidence that malicious actors’ impact will also be curtailed by other actors in the future?
I’ve been reading Phil Torres’s book on existential risks, and I agree with him to the extent that people have been too dismissive about the number of omnicidal agents and their capability to destroy the world. I think his reaction to Pinker would be that the level of competence needed to create disruption is decreasing because of technological development, so historical precedent is not a great guide. See: Who would destroy the world? Omnicidal agents and related phenomena
The capacity for small groups and even single individuals to wreak unprecedented havoc on civilization is growing as a result of dual-use emerging technologies. This means that scholars should be increasingly concerned about individuals who express omnicidal, mass genocidal, anti-civilizational, or apocalyptic beliefs/desires. The present article offers a comprehensive and systematic survey of actual individuals who have harbored a death wish for humanity or destruction wish for civilization. This paper thus provides a strong foundation for future research on “agential risks” and related issues. It could also serve as a helpful resource for counterterrorism experts and global risk scholars who wish to better understand our evolving threat environment.
It’s an interesting question how likely recovery from civilizational collapse is, and talking about ‘stepping down in complexity’ might be useful. I’ve previously only seen this discussed in terms of whether we lose agriculture, science, or industry (see e.g. Baum et al., 2018). The author seems to be implicitly referring to the Energy-Complexity Spiral by Joseph Tainter, a fascinating concept:
The common view of history assumes that complexity and resource consumption have emerged through innovation facilitated by surplus energy. This view leads to the supposition that complexity and consumption are voluntary, and that we can therefore achieve a sustainable future through conservation. Such an assumption is substantially incorrect. History suggests that complexity most commonly increases to solve problems, and compels increase in resource use. This process is illustrated by the history of the Roman Empire and its collapse. Problems are inevitable, requiring increasing complexity, and conservation is therefore insufficient to produce sustainability.
It seems most x-risk scholars believe the probability of recovery is really high (>90%) as long as something like the scientific method is preserved (the ‘last few people’ problem). I think this is likely to be correct, and that failures to recover come either from extinction (70% of failed recoveries) or from loss of the scientific method (30%). Permanent loss of technology seems unlikely to me, as technological development offers many advantages and is observed in most cultures.
To help communicate this, it may be helpful if organisations published typical ratios of applicants to hires to let people plan accordingly.
This would be helpful indeed! Although I suspect organizations would be reluctant to do this, as it might put off applicants. On the other hand, it might deter applications that would not be competitive anyway and thereby save a lot of work. This data would also be useful for job seekers deciding whether to apply mostly within EA or to focus their efforts on impactful opportunities outside EA (like me). If there are 1,000 applicants for 50 positions yearly, I should not bet too much on getting into an EA org. This should especially encourage applicants who don’t yet have the right skills or knowledge to apply outside EA to skill up, instead of hanging around in EA limbo.
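To make the back-of-the-envelope reasoning explicit, here’s a small sketch using the hypothetical 1,000-applicants/50-positions numbers above, under the (unrealistic) simplifying assumption that every application has the same independent base rate:

```python
# Sketch: what published applicant-to-hire ratios could tell a job seeker.
# The numbers are the hypothetical ones from the comment above.

applicants_per_year = 1_000
positions_per_year = 50

# Naive base rate: assume every applicant is equally likely to be hired.
base_rate = positions_per_year / applicants_per_year  # 0.05

def p_at_least_one_offer(n: int, p: float = base_rate) -> float:
    """Chance of at least one offer after n independent applications."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10):
    print(f"{n} applications -> {p_at_least_one_offer(n):.0%} chance of an offer")
# 1 applications -> 5% chance of an offer
# 5 applications -> 23% chance of an offer
# 10 applications -> 40% chance of an offer
```

Real odds differ per person (fit and skill matter far more than the base rate), but published ratios would at least let applicants anchor on something better than guesswork.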
I think it is worth thinking more about how the application processes of EA orgs can create more value for the community. Reviewing hundreds of applications yields a unique data set, so this is an opportunity for the organization to help the wider community. One concrete action is publishing or sending general feedback about what differentiated top applicants from the rest, and which skills and backgrounds were in wide supply versus rare and useful. This could also be reused in later hiring processes for similar roles. There’s more discussion of this topic in this Facebook post.
Something along those lines. Thanks for interpreting! :)
What I was getting at was mostly that praise/recognition should be a smooth function, such that things not branded as EA still get recognition if they’re only a tenth as effective, instead of the current situation (as I perceive it) where something that is not maximally effective gets no recognition at all. I notice in myself that I find it harder to assess and recognize the impact of non-EA-branded projects.
I expect this is partly because I don’t have access to (or don’t understand) the reasoning, so I can’t assess the expected value, but partly because I normally lean on status for EA-branded projects. For example, if Will MacAskill starts a new project I will predict it’s going to be quite effective without knowing anything about it, while I’d be skeptical about an unknown EA.
Absolutely agree! :) I think this also extends to “non-EA” causes and projects that do good: sure, they’re not the most effective, but they’re still improving or saving lives, and that’s praiseworthy.
Relatedly, I think it’s hard to be motivated by subjective expected value, even if that’s what most people think we should maximize. When something with genuinely high expected value turns out not to be successful (so the failure isn’t the result of bad analysis), the action should still be praised. I’m afraid that the ranking of actions by expected value diverges significantly from the ranking by expected recognition (from oneself and others), and I think this should be somewhat worrying.
Coming back to the post, I also think the drop in recognition is too large when the absolute value realized is not maximal. I’m curious what the optimal recognition function is (square root of expected value?), but I think that’s a bit beside the point of this post!
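Purely to illustrate that aside, here’s a toy comparison of a linear recognition function with a square-root one; the project values and normalization are made up:

```python
# Toy comparison: linear vs. square-root recognition as a function of
# expected value (EV). Values and normalization are illustrative only.
import math

projects = {"non-EA project": 1.0, "decent EA project": 10.0, "top EA project": 100.0}

for name, ev in projects.items():
    linear = ev / 100.0             # recognition proportional to EV
    concave = math.sqrt(ev) / 10.0  # sqrt recognition, normalized to [0, 1]
    print(f"{name:>18}: linear {linear:.2f} vs sqrt {concave:.2f}")

# A project 1/100th as effective gets 1% recognition under the linear rule
# but 10% under the square-root rule -- a concave function keeps "merely
# good" work from being ignored entirely.
```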
It would be useful to define what you mean by impressions and credences. “Credences” in particular is not a word commonly used outside of EA or fields related to decision theory.
Hey JP, I don’t mean to sound condescending or policing, but I find this comment to be full of (unnecessary) jargon which would be very hard to read for newcomers. What would help:
Writing out abbreviations (“EV” --> “expected value”)
Writing out the reasoning instead of a concept (“the value of the opportunity cost” --> “the benefits of pursuing a ‘traditional’ EA-aligned career”)
I just want to note that not every rejected application has been burnt value for me; most have actually been positive, especially in terms of things learned. In the ones where I got far, rejection resulted in more rather than less motivation. In the cases where I had to do work-related tasks (write a research proposal, or execute a sample of typical research), I learned a lot.
On the other hand, increasing the applicants-to-hires ratio would mostly increase the proportion of people who don’t get far in the application process, which is where the positive factors are weakest and the negative ones strongest.
The narrative that “EA is talent constrained” has had, as mentioned, some negative and unintended consequences. One more I’d like to add: on this narrative, the advised action “go work for a larger organization, like the government” might feel to many people like failure or “not living up to their goals.” I’m afraid this leads to more value drift—people’s values sliding away from EA-aligned values—because they feel they’re not “living EA” anyway.
A first thought I have is “if the current organizations cannot grow fast enough to use all the available talent, why not increase the growth rate of the movement?” That is, should we start more (paid) projects?
I think there is a case to be made that we should: there seem to be many small projects with capable people seeking funding. They are not as effective per employee as some of the core organizations, but the core organizations don’t need the money. I don’t hold this opinion strongly, and I’m curious what other people think.
Roughly when will the new forum be launched, if I may ask?