I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
Are the projects of most EA individuals truly independent in the sense of their EVs being essentially uncorrelated with each other? That would be surprising to me, given that many of those projects are conditional on positive evaluation from a small number of funders, and many arose out of the same meta (so would be expected to have meaningful correlations with other active projects).
So my prediction is that most EA stuff falls into one of your two caveats. What I don’t have a good sense of is how correlated the average EA work is, and thus the degree of caution / risk aversion implied by the caveats.
EA global health & development is still largely focused—in Saunders-Hastings’ terminology—on “low-hanging fruit” like bednets. Its overall spending is somewhere around $1 per person in extreme poverty per year. In my view, those practical realities make me a lot less concerned about these types of criticisms than I might be if the spending were ~ an order of magnitude higher. Most of the proposed mechanisms of harm, like undermining governments, seem rather speculative at low levels of expenditure.
Also, at least the GiveWell top charities target specific causes of under-five child mortality. In general, I’m less concerned about paternalism toward infant beneficiaries than I would be for a program that risked overriding the preferences of adult intended beneficiaries who are capable of making their own choices.
Finally, I think alternatives like funding “local organizations with the power to shape change from within” could risk more significant neo-colonialism, especially if not done very carefully. Investing tens to hundreds of millions a year into fostering social change in the Global South would give EAs a whole lot more directional influence over the lives of the people who live there than distributing bednets. Not only would we be picking which types of change agents got funded in the first place; the preferences and values of an organization’s funding source can also have a powerful influence on the organization(s) in question. I think that would be a very hard problem to fix, especially because assessing effectiveness (and eligibility for continued funding) in this context would be much more subjective and value-laden than in the anti-malarial context.
The article relies on an analysis by an IP law professor, which in turn rests on an analysis of statutory damages under copyright law and Judge Alsup’s findings. “Business-ending” is a direct quote from said professor. That strikes me as a reasonable basis on which to characterize the liability as potentially business-ending, even if people on Manifold do not seem to agree.
Juries are hard to predict, especially where the allowable range for statutory damages is so wide. That the infringement was willful and by a sophisticated actor doesn’t help Anthropic here.
I’d love to hear what their lawyers had to say about all this before the piracy happened (or maybe they weren’t even consulted?). They had to expect copyright suits from the get-go; did they not understand that upping the ante with mass piracy was likely to make things much worse?
I think many talented EAs are looking for EA jobs, but often it’s a question of “fit” over just raw competence.
For the significant majority of EAs, does there exist an “EA job” that is a sufficiently good fit as to be superior to the individual’s EtG alternative? To count, the job needs to be practically obtainable (e.g., the job is funded, the would-be worker can get it, the would-be worker does not have personal characteristics or situations that prevent them from accepting the job or doing it well).
I would find it at least mildly surprising for the closeness of fit between the personal characteristics of the EA population and the jobs available to be that tight.[1]
- ^
For most social movements, funding only allows a small percentage of the potentially-interested population to secure employment in the movement (such as clergy or other religious workers in a religious movement), so they do not face this sort of question. But I’d be skeptical that (e.g.) 85% of pretty religious people are well-suited to work as clergy or in other religious occupations.
- ^
I speculate that there are enough differences at play that a significant fraction of people should choose direct work and a significant fraction should choose EtG.
It is often asserted that impact/success is significantly right-tailed. If that’s so, a modest raw difference in an individual’s suitability for EtG vs. suitability for direct work might create a large difference in expected outcome. Making numbers up, even the difference between being at the 99.99th percentile in one suitability (compared to the general population) versus the 99.9th percentile in the other could matter a great deal (a rough numerical illustration follows after these points). And it’s plausible to think that even if the two suitabilities are highly correlated, people could easily have these sorts of differences. I don’t think the difference needs to be anywhere near being “very mediocre [at] EA jobs” vs. “great at earning money.”
There have been discussions about the relative importance of value alignment in hiring for direct work. Although the concept is slippery to define, it seems less likely that the concept as applied to direct work significantly predicts success at EtG. There is a specific virtue that is needed for success at EtG—related to following through on your donation plans once the money rolls in—but it’s questionable whether this is strongly correlated with the kind of alignment that factors into success for “EA jobs.”
The differences involve not only skill sets but also idiosyncratic personal attributes such as presence of other obligations (e.g., family commitments), psychological traits, individual passions, and so on. These things differ among potential workers, and one would expect them to point in different directions.
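As a rough illustration of the right-tail point above, here is a minimal sketch that assumes a lognormal impact distribution; the shape parameter is invented for the example and is not drawn from any actual data about EA roles or earnings.

```python
# Illustrative only: a lognormal stand-in for a right-tailed impact distribution.
# The shape parameter (sigma = 2) is made up for the sake of the example.
import numpy as np
from scipy.stats import lognorm

dist = lognorm(s=2.0, scale=1.0)

p999 = dist.ppf(0.999)    # value at the 99.9th percentile
p9999 = dist.ppf(0.9999)  # value at the 99.99th percentile

print(f"99.9th percentile:  {p999:,.0f}")
print(f"99.99th percentile: {p9999:,.0f}")
print(f"ratio: {p9999 / p999:.1f}x")
```

Under these made-up parameters, the value at the 99.99th percentile is several times the value at the 99.9th percentile, which is the kind of gap that could swamp a modest raw difference in suitability.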
It seems clear to me that both individuals and societies regularly trade off between human life extension and other human goals, including the reduction of human suffering. One has to at least implicitly make that tradeoff when deciding on a governmental budget, or deciding how often you will have a colonoscopy. If it’s not possible to decide how to trade off these things, I think we have a problem that is practically much bigger than effective altruism.
The less well-trodden question to me, then, is whether we can estimate a tradeoff between animal suffering and human suffering. For most people, I think that’s where more of the uncertainty lies. But I’m not sure whether that is the case for you or not.
If we can compare the moral value of a year’s worth of human life extension to the value of reducing human suffering caused by a stimulus of specified severity, and then compare that to the value of reducing animal suffering caused by the same stimulus, then we should be able to compare the human life extension to the reduction of animal suffering.
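Put schematically (this notation is mine, not anything from the original discussion), the transitive step is just a product of two exchange rates:

$$1 \text{ life-year} \sim a \text{ units of human-suffering relief}, \qquad 1 \text{ unit of human-suffering relief} \sim b \text{ units of animal-suffering relief}$$

$$\Rightarrow \quad 1 \text{ life-year} \sim a \cdot b \text{ units of animal-suffering relief},$$

where $a$ and $b$ are whatever exchange rates one settles on for the two halves.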
I’m curious whether the crux is more on the first half of the equation, or the second. (Or whether you think the transitive logic just doesn’t work here.)
I like this. Along the same lines, explicitly communicating to desk-rejected candidates that they are clearly out of scope may discourage repeat out-of-scope applications (if this isn’t already being done).
Given that the significant majority of applications resulted in desk rejections, does it follow that you are seeing a significant number of applications that are simply not viable? I am wondering whether it would be helpful to say something about those, to help applicants with clearly non-viable applications avoid spending time on the application process.
Although you haven’t asked for advice, and imply that you’re on your way out of the EA community, I am going to offer some advice anyway. I don’t think the issue you describe is particularly uncommon here, and individuals with somewhat similar issues may be reading this thread.
This isn’t the advice I would give after most first or even second complaints, but I think it is warranted after a string of incidents significant enough to warrant a year-long EAG ban (and probably at least some version of it was warranted at some juncture before that).
I submit that, if you were to return to in-person EA spaces,[1] you would really need to employ some bright-line rules to prevent future harm while in those spaces. For instance: No flirting. No touching women.[2]
Based on your narrative, it seems that your best efforts at avoiding harm while employing more flexible standards are not succeeding. If following judgment-based standards is not preventing harassment,[3] then I don’t see another viable option other than bright-line rules. And those will have the usual downside of bright-line rules—they will rule out some behavior that would actually have been OK.
Although you characterize the situation as the community deciding “to value the comfort of some subset of women over guys like me” (emphasis added), I would characterize it as the community valuing fellow community members’ right to bodily integrity and right to a harassment-free experience over your interest in continuing to interact with women in ways a number of them find inappropriate and/or offensive enough to have reached out to Community Health.
These sorts of rules are necessary in some professional spaces for some actors (for instance, psychiatrists and other mental health professionals). It seems that they would be necessary for you in EA spaces.[4] While I’m not suggesting that a bright-line rule against flirting is necessary for EA spaces in general, I think it may often be necessary for those who have a history of harassing behavior.
I recognize that creates a burden for you that others do not have to bear. But it does not impair your ability to participate in the core of what EA is. And some differential burden is unavoidable in life.
For example, there are a number of reasons that are not a person’s fault but that nonetheless can make them unsuitable to drive a motor vehicle (or to do so only under limited conditions) -- extreme clumsiness, severe anxiety disorders, blindness, seizure disorders, etc. People are morally culpable (and expose themselves to criminal liability) if they recklessly[5] keep driving even after it is clear that their condition renders their continued driving an unacceptable risk to other road users. Ultimately, it is not enough that these individuals care about avoiding harm; they have to stop driving at least once it is clear enough that no lesser alternative will protect other road users’ rights.[6] Based on your post, I think you may be in a roughly analogous situation here.
- ^
I recognize that it may be difficult to discern exactly what counts as an “EA space,” and am cognizant of the downsides of extending EA “jurisdiction,” as it were, too far into people’s private lives. I’d suggest that anything that happens in the host city of an out-of-town EA event, or on the day of an in-town one, is very likely to involve an EA space. This definition is doubtless underinclusive. For instance, I would consider most parties with a bunch of people who are EAs to be “EA spaces,” similar to how I would usually treat a party with a bunch of people from the same office/employer as a work-related event for harassment purposes.
I also recognize that the question of interactions with EAs outside of “EA spaces” is outside the scope of this response.
- ^
I am assuming from your framing that all the complaints have involved women.
- ^
Given the number of reports to Community Health, I feel confident that this is the correct characterization.
- ^
What rules you should follow outside EA spaces is largely beyond the scope of this comment. It is likely that you should follow similar rules in at least some contexts—e.g., where the other party is on the job, and thus their ability to extricate themselves from an uncomfortable situation is limited by that status. On the other hand, certain other social spaces may warrant less restrictive rules.
- ^
Lightly adapting from the Model Penal Code, recklessness exists when an individual “consciously disregards a substantial and unjustifiable risk that” a certain kind of harm “will result from his conduct.”
- ^
Readers who live in areas with good public transportation may underestimate how severe the impact of losing the ability to drive is for most people in the United States.
In theory, many hospitals and universities in the US should fit into the “positive externalities” model—they significantly rely on both program service revenue and subsidies from private donors. My understanding is that nonprofit hospitals and universities are often poorly governed, so I’m curious whether this is related to the serving-two-masters problem as opposed to sector-specific pathologies.
ETA: Someone who knows more than I do about microfinance may be able to comment on how experience with microfinance orgs updates toward or against the broader viability of subsidizing positive externalities within an unprofitable business model.
The tagline for the job board is: “Handpicked to help you tackle the world’s most pressing problems with your career.” I think that gives the reader the impression that, at least by default, the listed jobs are expected to have positive impact on the world, that they are better off being done well/faithfully than being unfilled or filled by incompetent candidates, etc.
Based on what I take to be Geoffrey’s position here, the best case that could be made for listing these positions would be: it could be impactful to fill a position one thinks is net harmful to prevent it from being filled by someone else in a way that causes even more net harm. But if that’s the theory of impact, I think one has to be very, very clear with the would-be applicant on what the theory is. I question whether you can do that effectively on a public job board.
For example, if one thinks that working in prisons is a deplorable thing to do, I submit that it would be low integrity to encourage people to work as prison guards by painting that work in a positive light (e.g., handpicked careers to help you tackle the nation’s most pressing social-justice problems).
[The broader question of whether we’re better off with safety-conscious people in these kinds of roles has been discussed in prior posts at some length, so I haven’t attempted to restate that prior conversation.]
We could start with the survey data suggesting that the difference between the first- and second-place choices is about one-third of the total value of the position, and then adjust downward from there.
I would adjust downward considerably, especially for more junior positions, for various reasons:
Although I admittedly haven’t done the research here, these results are not consistent with my intuition about the distribution of ability levels in applicant pools more generally. So that would be my starting point, and I’d want to see strong reasons for a much bigger delta in EA between the first and second choices in a largish candidate pool than in other professional fields.
As an intuition pump: Taken literally, these results suggest an indifference between the first choice working ~0.68 FTE for the same salary, management overhead, etc. as the second choice working a full FTE (if the second choice delivers roughly two-thirds of the first choice’s value, then the first choice at roughly two-thirds time matches the second choice at full time, assuming output scales with hours). Or: that an organization with a 3-person team covering function X would be largely indifferent between hiring the first and second choices (and leaving slot 3 unfilled) vs. hiring the 3rd/4th/5th, even keeping cost and other factors constant. While it’s plausible there are roles and applicant pools for which this tradeoff would be worth making, I would not presume it applies to most roles and pools.
The respondents likely knew who their first choices were and what they had done on the job, while the backup choices would be more of an abstraction.
As @Brad West🔸 notes, there are some psychological reasons for respondents to overestimate the importance of the “best” choice.
Even if we knew how much better the best candidate was than the second-best candidate, there’s still measurement error in the hiring process. That comes both from noise in the hiring competition itself, and from imperfect fit between the hiring process and true candidate quality. Respondents may overestimate how reliable their hiring processes were—if they re-ran the process with the same applicants but different work trial questions and other “random” factors, what are the odds that the same person would have been selected?
At least as of 2010, the standard error of difference for a section of the SAT was about 40-45 points (out of a range of 200-800). So despite having very high reliability (at or over .9) thanks to tried-and-true design and a large number of questions, an administration of the SAT has enough measurement error that it likely won’t identify which student in a medium-to-large group of good students is the single best at SAT critical reading tasks (much less which student is best at critical reading itself!).
Although organizations hiring have some advantages over the SAT test writers, it seems to me that they also have some real disadvantages (e.g., fewer scored items, subjective scoring, a need to reject most candidates after only a few items have been scored).
On the whole, I’m not convinced that the reliability of most hiring processes is as high as the reliability of the SAT. And if re-running the hiring process five times might get us 3-4 different top picks, that would make me skeptical of a proposition that the #1 candidate on a particular run of a hiring process was likely to be head and shoulders above the #2 candidate on that run, or even the #5 candidate in a sufficiently large pool.
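To give a rough sense of this, here is a minimal simulation sketch; the pool size, the reliability figure, and everything else are assumptions I am making up rather than figures from the survey or the SAT data. It draws candidates’ true quality, adds measurement noise consistent with a given reliability, and checks how often the same candidate comes out on top across two independent runs of the process.

```python
# Illustrative Monte Carlo: how often does the same candidate top two noisy
# evaluations of the same pool? All parameters are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_candidates = 30   # assumed applicant pool size
reliability = 0.8   # assumed share of observed-score variance due to true quality
n_trials = 10_000

# Classical test theory: reliability = true-score variance / observed variance,
# so with true quality ~ N(0, 1) the noise SD is sqrt(1/reliability - 1).
# (Relatedly, the SE of a score difference is roughly SD * sqrt(2 * (1 - reliability)).)
noise_sd = np.sqrt(1 / reliability - 1)

same_winner = 0
for _ in range(n_trials):
    true_quality = rng.normal(size=n_candidates)
    run1 = true_quality + rng.normal(scale=noise_sd, size=n_candidates)
    run2 = true_quality + rng.normal(scale=noise_sd, size=n_candidates)
    same_winner += run1.argmax() == run2.argmax()

print(f"P(same top pick on two independent runs): {same_winner / n_trials:.2f}")
```

With numbers like these, the same candidate tops both runs well short of 100% of the time, which is the sense in which the #1 pick on any particular run may not be meaningfully better than #2, or even #5, in a large pool.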
Also, if we want more people crossposting good content, we will want to be cognizant that the Forum reader was not the original intended audience for crossposted material. So the cost/benefit analysis could be different for the original audience (although I suspect it was not materially different here).
Sure, but it’s still broadly “similar” in a way donations are not. Even if going vegan is 5x easier for Bill Gates, that is much more similar than the difference in difficulty of making donations.
I assume each of the AMA, APA, and IEEE has substantial barriers to entry (e.g., professional education and/or licensing) that serve the screening-for-investment function Julia describes.
I also would not assume these organizations do a good job at representing their populations—e.g., about 75 percent of US physicians aren’t members of the AMA, which isn’t a big vote of confidence.
Does anyone know roughly what this would cost, either financially or in terms of what the people involved would be doing counterfactually?
(Obviously the amount would depend on the precise scope of work, but given that funding seems to be the bottleneck, throwing a range out there might sharpen the discussion.)
[vote comment, very shallow take waiting for train] Moderate depopulation isn’t bad, at least not right now. The limiting factor on climate change is more likely political will / willingness to make financial sacrifices, not a lack of people in 25-30 years to propose and execute new ideas. Lowering the percentage of the population that consists of productive workers (and thus increasing pressure on those workers) seems likely to increase resistance to making the economic sacrifices necessary for addressing climate change.
On the flip side, I would assume that the persistence of reduced fertility rates is the result of the continuance of the factors that led to them in the first place, rather than an irreversible consequence of rates dipping below replacement value. Therefore, it seems this issue can be deferred until we see improvement on climate.
It’s understandable for a donor to have that concern. However, I submit that this goes both ways—it’s also reasonable for smaller donors to be concerned that the big donors will adjust their own funding levels to account for smaller donations, reducing the big donors’ incentives to donate. It’s not obvious to me which of these concerns predominates, although my starting assumption is that the big donors are more capable of adjusting than a large number of smaller donors.
Much electronic ink has been spilled over the need for more diversification of funding control. Given that, I’d be hesitant to endorse anything that gives even more influence over funding levels to the entities that already have a lot of it. Unless paired with something else, I worry that embracing matching campaigns would worsen the problem of funding influence being too concentrated.
A Portfolio Approach: We should consider formally splitting funding into distinct portfolios. For example, a fund might allocate 70% of its resources to proven, highly measurable interventions, while dedicating 30% to a “high-risk, high-reward” fund for systemic change. This would allow us to continue supporting reliable interventions while also creating space for potentially transformative work.
EAs may only control a small fraction of resources in most cause areas (depending on exactly how one defines the cause area). If the portfolio approach is correct, I submit that the hypothetical fund should care about improving the total allocation of resources between the two approaches, not making its own allocation match what would be ideal for the charitable sector as a whole to do. Unless the charitable sector already has the balance between portfolios in a cause area approximately correct, it seems that a fund whose objective was to improve the overall sector balance between the two approaches would be close to all-in on one or the other.[1]
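To make the all-in point concrete, here is a stylized example with invented numbers: suppose the ideal sector-wide split is 70/30 between proven and systemic-change work, the rest of the sector allocates 95/5, and the hypothetical fund controls 10% of total resources. Even if the fund went all-in on systemic change, the sector-wide share would only reach

$$0.9 \times 5\% + 0.1 \times 100\% = 14.5\%,$$

still well short of the assumed 30% ideal, so any dollar the fund kept in the proven portfolio would move the overall balance further from that ideal.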
It would be fair to apply a downward adjustment before converting into an externality. At the same time, a worker’s salary is only a portion of their productivity, the correlation between productivity and wages may not be particularly strong, and some of the costs nominally borne by the worker end up being borne by society (via lost tax revenue, increased demand on need-based social service programs, etc.).
(As an aside, the article is paywalled, but I’d need more convincing on the $220B figure. I quickly saw a study in the Netherlands that suggested a cost there of 2.56 billion euros [or roughly 60 billion if you scaled to the size of the US economy]. Not suggesting that is the right figure either, but this strikes me as a case in which the methodological assumptions could make a big difference.)
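For what it’s worth, the bracketed figure is just a GDP-proportional extrapolation. Using round GDP numbers that are my own approximations (roughly \$1.1T for the Netherlands and \$26T for the US), it works out to something like

$$2.56\text{B} \times \frac{26}{1.1} \approx 60\text{B},$$

and the methodological assumptions behind the underlying per-country estimate matter far more than this scaling step.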