Researcher (on bio) at FHI
I’ve been accused of many things in my time, but inarticulate is a new one. ;)
I strongly agree with all this. Another downside I’ve felt from this exercise is it feels like I’ve been dragged into a community ritual I’m not really a fan of, where my options are a) tacit support (even if it is just deleting the email where I got the codes with a flicker of irritation) or b) an ostentatious and disproportionate show of disapproval.

I generally think EA- and longtermist-land could benefit from more ‘professional distance’: that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life—with at-best-murky EV to both themselves and the ‘cause’. I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the ‘big red button’ at the top of the site might be similar to how many Christians react to some of their brethren ‘reenacting’ the crucifixion themselves.

But hey, I’m (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here—to each their own. Alas this restraint is not universal, as this is becoming a (capital C) Community ritual, where ‘success’ or ‘failure’ is taken to be important (at least by some) not only for those who do or don’t hit the button, but for corporate praxis generally.
As someone who is already ambivalent, it rankles that my inaction will be taken as tacit support for some after-action paean to some sticky-back-plastic icon of ‘who we are as a Community’. Yet although ‘protesting’ by “nuking” [sic] LW has some benefits of a) probably not getting opted in again and b) maybe making this less likely to be an ongoing ‘thing’, it has some downsides. I’m less worried about ‘losing rep’ (I have more than enough of both e-clout and ego to make counter-signalling an attractive proposition; “nuking” LW in a fit of ‘take this and shove it’ pique is pretty on-brand for me), but more that some people take this (very) seriously and would be sad if this self-imposed risk were realised. Even though I disagree (and think this is borderline infantile), protesting in this way feels a bit like trying to refute a child’s belief their beloved toy is sapient by destroying it in front of them. I guess we can all be thankful ‘writing asperous forum comments’ provides a means of de-escalation.
Thanks, but I’ve already seen them. Presuming the implication here is something like “Given these developments, don’t you think you should walk back what you originally said?”, the answer is “Not really, no”: subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.

(Apologies if I mistake what you are trying to say here. If it helps generally, I expect—per my parent comment—to continue to affirm what I’ve said before however the morass of commentary elsewhere on this post shakes out.)
For the avoidance of doubt, I remain entirely comfortable with the position expressed in my comment: I wholeheartedly and emphatically stand behind everything I said. I am cheerfully reconciled to the prospect some of those replying to or reading my earlier comment judge me adversely for it—I invite these folks to take my endorsement here as reinforcing whatever negative impressions they formed from what I said there. The only thing I am uncomfortable with is that someone felt they had to be anonymous to criticise something I wrote. I hope the measure I mete out to others makes it clear I am happy for similar to be meted out to me in turn. I also hope reasonable folks like the anonymous commenter are encouraged to be forthright when they think I err—this is something I would be generally grateful to them for, regardless of whether I agree with their admonishment in a particular instance. I regret to whatever degree my behaviour has led others to doubt this is the case.
[Own views] I’m not sure ‘enjoy’ is the right word, but I also noticed the various attempts to patronize Hoskin. This ranges from the straightforward “I’m sure once you know more about your own subject you’ll discover I am right”:
I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD
To ‘well-meaning suggestions’ alongside the implication that her criticism arises from some emotional reaction rather than her strong and adverse judgement of its merit:
I’m a little baffled by the emotional intensity here but I’d suggest approaching this as an opportunity to learn about a new neuroimaging method, literally pioneered by your alma mater. :)
[Adding a smiley after something insulting or patronizing doesn’t magically make you the ‘nice guy’ in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I’m sure once you reflect on what I said and grow up a bit you’ll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you’ll make us proud! :)]
Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.
I understand it may feel significant that you have published work using fMRI, and that you hold a master’s degree in neuroscience.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field.
I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn’t write similarly in response to criticism (however strident) from someone more junior in my own field. What makes this kinda amusing, though, is that although the OP is trying to set himself up as some guru, dismissing his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we’d take “Post-graduate degree, current doctoral student, and relevant publication record” over “Basically nothing I could put on an academic CV, but I’ve written loads of stuff about my grand theory of neuroscience.” In that context (plus the genders of the participants) I guess you could call it ‘mansplaining’.
[Predictable disclaimers, although in my defence, I’ve been banging this drum long before I had (or anticipated having) a conflict of interest.] I also find the reluctance to wholeheartedly endorse the ‘econ-101’ story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:
EA-land tends to be sympathetic to using ‘econ-101’ accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for first-pass analysis, and that we’d need persuading to depart greatly from them.
Considerations for why ‘econ-101’ won’t (significantly) apply here don’t seem to extend to closely analogous cases: we don’t fret (and typically argue against others fretting) about other charities paying their staff too much; we don’t think (cf. the reversal test) that google could improve its human capital by cutting pay and keeping the ‘truly committed googlers’; we are generally sympathetic to public servants getting paid more if they add much more social value (and don’t presume these people are insensitive to compensation beyond some limit); we prefer simple market mechanisms over more elaborate tacit transfer systems (e.g. just give people money); etc. etc.
The precise situation makes the ‘econ-101’ intervention particularly appetising: if you value labour much more than the current price, and you are sitting atop an ungodly pile of lucre so vast you earnestly worry about how you can spend big enough chunks of it at once, ‘try throwing money at your long-standing labour shortages’ seems all the more promising.
Insofar as it goes, the observed track record looks pretty supportive of the econ-101 story—besides all the points Ryan mentions, compare “price suppression results in shortages” to the years-long (and still going strong) record of orgs lamenting they can’t get the staff.
Perhaps the underlying story is that, as EA-land is generally on the same team, one might hope we can do better than taking our cue from ‘econ-101’, given the typically adversarial/competitive dynamics it presumes between firms, and between employee and employer. I think this hope is forlorn: EA-land might be full of aspiring moral saints, but aspiring moral saints remain approximate to homo economicus. So the usual stories about the general benefits of economic efficiency prove hard to better, and (play-pumps style) attempts to try feel apt to backfire (1, 2, 3, 4 - ad nauseam). However, although I don’t think ‘PR concerns’ should guide behaviour (if X really is better than ¬X, bearing the costs of people reasonably—if mistakenly—thinking less of you for doing X is typically better than strategising to hide this disagreement), many things look bad because they are bad.

In the good old days, I realised I was behind on my GWWC pledge so used some of my holiday to volunteer for a week of night-shifts as a junior doctor on a cancer ward. If in the future my ‘EA praxis’ is tantamount to splashing billionaire largess on a lifestyle for myself of comfort and affluence scarcely conceivable to my erstwhile beneficiaries, spending my days on intangible labour in well-appointed offices located among the richest places heretofore observed in human history, an outside observer may wonder what went wrong. I doubt they would be persuaded my defence is any better than obscene: “Not all heroes wear capes; some nobly spend thousands on yuppie accoutrements they deem strictly necessary for them to do the most good!” Nor would they be moved by my remorse: self-effacing acknowledgement is not expiation, nor complaisance to my own vices atonement. I still think jacking up pay may be good policy, but personally, perhaps I should doubt myself too.
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don’t think the top 2 direct workers account for the majority of the value of the labour).
Although I think this stylized fact remains interesting, I wonder if there’s an ex-ante/ex-post issue lurking here. You get to see the endpoint with money a lot earlier than direct work contributions, and there’s probably a lot of lottery-esque dynamics. I’d guess these as corollaries:

First, the ex ante ‘expected $ raised’ from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and Entrepreneur Edward may have similar expected lifetime income, but Edward has much higher variance—ditto one of entrepreneur Edward and Edith may swamp the other if one (but not the other) hits the jackpot.

Second, part of the reason direct work contributions look more even is this is largely an ex ante estimate—a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. if work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything else everyone else is doing.

Third, I also wonder whether raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion-$ company hasn’t done all the work themselves, and it’s facially plausible some shapley/whatever credit sharing between these founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts.

Maybe not: perhaps the rewards in terms of ‘getting things off the ground’, taking lots of risk, etc. do mean the tech founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion’s share of all contributions for global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially ‘footnotes to Famine, Affluence, and Morality’; or AI work to those who toiled in the vineyards over a decade ago, even if now their work is a much smaller proportion of the contemporary aggregate contribution.
I’d guess the story might be a) ‘XR primacy’ (~~ that x-risk reduction has far bigger bang for one’s buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements are likely good, others likely bad, so the value of generally ‘buying the index’ of technological development (as I take Progress Studies to be keen on) is uncertain.

“XR primacy”

Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction—you can cover much more ground in expectation if you make sure you’re not headed into a crash first. This typically (but not necessarily, cf.) implies longtermism. ‘Global catastrophic risk’, as a longtermist term of art, plausibly excludes the vast majority of things common sense would call ‘global catastrophes’. E.g.:
[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)
My impression is a ‘century more poverty’ probably isn’t a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn’t globally destabilising to humanity or human civilisation. Even more so if the matter is of a somewhat-greater versus somewhat-lower rate in its elimination. This makes its continued existence no less an outrage to the human condition. But, weighed across the scales against threats to humankind’s entire future, it becomes a lower priority. Insofar as these things are traded off (which seems to be implicit in any prioritisation given both compete for resources, whether or not there’s any direct cross-purposes in activity), the currency of XR reduction has much greater value.

Per discussion, there are a variety of ways the story sketched above could be wrong:
Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common sense global catastrophes (inter alia) versus XR should be higher.
XR is either very low, or intractable, so XR reduction isn’t a good buy even on the exchange rate XR views endorse.
Perhaps the promise of the future could be lost with less of a bang than a whimper. Perhaps prolonged periods of economic or technological stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.
I don’t see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and the envelope of mitigation to be substantial/non-Pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, 1/thousands+ (e.g.) is commonplace (and commonsensical) when stakes are high enough. It’s not clear how much of a strike it is that Pascalian counter-examples are constructible from the resources of a given view when, although the view wouldn’t endorse them, it doesn’t have a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er’s work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).

Buying the technological progress index?

Granting the story sketched above, there’s not a straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems a lot to temper the fairly unalloyed enthusiasm around technological progress I take as the typical attitude in PS-land.
There’s obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It’d also generally be surprising if what is best for XR were also best for ‘progress’. (cf.)
The recent track record doesn’t seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and prospects for ‘derisking’ these downsides remain remote. It’s hard to assess the true ex ante probability of a strategic nuclear exchange during the cold war, or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies reason for concern about their rapid development in particular, and about exuberant technological development which may generate further dangers in general.
Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I’d guess more of it is more substantive, around the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less ‘generalized techno-optimism’.

But I’d guess the majority of the action is around the ‘modal XR account’ of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. “Technocircumspection” seems a fairly sound corollary from this set of controversial conjuncts.
[Own views etc.] I’m unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of “We’re all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees”, I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious.

In the extreme case, it’s obviously unacceptable for Org X not to hire candidate A (their best applicant) because they believe it’s better they stay at Org Y. Not only (per the parent) is A probably a better judge of where they are best placed, but Org X screws over both itself (they now appoint someone they think is not quite as good) and A (who doesn’t get the job they want), for the benefit of Org Y. These sorts of oligopsonistic machinations are at best a breach of various fiduciary duties (e.g. Org X to their donors to use their money to get the best staff rather than make opaque de facto transfer contributions of labour to another organisation), and at least colourably illegal in many jurisdictions due to labour law around anti-trust, non-discrimination, etc. (see)

Similar sentiments apply to less extreme examples, such as not proactively ‘poaching’ (the linked case above was about alleged “no cold call” agreements). The typical story for why these practices are disliked is a mix of econ efficiency arguments (e.g. labour market liquidity, competition over conditions is a mechanism for higher performing staff to match into higher performing orgs) and worker welfare ones (e.g. the net result typically disadvantages workers by suppressing their pay and conditions, and reducing their ability to change to roles they prefer).

I think these rationales apply roughly as well to EA-land as anywhere-else-land. Orgs should accept that staff may occasionally leave to other orgs for a variety of reasons. If they find that they consistently lose out for familiar reasons, they should either get better or accept the consequences of remaining worse.

Although, for the avoidance of doubt, I think it is wholly acceptable for people to switch EA jobs for wholly ‘non-EA’ reasons—e.g. “Yeah, I expect I’d do less good at Org X than Org Y, but Org X will pay me 20% more and I want a higher standard of living.” Moral sainthood is scarce as well as precious. It is unrealistic to expect all candidates to be saintly in this sense, and mutual pretence to the contrary is unhelpful.

If anything, ‘no poaching’ (etc.) practices are even worse in these cases than under the more saintly ‘moving so I can do even more good!’ rationale. In the latter case, Orgs are merely being immodest in presuming to know better than applicants what their best opportunity to contribute is; in the former, Orgs conspire to make their employees’ lives worse than they could otherwise be.
Maybe not ‘insight’, but re. ‘accuracy’ this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I’ve seen other forecasters do similar things on occasion. (The general practice of ‘breaking down problems’ to evaluate sub-issues is recommended in Superforecasting, IIRC.)

I guess the story for why this works in geopolitical forecasting is folks tend to overestimate the chance ‘something happens’ and tend to be underdamped in increasing the likelihood of something based on suggestive antecedents (e.g. the chance of a war given an altercation, etc.). So attending to “Even if A, for it to lead to D one should attend to P(B|A), P(C|B), etc.” tends to lead to downwards corrections. Naturally, you can mess this up, although it’s not obvious you are at greater risk arranging your decomposed considerations conjunctively rather than disjunctively: “All of A-E must be true for P to be true” ~also means “if any of ¬A-¬E are true, then ¬P”. In natural language and heuristics, I can imagine “Here are several different paths to P, and each of these seems not-too-improbable, so P must be highly likely” could also lead one astray.
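A minimal sketch of the conjunctive and disjunctive arithmetic above, with purely made-up step probabilities (none of these numbers come from an actual forecast):

```python
# Conjunctive decomposition: P(D) is the product of each step's conditional
# probability, so even individually-plausible steps multiply down to a modest
# overall forecast.
steps = {
    "A: altercation occurs": 0.5,
    "B: escalation given A": 0.6,
    "C: mobilisation given A, B": 0.5,
    "D: war given A, B, C": 0.4,
}
p_conjunctive = 1.0
for p in steps.values():
    p_conjunctive *= p
print(f"P(D) via the conjunctive path: {p_conjunctive:.2f}")  # 0.06

# Disjunctive failure mode: several 'not-too-improbable' independent paths to P
# only union to 1 - prod(1 - p_i), not the sum of the p_i.
paths = [0.2, 0.15, 0.1]
p_none = 1.0
for p in paths:
    p_none *= (1 - p)
print(f"P(P) via disjunctive paths: {1 - p_none:.2f}")  # ~0.39, not 0.45
```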
Similar to Ozzie, I would guess the ‘over-qualified’ hesitation often has less to do with, “I fear I would be under-utilised and become disinterested if I took a more junior role, and thus do less than the most good I could”, but a more straightforward, “Roles which are junior, have unclear progression and don’t look amazing on my CV if I move on aren’t as good for my career as other opportunities available to me.”
This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often a substantial disincentive:
In terms of traditional/typical kudos/cred/whatever, getting in early on something which is going up like a rocket offers a great return on invested reputational or human capital. It is a riskier return, though: by analogy I’d guess being “employee #10” for some start-ups is much better than working at google, but I’d guess for the median start-up it is worse.
Many EA orgs have been around for a few years now, and their track-record so far might incline one against expecting rocketing success by conventional and legible metrics. (Not least, many of them are targeting a very different sort of success than a tech enterprise, consulting firm, hedge fund, etc. etc.)
Junior positions at conventionally shiny high-status things have good career capital. I’d guess my stint as a junior doctor ‘looks good’ on my CV even when applying to roles with ~nothing to do with clinical practice. Ditto stuff like ex-googler, ex-management consultant, ?ex-military officer, etc. “Ex-junior-staffer-at-smallish-nonprofit” usually won’t carry the same cachet.
As careers have a lot of cumulative/progressive characteristics, ‘sideways’ moves earlier on may have disproportionate impact on one’s trajectory. E.g. ‘longtermist careerists’ might want very outsized compensation for such a ‘tour’ to make up for the compounded loss of earnings (in expectation) from pausing their climb up various ladders.
None of this means ‘EA jobs’ are only for suckers. There are a lot of upsides even from a ‘pure careerism’ perspective (especially for particular career plans), and obvious pluses for folks who value the mission/impact too. But insofar as folks aren’t perfectly noble, and care somewhat about the former as well as the latter (ditto other things like lifestyle, pay, etc. etc.) these disincentives are likely to be stronger pushes for more ‘overqualified’ folks. And insofar as EA orgs would like to recruit more ‘overqualified’ folks for their positions (despite, as I understand it, their job openings being broadly oversubscribed with willing and able—but perhaps not ‘overqualified’—applicants), I’d guess it’s fairly heavy-going, as these disincentives are hard to ‘fix’.
I understand the nationalism example isn’t meant to be analogous, but my impression is this structural objection only really applies when our situation is analogous. If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively ‘moved on’ from these, contemporary introductions to the field shouldn’t feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.

Yet, however you slice it, EA as it stands now hasn’t by-and-large ‘moved on’ to be ‘basically longtermism’, where its interest in (e.g.) global health is clearly atavistic. I’d be willing to go to bat for substantial slants to longtermism, as (I aver) its over-representation amongst the more highly engaged and the disproportionate migration of folks to longtermism from other areas warrant claims that an epistocratic weighting of consensus would favour longtermism over anything else. But even this has limits, which ‘greatly favouring longtermism over everything else’ exceeds. How you choose to frame an introduction is up for grabs, and I don’t think ‘the big three’ is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.
Per others: This selection isn’t really ‘leans towards a focus on longtermism’, but rather ‘almost exclusively focuses on longtermism’: roughly any ‘object level’ cause which isn’t longtermism gets a passing mention, whilst longtermism is the subject of 3⁄10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) ‘lean towards’ longtermism either in terms of subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported ‘introduction to EA as a whole’: EA isn’t all-but-exclusively longtermist in either corporate thought or deed.

Were I a more suspicious sort, I’d also find the ‘impartial’ rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:
i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our ‘top problems’), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our ‘episode 0’, as well as the outro to Holden’s episode.
The first episode with Karnofsky also covers longtermism and AI—at least as much as global health and animals. Yet this didn’t stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto that the instance of “entrepreneurship, independent thinking, and general creativity” one wanted to highlight just so happens to be a longtermist intervention (versus, e.g., this).
I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.
A less important motivation/mechanism is that probabilities/ratios (instead of odds) are bounded above by one. For rare events ‘doubling the probability’ versus ‘doubling the odds’ gets basically the same answer, but not so for more common events. Loosely, flipping a coin three times ‘trebles’ my risk of observing it landing tails, but the probability isn’t 1.5. (cf.)
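To spell out the coin arithmetic (my own working, not in the original comment): a single flip has P(tail) = 1/2, i.e. odds of 1:1. Over three flips,

$$P(\text{at least one tail}) = 1 - \left(\tfrac{1}{2}\right)^{3} = \tfrac{7}{8},$$

i.e. odds of 7:1, whereas naively ‘trebling’ the probability would give 3 × 1/2 = 3/2 > 1. By contrast, for a rare event with p = 0.001 (odds ≈ 0.001), trebling the odds gives a probability of ≈ 0.003, essentially the same answer as trebling the probability directly.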
Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). [my emphasis]
If you used the 80% definition instead of 20%, then the ‘4x’ risk factor implied by a 60% additional chance (with a 20% base rate) would give instead an additional 240% chance.

[Of interest, 20% to 38% absolute likelihood would correspond to an odds ratio of ~2.5, in the ballpark of the 3-4x risk factors discussed before. So maybe extrapolating extreme event ratios to less-extreme event ratios can do okay if you keep them in odds form. The underlying story might have something to do with logistic distributions closely resembling normal distributions (save at the tails), so thinking about shifting a normal distribution across the x axis so that (non-linearly) more or less of it lies over a threshold loosely resembles adding increments to log-odds (equivalent to multiplying odds by a constant multiple), giving (non-linear) changes when traversing a logistic CDF.

But it still breaks down when extrapolating very large ORs from very rare events. Perhaps the underlying story here may have something to do with higher kurtosis: ‘>2SD events’ are only (I think) ~5x more likely than >3SD events for logistic distributions, versus ~20x in normal-distribution land. So large shifts in likelihood of rare(r) events would imply large logistic-land shifts (which dramatically change the whole distribution, e.g. an OR of 10 takes evens to >90%) that are much more modest in normal-land (e.g. moving up an SD gives an OR > 10 for previously 3SD events, but ~2 for previously ‘above average’ ones).]
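A minimal sketch of the odds bookkeeping above (my own arithmetic; the probabilities are just those quoted in the parent comments, and the helper functions are illustrative):

```python
def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1 + o)

base = 0.20  # the 20% base-rate definition
implied_or = odds(0.38) / odds(base)
print(f"OR implied by moving 20% -> 38%: {implied_or:.2f}")  # ~2.45, near the 3-4x ballpark

# Multiplying the probability itself overshoots for common events...
print(f"4 x the 80% definition, as a probability: {4 * 0.80:.2f}")             # 3.20 (impossible)
# ...whereas applying the same factor to the odds stays below 1.
print(f"4 x the 80% definition, applied as odds: {prob(4 * odds(0.80)):.2f}")  # 0.94
```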
Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of ‘X conclusion’s out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there’s a straightforward dominance argument if the ‘OC’ and ‘RC’ are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a ‘large increase in suffering for one individual’ can be outweighed by a large enough number of smaller decreases in suffering for others—aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.

There’s also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one ‘side’ that turns the screws less than the counterexample for the other one. Most people find the ‘very repugnant conclusion’ (where not only Z > A, but ‘large enough Z and some arbitrary number having awful lives > A’) even more costly than the ‘standard’ RC. So using the more or less costly variant on one side of the scales may alter intuitive responses.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in that one is trying to explore the worst case that can be constructed which the principle would avoid, whilst the other the worst case that can be constructed with its inclusion.

It seems the main engine of RC-like examples is the aggregation—it feels like one is being nickel-and-dimed in taking a lot of very small things to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness—most people think this is priced too cheaply, particularly at extremes. The typical worry about the (absolute) negative view itself is that it fails to price happiness at all—yet often we’re inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of ‘upside’.

So with this procedure the putative counter-example to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but people are more or less happy), their addition clearly recommends against the vRC: the great suffering isn’t outweighed by the large amounts of relatively trivial happiness (but it would be on the classical view).

Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC—by my lights, far more intuitively toxic than the already costly vRC (owed to Carl Shulman). Suppose A is a vast but trivially-imperfect utopia—trillions (or googolplexes, or TREE(TREE(3))) of lives of all-but-perfect bliss, but for each an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: the negative view only considers the pinpricks in this utopia, and sufficiently huge magnitudes of these can be worse than awful lives (the classical view, which wouldn’t discount all the upside in A, would not).
In general, this negative view can countenance any amount of awful suffering if this is the price to pay to abolish a near-utopia of sufficient size.

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and—depending how you litigate—susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there’s never an option of adding positive welfare lives, so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative welfare lives) to adding the other half of A (very positive lives).)

I take this to make absolute negative utilitarianism (similar to average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, so folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one is really interested in is cases of severe rather than trivial suffering, so this rather than suffering in general should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each.)

But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the (v/)RC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for ‘like-for-like’: e.g. perhaps a ‘negative plus a lexical threshold’ view is more palatable than classical util, yet CLU would fare even better than either.
[Mea culpa re. messing up the formatting again]

1) I don’t closely follow the current state of play in terms of ‘shorttermist’ evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) “Why aren’t you factoring in impacts on climate change for these interventions?” would be some mix of:

a) “We have looked at this, and we’re confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc.”

b) “We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn’t vary appreciably between interventions) so we get higher yield investigating other things.”

c) “We are explicit that our analysis is predicated on moral (e.g. “human lives are so much more important than animal lives that any impact on the latter is ~moot”) or epistemic (e.g. some ‘common sense anti-cluelessness’ position) claims which either we corporately endorse and/or our audience typically endorses.”

Perhaps such hopes would be generally disappointed.

2) Similar to above, I don’t object to (re. animals) positions like “Our view is this consideration isn’t a concern as X”, or “Given this consideration, we target Y rather than Z”, or “Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation.”

But I at least used to see folks appeal to motivations which obviate (inverse/) ‘logic of the larder’ issues, particularly re. diet change (“Sure, it’s actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we’re aiming for”). Yet this overriding motivation typically only ‘came up’ in the context of this discussion, and corollary questions like:

* “Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?”
* “Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?”
* “Shouldn’t we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?”

seemed seldom asked. Naturally I hope this is a relic of my perhaps jaundiced memory.
FWIW, I don’t think ‘risks’ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. “Give Directly reduces extinction risk by reducing poverty, a known cause of conflict”); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I’m rehashing unhelpfully.)

The usual cluelessness scenarios are more about the possibility that there may be a powerful lever for impacting the future, with your intended intervention pulling it in the wrong direction (rather than a ‘confirmed discovery’). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there’s a natural response of ‘shouldn’t we try something which targets this on purpose?’; if it were 0, we wouldn’t attend to it further; if it were −10, you wouldn’t give to (now net EV = “-9”) GiveDirectly. The right response where all three scenarios are credible (plus all the intermediates) but you’re unsure which one you’re in isn’t intuitively obvious (at least to me). Even if (like me) you’re sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others and just ‘run the numbers’ and take the best EV), this approach seems to ignore this wide variance, which looks worthy of further attention.

The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something ‘extra’ to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms ‘cancel out’), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly and conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better. This can be put in plain(er) English (although familiar-to-EA jargon like ‘EV’ may remain).

Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our head an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn’t even remotely approximate to the objects we manipulate in standard models of the same. Or (owed to Andreas) even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy may get better results than applying the orthodox approach in these cases of deep uncertainty. Insofar as folks are more sympathetic to this, they would not want to be deflationary, and would perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don’t have a good ‘answer’ yet for what to do in these situations, so may hesitate to give the ‘accept there’s uncertainty but don’t be paralysed by it’ advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.
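A toy numerical sketch of the ‘zero-mean but high-variance’ point and the value-of-information framing above (all numbers are invented for illustration; units are multiples of the originally-estimated benefit):

```python
import numpy as np

rng = np.random.default_rng(0)
direct_benefit = 1.0                                 # estimated good of the donation
conflict_effect = rng.normal(0.0, 10.0, 1_000_000)   # mean-zero term, SD 10x the benefit
total = direct_benefit + conflict_effect

# Acting on the current best guess: the high-variance term 'cancels out' in expectation.
ev_act_now = total.mean()                            # ~1.0

# If the conflict term could be resolved first, you would only donate when the total is positive.
ev_with_info = np.maximum(total, 0.0).mean()         # ~4.5

print(f"EV acting on the best guess: {ev_act_now:.2f}")
print(f"EV with (perfect) further information: {ev_with_info:.2f}")
print(f"Value of that information: {ev_with_info - ev_act_now:.2f}")
```

Whether chasing that information is worthwhile then depends (per the OP) on how resilient the uncertainty actually is: how much of that gap an hour, or a decade, of investigation would really close.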
Belatedly:

I read the stakes here differently to you. I don’t think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to ‘everything which isn’t longtermism’. At least, that isn’t my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts than as an important subject to apply them to. The AMF discussions around cluelessness in the OP are intended as a toy example—if you like, deliberating purely on “is it good or bad to give to AMF versus this particular alternative?” instead of “Out of all options, should it be AMF?” Parallel to you, although I do think (per OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to: if what really matters is how the deep future goes, and the axes of this accessible at present are things like x-risk, interventions which are only tangentially related to these are so unlikely to be best that they can be ruled out ~immediately.
So if that isn’t a main motivation, what is? Perhaps something like this:

1) How to manage deep uncertainty over the long-run ramifications of one’s decisions is a challenge across EA-land—particularly acute for longtermists, but also elsewhere: most would care about risks that in the medium term a charitable intervention could prove counter-productive. In most cases, these mechanisms for something to ‘backfire’ are fairly trivial, but how seriously credible ones should be investigated is up for grabs.

Although “just be indifferent if it is hard to figure out” is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:

a) People not tracking when the grounds of appeal for an intervention have changed. Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS, particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”. Even if so, it wouldn’t follow that the best thing to do would be to be as carnivorous as possible. There are various lines of response; one is to say that the key objective of animal advocacy is to encourage greater concern about animal welfare, so that this can ramify through to benefits in the medium term. Yet if this is the rationale, metrics of ‘animal suffering averted per $’ remain prominent despite having minimal relevance to it. If the aim of the game is attitude change, things like shelters and companion animals start looking a lot more credible again over changes in factory-farmed welfare, in virtue of their greater salience.

b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications of population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.). Although very few folks rely on these when considering interventions like AMF (but cf.), they are often relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility). Discussions here are typically marred by proponents either completely ignoring considerations on the ‘other side’ of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. “Considerations X, Y, and Z all tentatively support more population growth; admittedly there’s A, B, C, but we do not cover those in the interests of time—yet, if we had, they probably would tentatively oppose more population growth”).

2) Given my fairly deflationary OP, I don’t think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I’m right, I don’t think I’m obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability, or other features may be what should be used in decision-making (including deciding when we should make decisions versus investigate further). If so, this reasoning looks like a fairly distinct species which could warrant its own label.
I may be missing the thread, but the ‘ignoring’ I’d have in mind for resilient cluelessness would be straight-ticket precision, which shouldn’t be intransitive (or have issues with the principle of indifference).

E.g. say I’m sure I can make no progress on (e.g.) the moral weight of chickens versus humans in moral calculation—maybe I’m confident there’s no fact of the matter, or interpretation of the empirical basis is beyond our capabilities forevermore, or whatever else. Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I’m resiliently uncertain.