True, some common abbreviations are standard. But my remarks, and probably Dobroslawa’s, concern mostly oral conversations: that’s the context where non-native speakers are at a huge disadvantage, even if they are proficient.
I kind of enjoy reading unusual expressions or slang, because it gives me new data and enough time to update on it; if someone uses the expression in a conversation later on, I’ll have a better chance of understanding it. Perhaps that’s precisely the problem for skilled non-native speakers: we’re usually much better “trained” in the written language than in the spoken one, so we’re often ignorant of some of their differences. Thus, writing with “slang, abbreviations, unusual collocations” may actually have a net positive effect.
It looks like a straw man to me. It conflates (A) a question about evaluation (is Suboptimal Earth axiologically better than current Earth?) with (B) a question about decision and action (would it be right to kill everyone for the sake of Suboptimal Earth?), and it omits that:
(A) a utilitarian doesn’t classify scenarios categorically (“this is good, that is bad”), but ranks them through an ordering over possible worlds, such as: (1) current population plus everyone alive in Suboptimal Earth is better than (2) the Suboptimal Earth scenario minus the current population, which is better than (3) current Earth...
(B) a utilitarian decides according to ex ante expected utility, so they’d have to ask “what are the odds that Suboptimal Earth will occur, given my decision?”
Of course, there are huge problems with such reasoning; a more realistic Suboptimal Earth would get close to a Pascal’s Mugging: imagine, for example, that a super AGI asked you to press a red button, freeing it to turn the whole galaxy into an eternal utopian hedonist simulation.
As someone who has been “fighting” utilitarianism for a long time, I can say that the best objections against it have been produced by utilitarians themselves.
Thanks for this post. However, HoH still seems ambiguous to me, particularly when we take uncertainty seriously. For example, what kind of comparison is happening in “T is the most influential time ever”—and, consequently, what kind of probability function does one use to model credence in it?
1) Weak-HoH: “the sentence ‘t is hingey’ is more likely to be true for now (or for the next n years) than for any other similar time t in the future”
If you interpret hingey events as produced by stochastic processes modeled by an exponential distribution, then weak-HoH has a trivial explanation.
If the risk of rain is p = .03 per day, then today is the most likely candidate for the next rainy day, because the probability of its being tomorrow is (.97 × .03), i.e., the probability of no rain today multiplied by the probability of rain tomorrow, and so on.
So, even though it’s very unlikely that we’ll go extinct in the next year, if I had to bet on an exact year, 2020 is a priori more likely to be it than 2021: we can only die once. Something similar holds for AGI: though I don’t think it’s going to happen in the next decade, this century is more likely to be The One than the next century, but not more likely than the next 900 years combined, for example.
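To make the arithmetic explicit, here’s a minimal sketch of the constant-hazard (geometric) model I have in mind; the per-year hazard of 0.1% is purely illustrative, not an actual extinction estimate:

```python
# Minimal sketch of the constant-hazard reasoning above (illustrative numbers only).

def p_first(k, p):
    """P(the event first happens in period k), given a constant per-period hazard p."""
    return (1 - p) ** (k - 1) * p

# Rain example, p = .03 per day:
print(p_first(1, 0.03))  # 0.03    -> today is the single most likely "next rainy day"
print(p_first(2, 0.03))  # 0.0291  -> .97 * .03: no rain today, then rain tomorrow

# Century example, with a hypothetical per-year hazard of 0.1%:
p = 0.001
this_century = sum(p_first(k, p) for k in range(1, 101))     # ~0.095
next_century = sum(p_first(k, p) for k in range(101, 201))   # ~0.086
next_900 = sum(p_first(k, p) for k in range(101, 1001))      # ~0.54
print(this_century, next_century, next_900)
# The mode is always the earliest period, but a long later stretch can still
# carry more total probability than the first century.
```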
2) The strongest version of HoH is (A): “Now is THE most important time ever”, which is so unlikely that it looks like a straw man. But (B): “Now is more important than the median / average time” is very tempting. First, the prior is high: you need evidence 99 times weaker to establish (B), whose prior odds are 1:1, than to establish (C): “Now is in the top percentile of the importance distribution”, whose prior odds are 1:99. Second, it fits the historical record better: most of the last 200 kyr look boring in comparison with now (of course, I agree there are some huge biases affecting this assessment). Also, the HoH defender may limit the considered time-span: “the next decade will be the most important in the century / in the next 100 years”.
3) In (1) and (2), I assumed HoH refers only to the future, but some of the arguments against HoH refer to any time, even the past. What’s the relevance (and meaning) of comparing the influence of now to important times in the past, besides assessing the odds that more hingey times exist in the future?
Influence is asymmetric: the past influences both the present and the future. Also, it seems plausible that hingeness is not a “timeless” or absolute property: three different rational individuals, X, Y and Z, each located at a different time Tx, Ty and Tz, would make different impartial assessments of the set (Tx, Ty, Tz), mostly because of uncertainty, the path-dependency of their actions, or value differences. And since “hingeness” is an ordering, not a cardinal relation, it might be hard (if not impossible) to aggregate X’s, Y’s and Z’s assessments.
I agree with your reasoning concerning uncertainty.
In the arguments against HoH, there’s an appeal to the uncertainty of our evaluations of “influence”. However, the definition of the most influential time depends on an evaluation of the opportunity costs of investing in one time versus another (such as the short term versus the long term).
Uncertainty is a double-edged sword: I get confused when someone argues for “give later” mostly on the grounds of our current uncertainty about impact (actually, uncertainty often induces risk-aversion and presentist bias). Suppose I currently have a credence of 0.7 in the statement “AMF saves at least one life (30 QALY) for every US$3,000”; if I wait ten years, I can hope my confidence in such statements will increase to something like 0.8. However, my confidence that this increase will happen is only 0.9, so when I aggregate all of this uncertainty, it’s almost a draw: 0.8 × 0.9 = 0.72.
(Sorry about using point estimates, but I’m no statistician, and I guess we’d better keep it simple.)
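To spell out that aggregation (same illustrative point estimates as above, just mirroring the multiplication):

```python
# Point-estimate version of the "give now vs. give later" comparison above.
credence_now = 0.7           # current credence that AMF saves ~30 QALY per US$3,000
credence_if_updated = 0.8    # hoped-for credence after ten more years of evidence
p_update_happens = 0.9       # my confidence that this increase actually materializes

aggregated_later = credence_if_updated * p_update_happens
print(credence_now, aggregated_later)  # 0.7 vs 0.72 -> almost a draw
```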
Something similar applies to “start a movement”, and I didn’t even mention cluelessness and value shift.
So, if I donate to a Fund that promises to invest in the best actions for the long-term future, instead of the short term, I have to trust that: a) the world is not going to end first (so I have to discount extinction rates); b) the Fund and the underlying financial structure will not end first (or significantly lose value); c) the Fund will correctly identify a more influential moment; and d) its investment will be aligned with my impartial preferences (i.e., with what I would decide if I had the same info).
I’m no expert in the field, but this problem really bothers me, too—so perhaps you should read my remarks as additional questions.
So the first part of my question is:
“Anthropic shadow” is an observation bias / selection effect concerning the data-generating process. I don’t see such a bias in your red/blue example, where (correct me if I’m wrong) you have perfect info on Q, N and the final state of the marker. For this to be analogous to anthropic bias regarding x-risks, you would have to add a new feature, like someone erasing your memory and records with probability P* whenever Coin#1 lands heads.
(My “personal” toy model of anthropic shadow problems is someone trying to estimate the probability of heads for the next coin toss, after a sequence TTTT…, knowing that, whenever the coin lands heads, the memory of previous tosses is erased. It’s tempting to just apply Laplace’s Rule of Succession here, but that would mean that knowing about the amnesia mechanism gives you no information.
I don’t think that’s an exact representation of our anthropic bias over x-risks, but it does highlight a problem that is easy to underestimate.)
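If it helps, here’s one way I’d make that toy model concrete. Everything below (the uniform prior, the likelihood of the surviving record) is my own assumption, added just to show the direction of the bias:

```python
from fractions import Fraction

# Toy formalization of the "amnesia coin": many tosses have happened; whenever heads
# comes up, the record is erased, so all you ever see is the streak of k tails since
# the last erasure.

def naive_laplace(k):
    """Laplace's Rule applied blindly to the record: k tails, 0 heads observed."""
    return Fraction(1, k + 2)

def erasure_aware(k):
    """Posterior mean of P(heads) under a uniform prior, conditioning on the erasure
    mechanism: remembering exactly k tails has likelihood ~ p * (1 - p)**k (the last
    heads happened k + 1 tosses ago), which yields a Beta(2, k + 1) posterior."""
    return Fraction(2, k + 3)

for k in (0, 2, 4, 10):
    print(k, float(naive_laplace(k)), float(erasure_aware(k)))
# The naive estimate is always lower: ignoring the erasure mechanism treats
# "no heads in the record" as evidence the record could never have supplied.
```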
And the second part is: How can the anthropic shadow argument be phrased in a fully bayesian way?
I guess that’s the jackpot, right? I don’t know. But one of the best attacks on this problem I’ve seen so far is the Snyder-Beattie, Ord & Bonsall paper in Nature.
Thank you, and congrats for writing this.
Avoid slang, abbreviations, unusual collocations. Speak clearly and slowly.
This has to be constantly pointed out. I think a good antidote is to learn a second language and talk to its native speakers (besides, reasoning in a foreign language may reduce some biases).
BTW, I’m not sure if it’s just me, but one of the things that sometimes prevents me from engaging in a conversation with another person in her native language (not only English) is that, if I am too successful (e.g., if I mimic her accent or style), she often assumes I’m almost as proficient as she is and ends up speaking twice as fast, with slang only a professional rapper would know. So: even if a non-native speaker doesn’t seem to have an accent (i.e., she speaks with your accent), don’t assume you can drop the “avoid slang...” advice.
Maybe one could argue in favor of an article in the Stanford Encyclopedia of Philosophy or in the IEP, too.
Thanks for this post. I added some of these books to my reading list. Have you considered literary novels or essays?
There’s a post on books people would recommend to a gifted teenager, with some tips on this.
Along these lines, I’d recommend “The Mind’s I”, a collection Dennett edited in collaboration with Hofstadter.
Terry Pratchett, particularly The Amazing Maurice…
DFW, Infinite Jest;
J. S. Foer, Eating Animals;
Jonathan Franzen, Freedom;
Cixin Liu, Remembrance of Earth’s Past.
My point is that by “gifted teenager” you probably mean someone intellectually gifted, but not necessarily morally aligned; moreover, teenagers (everyone, actually, but teens more than anyone else) may rebel and resist if it’s too obvious that you’re trying to lead them to a specific mindset. So, if that might be the case, perhaps you should consider first what kind of literature would nudge this teenager towards EA-thinking, and only then what kind of books could shape their thought.
I agree with most of the text, though with the same epistemic status. Nevertheless, I fear sovereign funds and government investment might affect free trade and create an incentive for big companies to corrupt political power, competing for its support, and for corrupt politicians to use this power for their own benefit. This may seem easy to avoid through good institutional design, but since we cannot even prevent regulatory capture or tax avoidance...
Me too. Perhaps we should create a mutual support group ourselves? The “mid-career You can Save”?
However, I’m not so sure about what you guys mean by “harder” in this context. Yes, it might be easier to spot some really promising 22-year-old Ivy League graduates and advise them, and, since they have so many options left, general advice might be good enough. But it doesn’t seem so hard to nudge some mid-career professionals towards optimal options, precisely because there are fewer alternatives. And wouldn’t it be more scalable? E.g., which is more likely: that we can advise the right young graduate to get a job in the government, or that we could talk to many potential mid-career candidates and convert at least one of them to EA goals?
True, but people are already competing to invest in THC providers. Why wouldn’t they do it for psychedelics?
Agreed. I kind of regret mentioning QALYs in my argument, but do notice that I was trying to be healthily skeptical when I wrote “I still don’t think that donating to this cause would result, at the margin, in more QALYs than donating to GD, in general”. I never said I was confident that GD would result in more QALYs than supporting psychedelics.
First, I’m not referring to GD as our best charity, but just as a minimal standard for EA causes.
Second, last time I checked (please correct me if I’m wrong):
GW estimated GD was saving 1 life per US$7,000 as of Nov 2016: https://docs.google.com/spreadsheets/d/1KiWfiAGX_QZhRbC9xkzf3I8IqsXC5kkr-nwY_feVlcM/edit#gid=1034883018
GW considered 1 life = 35 QALY. So I estimate GD achieves roughly US$200/QALY.
(Actually, there are huge uncertainties in this estimate, and GW is not conclusive about GD’s effectiveness in terms of lives and QALYs. But one could pick AMF or SCI as the standard instead.)
I’m assuming DALY weight = 1 − QALY weight, i.e., that a DALY averted and a QALY gained are roughly interchangeable here.
Enthea’s estimate for psychedelics liberalization is $472/DALY.
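Putting those point estimates side by side (GD = GiveDirectly, GW = GiveWell; same numbers as above):

```python
# Back-of-the-envelope comparison with the point estimates above.
gd_cost_per_life = 7_000   # US$, GW's Nov 2016 figure for GD
qaly_per_life = 35         # GW's assumption
gd_cost_per_qaly = gd_cost_per_life / qaly_per_life
print(gd_cost_per_qaly)    # 200.0 US$/QALY

enthea_cost_per_daly = 472  # Enthea's estimate for psychedelics liberalization
# Treating a DALY averted and a QALY gained as comparable (the assumption above),
# GD looks roughly 2.4x more cost-effective on these numbers.
print(enthea_cost_per_daly / gd_cost_per_qaly)  # ~2.36
```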
I do agree that the QALY metric is biased towards some interventions, and that mental health is usually underestimated by healthy people (I suspect they are unduly swayed by the lack of visible physical symptoms). I do think we should find out how to treat depression properly (maybe some neglected, cheap and scalable solution ends up becoming an EA-like charity).
However, I don’t believe Enthea’s poll is free of biases either; in particular, it seems to me that people in developed countries consistently underestimate the burden of disease and poverty in the Third World, skewing the comparison in the opposite direction.
Notwithstanding, my main point is not so much about impact as about neglectedness: 32 million people had experimented with psychedelics in the US alone by 2010 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3917651/). If each of them donated an average of US$1 to this cause, they would match all of GD’s transfers in 2017. I do believe we should liberalize psychedelics, and probably we will, eventually, since many people with considerable purchasing power are interested in it.
You have a good point: if a big pharma company can’t hold IP over a psychedelic product, at least under our current system, it has no incentive to invest in risky R&D. However, we do observe increasing private funding for psychedelic research and a lot of recent exposure, and the war on drugs explains enough of the halt in psychedelics research in the 70s. So, despite updating my priors, I still don’t think that donating to this cause would result, at the margin, in more QALYs than donating to GD, in general.
Epistemic status: >50%
(I hope SSC is wrong and Griffe is right, and I’d like to see more research, too, but I think it’s way more likely that psychedelics will end up being provided by big companies than by startups or non-profits.)
I feel tempted to invoke epistemic (and financial) modesty: depression (and mental health generally) is not a very neglected condition that only affects a small or poor population; there’s a lot of money to be made in this area through pharmaceutical research, and I see no coordination problem or similar obstacle. If big companies such as Bayer or Pfizer (more capable of providing adequate funding, research and lobbying) are not willing to bet on it, why should we?
P.S.: I didn’t read every other comment, but I searched a bit and concluded that only GnomeGnostic mentioned big pharma. His argument is sound.
I wonder if the results of this salience manipulation can be explained as some kind of framing effect of loss-gain asymmetry.
I think you should warn your readers, in the first or second paragraph, that your intent is not so straightforward; otherwise, don’t assume everyone will read it to the end.
This is not a solution/answer, but someone should design a clever way for us to keep constantly searching for cause X. I think a general contest could help, such as an “Effective Thesis Prize” to reward good work aligned with EA goals; perhaps cause X could be the aim of a contest of its own.