Can you (or someone) write a TLDR of why “helping others” would turn off “progressives”?
Kudos for the initiative! I think it makes sense to crosspost this to LessWrong.
I cried a lot, especially in the ending. Also really liked the concept of the witch doing all this for the sake of other/future people. And, wow, this part:
“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”
Bravo!
Yes, I did notice you’re subverting the trope here, it was very well done :)
Points where I agree with the paper:
Utilitarianism is not any sort of objective truth, in many cases it is not even a good idea in practice (but in other cases it is).
The long-term future, while important, should not completely dominate decision making.
Slowing down progress is a valid approach to mitigating X-risk, at least in theory.
Points where I disagree with the paper:
The paper argues that “for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent”. I think it is completely clear, given that in pre-industrial times most people lived in societies that were rather unfree and unequal (it is harder to say about “virtue”, since different people would argue for very different conceptions of what virtue is). Moreover, although intellectuals argued for all sorts of positions (words are cheap, after all), few people are trying to return to pre-industrial life in practice. Finally, techno-utopian visions of the future are usually very pro-freedom and are entirely consistent with groups of people voluntarily choosing to live in primitivist communes or whatever.
If ideas are promoted by an “elitist” minority, that doesn’t automatically imply anything bad. Other commenters have justly pointed out that many ideas that are widely accepted today (e.g. gender equality, religious freedom, expanding suffrage) were initially promoted by elitist minorities. In practice, X-risk is dominated by a minority since they are the people who care most about X-risk. Nobody is silencing the voices of other people (maybe the authors would disagree, given their diatribe in this post, but I am skeptical).
“Democratization” is not always a good approach. Democratic decision processes are often dominated by tribal virtue-signaling (simulacrum levels 3/4), because from the perspective of every individual participant, using their voice for signaling is much more impactful than using it for affecting the outcome (a sort of tragedy of the commons). I find that democracy is good for situations that are zero-sum-ish (dividing a pie), where abuse of power is a major concern, whereas for situations that are cooperative-ish (i.e. everyone’s interests are aligned), it is much better to use meritocracy. That is, set up institutions that give more stage to good thinkers rather than giving an equal voice to everyone. X-risk seems much closer to the latter than to the former.
If some risk is more speculative, that doesn’t mean we should necessarily allocate fewer resources to it. “Speculativeness” is a property of the map, not the territory. A speculative risk can kill you just as well as a non-speculative risk. The allocation of resources should be driven by object-level discussion, not by a meta-level appeal to “speculativeness” or “consensus”.
Because, unfortunately, we do not have consensus among experts about AI risk, talking about moratoria on AI seems impractical. With time, we might be able to build such a consensus and then go for a moratorium, although it is also possible we don’t have enough time for this.
This is a relatively minor point, but there is some tension between the authors’ call to stop the development of dangerous technology and their strong rejection of government surveillance. Clearly, imposing a moratorium on research requires some infringement on personal freedoms. I understand the authors’ argument as something like: early moratoria are better since they require less drastic measures. This is probably true, but the tension should be acknowledged more.
Kudos for this post. One quibble I have: in the beginning you write
Potential help includes:
Money
Good mental health support
Friends or helpers, for when things are tough
Insurance (broader than health insurance)
But later you focus almost exclusively on money. [Rest of the comment was edited out.]
What is this “Effective Crypto”? (Google gave me nothing)
Quantified uncertainty might be fairly important for alignment, since there is a class of approaches that rely on confidence thresholds to avoid catastrophic errors (1, 2, 3). What might also be important is the ability to explicitly control your prior in order to encode assumptions such as those needed for value learning (but maybe there are ways to do it with other methods).
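To illustrate the confidence-threshold idea, here is a minimal toy sketch of my own (not taken from those papers; names like `Hypothesis` and `catastrophe_prob` are hypothetical): the agent acts only when every hypothesis retaining non-negligible posterior mass agrees the action is safe, and otherwise defers.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Hypothesis:
    """One candidate model of the environment."""
    name: str
    catastrophe_prob: Callable[[str], float]  # action -> P(catastrophe | hypothesis)

def choose_action(actions: List[str],
                  posterior: Dict[Hypothesis, float],
                  risk_threshold: float = 1e-3,
                  mass_cutoff: float = 0.01) -> str:
    """Return an action deemed safe under every plausible hypothesis; else defer."""
    # Keep only hypotheses with non-negligible posterior mass.
    plausible = [h for h, mass in posterior.items() if mass >= mass_cutoff]
    for action in actions:
        # Worst-case catastrophe probability over the plausible hypotheses.
        worst_case_risk = max(h.catastrophe_prob(action) for h in plausible)
        if worst_case_risk < risk_threshold:
            return action
    return "defer-to-human"  # no action clears the confidence threshold
```

The point of the sketch is that the threshold turns quantified uncertainty into a hard safety constraint: without calibrated probabilities there is nothing for `risk_threshold` to bind on.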
We plan to run 3 EA Global conferences in 2021
I’m guessing this is a typo and you meant 2022?
IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to make mathematical sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you would also need to account for hyperbolic time discounting.
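A toy numerical sketch of that decision rule, under assumptions I am making up for illustration (a finite horizon, a finite grid of timescales standing in for “to infinity”, and fabricated utility streams):

```python
import numpy as np

HORIZON = 500   # periods we simulate
LIFETIME = 80   # shortest discount timescale, ~a natural human lifetime
# Finite grid standing in for "all timescales from ~a lifetime to infinity".
timescales = np.geomspace(LIFETIME, 1e6, num=50)

def discounted_value(stream: np.ndarray, T: float) -> float:
    """Exponentially discounted utility of a stream, with discount timescale T."""
    t = np.arange(len(stream))
    return float(np.sum(np.exp(-t / T) * stream))

def minimax_regret_policy(policies: dict) -> str:
    """Choose the policy whose worst-case regret across timescales is smallest."""
    best_at = {T: max(discounted_value(s, T) for s in policies.values())
               for T in timescales}
    def worst_regret(stream):
        return max(best_at[T] - discounted_value(stream, T) for T in timescales)
    return min(policies, key=lambda name: worst_regret(policies[name]))

# Made-up example: one policy front-loads utility, the other back-loads it.
policies = {
    "benefit-now":   np.concatenate([np.ones(50), np.zeros(HORIZON - 50)]),
    "benefit-later": np.concatenate([np.zeros(HORIZON - 50), 2 * np.ones(50)]),
}
print(minimax_regret_policy(policies))
```

The design choice doing the work is the lower bound on the timescale grid: regret against arbitrarily short timescales would force extreme short-termism, while the open upper end keeps the rule from ignoring the far future.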
The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence but then it’s pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
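To make the convergence point concrete (my gloss, using a deliberately simple model): if there is a known constant per-period extinction probability $p$ and per-period utility $u_t$ conditional on survival, then

$$\mathbb{E}\Big[\sum_{t=0}^{\infty} u_t \cdot \mathbb{1}[\text{alive at } t]\Big] = \sum_{t=0}^{\infty} (1-p)^t u_t,$$

so survival acts exactly like an exponential discount factor $1-p$, and the sum converges for bounded $u_t$. But if the prior over $p$ puts positive mass on $p=0$ (or enough mass near it), then $\mathbb{E}[(1-p)^t]$ stays bounded away from zero and the sum diverges for any $u_t \geq c > 0$; a dogmatic lower bound on $p$ avoids this, but then it is doing essentially the same work as intrinsic time discount.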
I dunno if I count as “EA”, but I think that a social planner should have nonzero pure time preference, yes.
Because, ceteris paribus, I care more about things that happen sooner than about things that happen later. And, like I said, not having pure time preference seems incoherent.
As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular, when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?).
This is separate from the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations.
I am a moral anti-realist. I don’t believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as “ethics”. Hypothetically, if I have time preference and other people don’t, then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.
So if, hypothetically, we were alive around King Tut’s time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today, we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me).
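For concreteness, a quick check of the arithmetic (assuming roughly 3,300 years between King Tut’s time and today): with a 1% annual rate of pure time preference, a present-day torture gets weight

$$0.99^{3300} = e^{3300 \ln 0.99} \approx e^{-33} \approx 4 \times 10^{-15}$$

from that vantage point, so all 7 billion of them together weigh about $7 \times 10^9 \times 4 \times 10^{-15} \approx 3 \times 10^{-5}$ of a single contemporaneous torture.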
You can avoid this kind of conclusion if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.
Thank you for this comment!
Knowledge about AI alignment is beneficial but not strictly necessary. Casting a wider net is something I plan to do in the future, but not right now. Among other reasons, because I don’t understand the academic job ecosystem and don’t want to spend a huge effort studying it in the near-term.
However, if it’s as easy as posting the job on mathjobs.org, maybe I should do it. How popular is that website among applicants, as far as you know? Is there something similar for computer scientists? Is there any way to post a job without specifying a geographic location s.t. applicants from different places would be likely to find it?
Thank you Frank, that’s very useful to know!
I am skeptical of using answers to questions such as “how satisfied are you with your life?” as a measure of human preferences. I suspect that the meaning of the answer might differ substantially between people in different cultures and/or be normalized w.r.t. some complicated implicit baseline, such as what a person thinks they should “expect” or “deserve”. I would be more optimistic about measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed, or what people think of their past choices in hindsight (or at least what they say they would choose in hypothetical situations, but this is less reliable).
[I’m assuming that something like preference utilitarianism is a reasonable model of our goal here, I do realize some people might disagree but didn’t want to dive into those weeds just yet.]
(I only skimmed the article, so my apologies if this was addressed somewhere and I missed it.)
Suppose I’m the intended recipient of a philanthropic intervention by an organization called MaxGood. They are considering two possible interventions: A and B. If MaxGood choose according to “decision utility” then the result is equivalent to letting me choose, assuming that I am well-informed about the consequences. In particular, if it was in my power to decide according to what measure they choose their intervention, I would definitely choose decision-utility. Indeed, making MaxGood choose according to decision-utility is guaranteed to be the best choice according to decision-utility, assuming MaxGood are at least as well informed about things as I am, and by definition I’m making my choices according to decision-utility.
On the other hand, letting MaxGood choose according to my answer on a poll is… Well, if I knew how the poll would be used when answering it, I could use it to achieve the same effect. But in practice, this is not the context in which people answer those polls (even if they know the poll is used for philanthropy, this philanthropy usually doesn’t target them personally, and even if it did, individual answers would have tiny influence[1]). Therefore, the result might be what I actually want, or it might be e.g. choosing an intervention which will influence society in a direction that makes putting higher numbers culturally expected, or will lower the baseline expectations w.r.t. which I’m implicitly calculating this number[2].
Another issue with polls is: how do we know the answer is utility rather than some monotonic function of utility? The difference is important if we need to compute expectations, since a monotonic transformation preserves ordinal comparisons but can change the ranking of risky options. But this is the least of the problems IMO.
Now, in reality it is not in the recipient’s power to decide on that measure. Hence MaxGood are free to decide in some other way. But, if your philanthropy is explicitly going against what the recipient would choose for themself[3], well… From my perspective (as Vanessa this time), this is not even altruism anymore. This is imposing your own preferences on other people[4].
[1] A similar situation arises in voting, and I indeed believe this causes people to vote in ways other than optimizing the governance of the country (specifically, to vote according to tribal signalling considerations instead).

[2] Although in practice, many interventions have limited predictable influence on these kinds of factors, which might mean that poll-based measures are usually fine. It might still be difficult to see the signal through the noise in this measure. And we need to be vigilant about interventions that don’t fall into this class.

[3] It is ofc absolutely fine if e.g. MaxGood are using a poll-based measure because they believe, with rational justification, that in practice this is the best way to maximize the recipient’s decision-utility.

[4] I’m ignoring animals in this entire analysis, but this doesn’t matter much since the poll methodology is inapplicable to animals anyway.

[5] I don’t know much about supplements/bednets, but AFAIU there are some economy-of-scale issues which make it easier for e.g. AMF to supply bednets compared with individuals buying bednets for themselves.
As to how to predict “decision utility when well informed”, one method I can think of is to look at people who have been selected for being well-informed while similar to target recipients in other respects.
But, I don’t at all claim that I know how to do it right, or even that life satisfaction polls are useless. I’m just saying that I would feel better about research grounded in (what I see as) more solid starting assumptions, which might lead to using life satisfaction polls or to something else entirely (or a combination of both).
I am deeply touched and honored by this endorsement. I wish to thank the LTFF and all the donors who support the LTFF from the bottom of my heart, and promise you that I will do my utmost to justify your trust.