I think the main comparative advantage (= irreplaceability) of the typical EA comes not from superior technical skill but from the motivation to improve the world (rather than make money, advance one’s career, or feel happy). This means researching questions which are ethically important but not grant-sexy, donating to charities which are high-impact but don’t yield a lot of warm-fuzzies, promoting policies which go against tribal canon, etc.
“Well the original strands of thought mostly came from early 19th century utopian socialists and were updated by Marx and Engels. There has been a lot of post-Marxian analysis as well.”
AFAICT, the strands of thought you are talking about are poorly correlated with reality. Marxist thought sits largely outside mainstream economics. Its proponents used neither studies nor mathematical models (at least not in the 19th century). To the extent it is based on any evidence at all, that evidence is a highly subjective interpretation of history. Finally, Marxist revolutions caused suffering and death on a massive scale.
I suspect that Marxism is popular with intellectual elites for purely political reasons that have little to do with its objective intellectual merit. The same sort of elites supported Stalin and Mao in their time. To me it seems like a massive failure to update.
Of the examples you give here, I think #1 is the best by far.
Regarding #2, I think that world government is a great idea (assuming it’s a liberal, democratic world government!) but it’s highly unobvious how to get there. In particular, I am very skeptical about giving more power to the UN. The UN is a fundamentally undemocratic institution, both because each country has 1 vote regardless of size and because (more importantly) many countries are represented by undemocratic governments. I am not at all convinced that removing the Security Council veto power would have positive consequences. IMHO the first step towards world government or any similar goal would be funding a research programme that will create a plan that is evidence-based, nonpartisan, and incremental/reversible.
Regarding #3, I am really not sure who these theorists are or why we should believe them.
Another potentially relevant cause area (although I’m not sure whether this is “systemic change” as you understand it) is reforming the education system: setting more well-defined goals, using evidence-based methods, improving incentive mechanisms, and educating for rationality.
Your reply seems to be based on the premise that EA is some sort of deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different: EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).
The deviation from imaginary “perfect altruism” is due either to having values other than improving the world or to the practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent that outweighs the gain. Again, moral offsets don’t reflect the relevant considerations.
I think that most people here will tell you that we already know specific examples of such wrongdoing, e.g. factory farming.
Nice review! Two comments so far:
Re Critch’s paper, the result is actually very intuitive once you understand the underlying mechanism. Critch considers a situation of, so to speak, Aumannian disagreement. That is, two agents hold different beliefs, despite being aware of each other’s beliefs, because some assumption of Aumann’s theorem is false: e.g. each agent considers emself smarter than the other. For example, imagine that Alice believes the Alpha Centauri system has more than 10 planets (call it “proposition P”), Bob believes it has at most 10 planets (“proposition not-P”), and each is aware of the other’s belief and considers it to be foolish. In this case, an AI that benefits Alice if P is true and benefits Bob if not-P is true would seem like an excellent deal for both of them, because each will be sure the AI is in eir own favor. In a way, the AI constitutes a bet between the two agents.
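To make the mechanism concrete, here is a toy calculation (a minimal sketch; the 0.9/0.1 credences and the unit payoffs are my own made-up numbers, not from Critch’s paper):

# Toy model of the "AI as a bet" mechanism (all numbers are made up).
# P = "the Alpha Centauri system has more than 10 planets".
alice_prob_P = 0.9  # Alice is nearly certain that P is true
bob_prob_P = 0.1    # Bob is nearly certain that P is false

# The negotiated AI: it delivers 1 util to Alice if P turns out true,
# and 1 util to Bob if P turns out false.
alice_expected_utility = alice_prob_P * 1.0      # 0.9, by Alice's beliefs
bob_expected_utility = (1.0 - bob_prob_P) * 1.0  # 0.9, by Bob's beliefs

# Compare with a belief-independent 50/50 compromise AI, which gives
# each agent only 0.5 in expectation no matter what they believe.
compromise = 0.5
print(alice_expected_utility > compromise)  # True: Alice prefers the bet
print(bob_expected_utility > compromise)    # True: so does Bob

Under each agent’s own beliefs the bet looks better than any belief-independent 50/50 compromise, which is why both of them subjectively prefer it.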
Critch writes: “It is also assumed that the players have common knowledge of one another’s posterior… Future work should design solutions for facilitating the process of attaining common knowledge, or to obviate the need to assume it.” Indeed, it is interesting to study what happens when the agents do not know each other’s beliefs.
I will risk being accused of self-advertisement, but given that one of my papers appeared in the review it doesn’t seem too arrogant to point at another which IMHO is no less important, namely “Forecasting using incomplete models”, a paper that builds on Logical Induction in order to develop a way of reasoning about complex environments that doesn’t require logic/deduction. I think it would be nice if this paper were included, although of course it’s your review and your judgment whether it merits it.
So you claim that you have values related to animals that most people don’t have and you want your eccentric values to be overrepresented in the AI?
I’m asking unironically (personally I also care about wild animal suffering, but I suspect that most people would care about it too if they spent sufficient time thinking about it and looking at the evidence).
Who said we will preserve wild nature in its present form? We will re-engineer it to eliminate animal suffering while enhancing positive animal experience and wild nature’s aesthetic appeal.
I don’t think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don’t reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.
I think downvoting as disagreement is terrible.
First, promoting content based on majority agreement is a great way to build an echo chamber. We should promote content which is high-quality (well written, well argued, thought-provoking, contains novel insights, provides a balanced perspective, etc.). Hearing repetitions of what you already believe just amplifies your confirmation bias. I want to learn something new.
Second, downvoting creates a strong negative incentive against posting. Silencing people you disagree with is also a great way to build an echo chamber.
Third, downvoting based on disagreement creates a battle atmosphere. Instead of a platform for rational, well-meaning debate, we risk turning into a scuffle between factions with different ideologies.
All in all I think the rules for downvoting posts should be slightly more lax than for downvoting comments. Downvoting a low-quality post is acceptable (but be very cautious before deciding something you disagree with is “low-quality”). Downvoting a comment is only acceptable when the comment is not in good faith (spam, trolling, flaming etc.). I think this is essential to maintain a healthy amicable atmosphere.
Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.
Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics. It seems to assume the existence of universal morals, which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories.

The only sensible meta-ethics I know is equating ethics with preferences. It seems that there is such a thing as an intelligent agent with preferences (although we have no satisfactory mathematical definition yet). Of course, each agent has its own preferences, and the space of possible preferences is quite big (orthogonality thesis). Hence ethical subjectivism. Human preferences don’t seem to differ much from human to human, once you take into account that much of the difference in instrumental goals is explained by different beliefs rather than different terminal goals (= preferences). Therefore it makes sense, in certain situations, to use approximate models of ethics that don’t explicitly mention the reference human, like utilitarianism.

On the other hand, there is no reason the precise ethics should have a simple description (complexity of value). It is a philosophical error to think ethics should be low-complexity like physical law, since ethics (= preferences) is a property of the agent and has quite a bit of complexity put in by evolution. In other words, ethics is in the same category as the shape of Africa rather than Einstein’s equations. Taking simplified models which take only one value into account (e.g. pleasure) to the extreme is bound to lead to abhorrent conclusions, as all other values are sacrificed.
I completely agree that many conceivable post-human futures have low value. See also the “unhumanity” scenario in my analysis. I think the term “existential risk” might be somewhat misleading, since what we’re really aiming at is the “existence of beings and experiences that we value” rather than just the existence of “something.” That is, I view your reasoning not as an argument for caring less about existential risk but as an argument for working towards a valuable far future.
Regarding MIRI, I think their position is completely adequate, since once we create a singleton which endorses our values, it will guard us from all sorts of bad futures, not only from extinction.
Regarding “consciousness as similarity”, I think it’s a useful heuristic but not necessarily universally applicable. I consider certain futures in which I gradually evolve into something much more complex than my current self as positive, but one must be very careful about which trajectories to endorse. Building an FAI will save us from making irreversible mistakes, but if for some reason constructing a singleton turns out to be intractable, we will have to think of other solutions.
Well, the problem with optimizing for a specific target audience is the risk of putting off other audiences. I would say something like:
Being born with advantages isn’t something to feel guilty about. Being born with advantages is something to be glad about: it gives you that much more power to improve life for everyone.
Probably the most important “good things that can happen” after FAI are:
Whole brain emulation. It would allow eliminating death, pain, and physical violence, not to mention ending discrimination and social stratification on the basis of appearance (although serious investment in cybersecurity would be required).
Full automation of the labor required to maintain a comfortable standard of living for everyone. Avoiding a Malthusian catastrophe would still require a reasonable reproduction culture (especially given immortality due to e.g. WBE).
“...asking for an empirical meta study of complex social ideologies is not the right way to approach things.”
What is the right way to approach things, then? In order to claim that certain policies will have certain consequences, you need some kind of model. In order to know that a model is useful, you need to test it against empirical data. The broader, more unusual, and more complex the policy changes you propose, the more stringent a standard of evidence you need to meet.
“I have seen several empirical analyses by economists showing positive economic and welfare data from Soviet countries.”
My family lived in the Soviet Union for its entire history. I assure you that it was a hellhole.
“Many types of socialism and communism have not been implemented. For instance, Marxism advocates a classless and moneyless society. The USSR was not classless and was not moneyless.”
The Khmer Rouge abolished money. Abolishing class is much harder, since class can exist without formal acknowledgement in the legal system. The real question, though, is why we should think these changes are possible or desirable.
“I don’t see how any of this takes away from the point it started from, namely that capitalism as an economic system has its own record of brutality as well as communism.”
But the two are not on equal footing. People in modern Western-style democracies (which are all capitalist) enjoy personal freedom and a quality of life unrivaled in the entire history of the human race. On the other hand, virtually all attempts to implement communism have led to disaster. So, although it is theoretically possible that some implementation of communism is superior, there is a very high burden of proof involved.
“I said ‘public ownership of the means of production’, and Marxism is just one of several frameworks for doing this.”
Well, Marxism was your justification for it.
“More importantly, I did not suggest that the EA community embrace it. I suggested that people look into it, see if it was desirable, etc. Doing so requires serious engagement with the relevant literature and discussing it with people who can answer your questions better. If I was trying to argue for socialism or communism, of course I would be speaking much differently and with much more extensive sources and evidence.”
In this case, I suggest formulating a much broader objective e.g. “alternative systems of government / economics”. This might be communism, might be anarcho-capitalism, might be something else altogether. IMO, the best strategy is moving one level of “meta” up. Instead of promoting a specific political ideology, let’s fund promising research into theoretical tools that enable evaluating policy proposals or government systems.
Thanks for replying!
“There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable.”
The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science; the assignment of moral significance is “not science” in the sense that it depends on the entity doing the assignment.
The problem with your breatharianism example is that the claim “you can eat nothing and stay alive” is objectively wrong, but the claim “dying is bad” is a moral judgement and therefore subjective. That is, the only sense in which “dying is bad” is a true claim is by interpreting it as “I prefer that people not die.”
On the preferences page there is a box for “EA Profile Link.” How does it work? That is, how do other users get from my username to the profile? I linked my LessWrong profile but it doesn’t seem to have any effect...
I have some evidence that there are many software engineers who would gladly volunteer to code for EA causes (and some access to such engineers). What volunteering opportunities like that are available? EA organizations that need coders? Open source projects that can be classified as EA causes? Anything else?
This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you have already decided something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly, if you have already decided hedonistic utilitarianism is correct, you can use science to find the best strategy for maximizing hedonistic utility.
I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. In other words, I think there is an objective function that takes a particular intelligent agent and produces a system of ethics, but it is not a constant function.
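To put the same claim in symbols (a rough formalization; the notation $\mathcal{A}$, $\mathcal{M}$ and $\mathrm{Ethics}$ is my own shorthand): writing $\mathcal{A}$ for the space of intelligent agents and $\mathcal{M}$ for the space of ethical systems, the claim is that there is a well-defined map

$$\mathrm{Ethics}\colon \mathcal{A} \to \mathcal{M}, \qquad \text{with } \mathrm{Ethics}(a) \neq \mathrm{Ethics}(b) \text{ for some } a, b \in \mathcal{A}.$$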
Assessing the quality of conscious experiences using neuroscience might be a good tool for informing moral judgement, but again it is only useful in light of assumptions about ethics that come from elsewhere. On the other hand, neuroscience might be useful for computing the “ethics function” above.
The step from ethical subjectivism to the claim that it’s wrong to interfere with other cultures seems to me completely misguided, even backwards. If according to my ethics your culture is doing something bad, then it is completely rational for me to stop your culture from doing it (at the same time, it can be completely rational for you to resist). There is no universal value of “respecting other cultures” any more than any other value is universal. If my ethics happens to include the value of “respecting other cultures”, then I need to find the optimal trade-off between allowing the bad thing to continue and violating “respect”.