Yes, I kind of did see this coming (although not in the US) and I’ve been working on a forum post for like a year and now I will finish it.
Yeah, I wrote it in Google Docs and then couldn't figure out how to transfer the del and suffixes over to the forum.
I think this is correct, and that EA thinks about neglectedness wrongly. I've been meaning to formalise this for a while and will do that now.
If preference utilitarianism is correct, there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete (for instance, if they're expressed as a vector rather than a single number), since the standard representation theorems require preferences to be complete, transitive, and continuous. This generalises to other forms of consequentialism that don't have a utility function baked in.
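A standard textbook illustration of the continuity point (my example, not from the original comment): lexicographic preferences on $\mathbb{R}^2$ are complete and transitive but not continuous, and no utility function represents them. Define

$$(x_1, x_2) \succ (y_1, y_2) \iff x_1 > y_1 \;\text{or}\; (x_1 = y_1 \text{ and } x_2 > y_2).$$

If some $u$ represented $\succ$, then for each $x_1$ the interval $\big(u(x_1, 0),\, u(x_1, 1)\big)$ would be non-empty, and these intervals would be pairwise disjoint for different values of $x_1$; choosing a rational number from each would give uncountably many distinct rationals, a contradiction. Incompleteness rules out a representation even more directly: a real-valued $u$ always induces a complete ranking.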
A six-line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability.
(2) An AGI could be sufficiently intelligent that it is limited only by physics and computability, but humans can't be.
(3) An AGI will come into existence.
(4) If an AGI's goals aren't the same as humans' goals, human goals will only be met for instrumental reasons, while the AGI's goals will be met.
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI.
(6) It is more morally valuable for human goals to be met than for an AGI's goals to be met.
Taken together: an unaligned AGI's goals, not ours, will be the ones that get met in the long run, and ours are the ones that matter more. That is the risk.
Thank you, those both look like exactly what I’m looking for
But thank you for replying; in hindsight my reply seems a bit dismissive :)
Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.
Hi Alex, the link isn’t working
I'm worried about associating effective altruism and rationality closely in public. I think rationality is reasonably likely to make enemies. The existence of r/sneerclub is maybe the strongest evidence of this, but there's also the general dislike that lots of people have for Silicon Valley and for ideas with a very Silicon Valley feel to them. I'm unsure to what degree people hate Dominic Cummings because he's a rationality guy, but I think it's some evidence that rationality is good at making enemies. Similarly, the whole NY Times/Scott Alexander craziness makes me think there's the potential for lots of people to be really anti-rationality.
I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way empirical claims can be discriminatory is when they assert discriminatory conclusions with no evidence, and their proponents refuse to change their beliefs based on evidence. The second way they can be discriminatory is when they concern the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
I think using Bayesian regret misses a number of important things.
It's somewhat unclear whether it means utility in the sense of a function that represents a preference relation with real numbers, or utility in the axiological sense. If it's the former, then I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things like capital punishment.
If it means it in the axiological sense, then I don't think we have strong reason to believe that how people vote will be closely related to it, and I think we have reason to believe the two will differ systematically. This also leaves it vulnerable to some people getting terrible outcomes.
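For concreteness, here is a minimal sketch of Bayesian regret in the first sense, in the style of the standard voting simulations; the uniform random utilities and the plurality rule are my illustrative assumptions, not anything from the original discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_regret(n_elections=10_000, n_voters=1_000, n_candidates=5):
    """Average shortfall between the socially best candidate and the one
    the voting rule actually elects, over many simulated elections."""
    total = 0.0
    for _ in range(n_elections):
        # Illustrative assumption: each voter's utility for each candidate
        # is an independent uniform draw.
        utils = rng.random((n_voters, n_candidates))
        # Illustrative rule: plurality, i.e. everyone votes for their favourite.
        votes = utils.argmax(axis=1)
        winner = np.bincount(votes, minlength=n_candidates).argmax()
        totals = utils.sum(axis=0)
        total += totals.max() - totals[winner]
    return total / n_elections
```

The objections above are about what this setup hard-codes: the utility matrix is fixed exogenously, so the political process can't change anyone's preferences, and a stable preference for capital punishment counts the same as any other entry in the matrix.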
Lots of what I'm worried about with elected leaders are negative externalities. For instance, quite plausibly the main reasons Trump was bad were his obstruction of action on climate change and his rejection of democratic norms. The former mostly harms people in other countries and future generations, and the latter mostly harms future generations (and probably people in other countries more than Americans, although that's not obviously true).
It also doesn't account for the dynamic effects of parties changing their platforms. My claim is that the Overton window is real and important.
I think that having strong political parties which the electoral system protects is good for stopping these things in rich democracies, because I think the gatekeepers will systematically support the system that put them in power. I also think the set of policies the elite supports is better, in the axiological sense, than the set supported by the voting population. The catch here is that the US has weak political parties that are supported by the electoral system.
Yeah, I mean this is a pretty testable hypothesis and I'm tempted to actually test it. My guess is that the level of vote splitting the electoral system produces won't have an effect, and that whether or not voting is compulsory, the number of young people, the level of education, and the level of trust will explain most of the variation in rich democracies.
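A sketch of what that test could look like; every column name here is a hypothetical placeholder, and I'm guessing the variation in question is something like turnout across rich democracies:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset of rich democracies; all column names are placeholders.
df = pd.read_csv("rich_democracies.csv")

# Baseline: the non-electoral-system factors from the hypothesis.
base = smf.ols(
    "turnout ~ compulsory_voting + share_under_30"
    " + mean_years_education + social_trust",
    data=df,
).fit()

# Does adding the electoral system buy any extra explanatory power?
with_system = smf.ols(
    "turnout ~ compulsory_voting + share_under_30"
    " + mean_years_education + social_trust + C(electoral_system)",
    data=df,
).fit()

print(base.rsquared_adj, with_system.rsquared_adj)  # compare adjusted R^2
```

If the adjusted R^2 barely moves when the electoral-system term is added, that's the predicted result.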
Two books I recommend on structural causes of, and solutions to, global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with the neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.
Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.