Blog at The Good Blog https://thegoodblog.substack.com/
Nathan_Barnard
Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the 3rd largest country in the world by 2100 and appears to be experiencing what may be exponential growth in GDP per capita. I’m not claiming that Nigeria being a great power in 2100 is a likely outcome, but nor does it seem impossible. It isn’t clear to me that Nigeria has dramatically worse institutions than India, yet I expect India to be a great power by 2100. It seems like it’d be really valuable for someone to do some work on this, given how neglected it seems.
Yeah that sounds right, I don’t even know how many people are working on strategy based around India becoming a superpower, which seems completely plausible.
Maybe this isn’t something people on the forum do, but it is something I’ve heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I’ve heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren’t EAs, and that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.
Two books I recommend on structural causes of and solutions to global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and thinks about the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with the neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.
Yeah, I mean this is a pretty testable hypothesis and I’m tempted to actually test it. My guess is that the level of vote splitting an electoral system produces won’t have an effect, and that whether or not voting is compulsory, the number of young people, the level of education, and the level of trust will explain most of the variation in rich democracies.
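One way this could be tested is a cross-country regression across rich democracies. A minimal sketch with synthetic data, assuming the outcome of interest is voter turnout (the thread doesn’t specify it) and where every variable name and coefficient is a hypothetical placeholder, not a real estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical sample of rich democracies

# Hypothetical country-level predictors
compulsory = rng.integers(0, 2, n).astype(float)  # compulsory voting (0/1)
young_share = rng.normal(0.20, 0.03, n)           # share of population under 30
education = rng.normal(12.0, 2.0, n)              # mean years of schooling
trust = rng.normal(0.50, 0.10, n)                 # generalised trust index
vote_splitting = rng.normal(0.30, 0.10, n)        # degree of vote splitting

# Synthetic outcome generated so vote_splitting has zero true effect,
# which is exactly the hypothesis being tested
turnout = (0.40 + 0.15 * compulsory + 0.50 * young_share
           + 0.01 * education + 0.20 * trust + rng.normal(0, 0.02, n))

# OLS via least squares; column 5 is vote_splitting
X = np.column_stack([np.ones(n), compulsory, young_share,
                     education, trust, vote_splitting])
beta, *_ = np.linalg.lstsq(X, turnout, rcond=None)
print(beta)  # coefficient on vote_splitting should be near zero
```

With real data you would swap in actual turnout figures and country covariates; the point of the sketch is just the design: if the hypothesis is right, the vote-splitting coefficient should be indistinguishable from zero once the other predictors are included.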
I think using Bayesian regret misses a number of important things.
It’s somewhat unclear whether it means utility in the sense of a function that maps preference relations to real numbers, or utility in the axiological sense. If it’s the former, then I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things, like capital punishment.
If it means utility in the axiological sense, then I don’t think we have strong reason to believe that how people vote will be closely related to it, and I think we have reason to believe it will differ systematically. This also makes it vulnerable to some people having terrible outcomes.
Lots of what I’m worried about with elected leaders are negative externalities. For instance, quite plausibly the main reasons Trump was bad were his opposition to action on climate change and his rejection of democratic norms. The former harms mostly people in other countries and future generations, and the latter mostly future generations (and probably people in other countries more than Americans, although that’s not obviously true).
It also doesn’t account for the dynamic effects of parties changing their platforms. My claim is that the Overton window is real and important.
I think that having strong political parties which the electoral system protects is good for stopping these things in rich democracies, because I think the gatekeepers will systematically support the system that put them in power. I also think the set of policies the elite support is better in the axiological sense than those supported by the voting population. The catch here is that the US has weak political parties that are protected by the electoral system.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I’ve come to two conclusions. The first way I think empirical claims can be discriminatory is if they assert discriminatory things with no evidence, with people refusing to change their beliefs based on evidence. The other way I think they can be discriminatory is when talking about the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.
I’m worried about associating effective altruism and rationality closely in public. I think rationality is reasonably likely to make enemies. The existence of r/sneerclub is maybe the strongest evidence of this, but there’s also the general dislike lots of people have for Silicon Valley and ideas with a very Silicon Valley feel to them. I’m unsure to what degree people hate Dominic Cummings because he’s a rationality guy, but I think it’s some evidence that rationality is good at making enemies. Similarly, the whole NY Times-Scott Alexander craziness makes me think there’s the potential for lots of people to be really anti-rationality.
A 6-line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability
(2) An AGI could be sufficiently intelligent that it’s limited only by physics and computability, in a way humans can’t be
(3) An AGI will come into existence
(4) If the AGI’s goals aren’t the same as humans’, human goals will only be met for instrumental reasons and the AGI’s goals will be met
(5) Meeting human goals won’t be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI’s goals
If preference utilitarianism is correct there may be no utility function that accurately describes the true value of things. This will be the case if people’s preferences aren’t continuous or aren’t complete, for instance if they’re expressed as a vector. This generalises to other forms of consequentialism that don’t have a utility function baked in.
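The vector case is the classic one: lexicographic preferences on the plane are complete but not continuous, and Debreu’s standard argument shows no real-valued utility function can represent them. A short sketch of that argument:

```latex
Define lexicographic preferences on $\mathbb{R}^2$ by
$(x_1,x_2)\succ(y_1,y_2)$ iff $x_1>y_1$, or $x_1=y_1$ and $x_2>y_2$.
Suppose $u:\mathbb{R}^2\to\mathbb{R}$ represents $\succ$. For each
$a\in\mathbb{R}$ we have $(a,1)\succ(a,0)$, so $u(a,1)>u(a,0)$; choose a
rational $q(a)$ with $u(a,0)<q(a)<u(a,1)$. If $a>b$ then
$(a,0)\succ(b,1)$, so $q(a)>q(b)$. Hence $a\mapsto q(a)$ is an injection
from the uncountable set $\mathbb{R}$ into the countable set
$\mathbb{Q}$, a contradiction: no utility function represents these
preferences.
```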
I think this is correct and EA thinks about neglectedness wrong. I’ve been meaning to formalise this for a while and will do that now.
Yeah I wrote it in google docs and then couldn’t figure out how to transfer the del and suffixes to the forum.
Yes, I kind of did see this coming (although not in the US) and I’ve been working on a forum post for like a year and now I will finish it.
Thanks :)
I organise EA Warwick and we’ve had decent success so far with concepts workshops as an alternative to fellowships. They’re much less of a time commitment for people, and after a concepts workshop people seem to be basically bought into EA and want to get more heavily involved. We’ve only done 3 this term so far, though, so we definitely don’t know how this will turn out.
I definitely feel this as a student. I care a lot about my impact, and I know intellectually that being really good at being a student is the best thing I can do for long-term impact. Emotionally, though, I find it hard that the way I’m having my impact is so nebulous and also doesn’t take very much work to do well.
It seems like a strange claim both that the atrocities committed by Hitler, Stalin and Mao were substantially more likely because they had dark triad traits, and that when doing genetic selection we’re interested in removing the upper tail (in the article, the top 1%). To take this somewhat naively: if we think that the Holocaust and Mao’s and Stalin’s terror-famines wouldn’t have happened unless all three leaders exhibited dark triad traits in the top 1%, this implies we’re living in a world that comes about with probability 1/10^6, i.e. 1 in a million, assuming the atrocities were independent events. This implies a need to come up with a better model.
Edit 2: this is also wrong. Assuming independence, the number of atrocities should be binomially distributed with p=1/100 and n = the number of leaders of authoritarian regimes with sufficiently high state capacity, or something like that. It should probably be a Markov-chain model.
If we adjust the parameters to the top 10% and say that the atrocities were 10% more likely to happen if this condition is met, this implies we’re living in a world that comes about with probability (p/P(Dark triad|Atrocity))^3, where p is the probability that the atrocity would have occurred without Hitler, Stalin and Mao having dark triad traits. The interpretation of P(Dark triad|Atrocity) is: what is the probability that a leader has dark triad traits, given that they’ve committed an atrocity? If you take p as 0.25 and P(Dark triad|Atrocity) as 0.75, this means we’re living in a (1/3)^3 = 1/27 world, which is much more reasonable. But this makes the intervention look much less good.
Edit: the maths in this section is wrong, because I modelled a 10% probability increase of p as 1.1*p rather than p having an elasticity of 0.1 with respect to the resources put into the intervention, or something like that. I will edit this later.
Excluding 10% of the population from political power seems like a big ask. If the intervention reduced the probability of someone with dark triad traits coming to power (in a system where they could commit an atrocity) by 10%, which seems ambitious to me, this reduces the probability of an atrocity by 1% (if the above model is correct). Given that this requires excluding 10% of the population from political power, the chance of which ever being implemented I’d put at a generous 10%, the EV of the intervention is reducing the probability of an atrocity by 0.1%. This would increase if the intervention could be used multiple times, though, which seems likely.
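The naive arithmetic above can be made explicit. This is just a sketch of the post’s own numbers (the model itself is the one the edits flag as dubious); note that (0.25/0.75)^3 works out to 1/27:

```python
# All three leaders independently in the top 1% of dark-triad traits:
p_top1 = (1 / 100) ** 3
print(p_top1)  # ~1e-06, i.e. about 1 in a million

# Adjusted version: p = P(atrocity occurs without a dark-triad leader),
# conditioned on three observed atrocities:
p = 0.25
p_dark_given_atrocity = 0.75
world_prob = (p / p_dark_given_atrocity) ** 3  # (1/3)^3 = 1/27 ~ 0.037

# Intervention EV: a 10% cut in P(dark-triad leader in power), times the
# assumed 10% extra atrocity risk from such a leader, gives a 1% cut in
# atrocity probability; times a (generous) 10% chance the exclusion
# policy is ever adopted:
atrocity_reduction = 0.10 * 0.10        # 1%
ev_reduction = atrocity_reduction * 0.10  # 0.1%
print(world_prob, ev_reduction)
```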
I’m currently doing research on this! The big, big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least, this is what I get from the raw Health Survey for England data lol.
I think China is basically in a similar situation to Prussia/Germany from 1848 to 1914. The revolutions of 1848 were unsuccessful in both Prussia and the independent South German states, but they gave the aristocratic elites one hell of a fright. The formal institutions of government didn’t change very much, nor did who was running the show: in Prussia and then Germany, the aristocratic-military Junker class. They still put people they didn’t like in prison sometimes and still had kings with a large amount of formal power. However, they liberalised pretty spectacularly in lots of ways: trade unions were unbanned and the SPD (a Marxist party at the time) grew to be the largest party in Germany, contract law was made equal between employers and workers, and a market economy was allowed to flourish independently of the state while the old organisation of guilds and the vestiges of feudalism were allowed to die.
To see how dramatic this change was, one can look at the state of Prussian agriculture before and after 1848. Prior to 1848, agriculture was still in important ways governed by the conservative mode of economic organisation: production, exchange and consumption were decided by what tradition dictated, insulated from market forces by tariffs, and dominated by old aristocratic families. After 1848, Prussian agriculture was allowed to become part of the market economy and come to be dominated by bourgeois men who ran their farms to make a profit and hired and fired workers as they pleased, with the market dictating the price of grain. It is hard to overstate how different this is from how agriculture was organised in, say, 1830.
I think China is doing something pretty similar now. 20 years ago, individuals’ lives were controlled in lots of ways by their work units. Your factory unit provided you with your job, your house, your pension and your healthcare, and it was controlled by the party. This is now not the case. People move freely between jobs (that’s mostly, but not entirely, true), regional newspapers report on government failures, and people bring lawsuits against big, powerful companies, and sometimes they win.
Prussia/Germany was able to achieve growth at the frontier after 1848, and I think it’s plausible China does the same. Basically, I think both governments are acting something like monopolists would in a contestable market. From the outside it looks uncompetitive, and like the monopolist should be extracting big rents, but actually they’re keeping prices low because they’re shit scared that someone’s going to come and take their place if they start trying to get monopoly profits.
Now, having said all that, the Chinese economy has some big structural problems that look like classic extractive-institutions problems. The two biggest, to me at least, look like the urban-rural divide and the massive amount of infrastructure spending fuelled by local government finances based on land prices. The Hukou system increases the cost of individuals moving from one administrative district to another by making it extremely difficult to access public services. This has created an underclass of poorly educated, low-productivity migrants in the big cities who’ve left their children back home, where they go to low-quality schools and have the poor life chances associated with not being raised by one’s parents. China also has the classic authoritarian problem of being really good at producing loads of infrastructure and then producing way too much of it relative to what’s needed. The political-economy reason behind this in China’s case is that big infrastructure projects offer opportunities for graft and make regional GDP numbers look good.