Blog at The Good Blog https://thegoodblog.substack.com/
Nathan_Barnard
I think I mostly disagree with this post.
I think Michael Webb would be an example of someone who did pretty abstract work (are ideas, in general, getting harder to find?) at a relatively junior level (as a PhD student), but then, because his work was impressive and rigorous, became very senior in the British government and at DeepMind.
Tamay Besiroglu’s MPhil thesis on ideas getting harder to find in ML should, I think, be counted as strategy research by a junior person, but it has been important in feeding into various Epoch papers and the Davidson takeoff-speeds model. I expect the Epoch papers and the takeoff model to be very impactful. Tamay is now deputy director of Epoch.
My guess is that it’s easier to do bad strategy research than it is to get really good at a niche but important thing, but I think it’s very plausible that strategy research is the better expected-value decision, provided you can make it legibly impressive and it is good research. It seems plausible that doing independent strategy research that one isn’t aiming to publish in a journal is particularly bad, since it doesn’t provide good career capital, there isn’t good mentorship or feedback, and there’s no clear path to impact.
I would guess that economists are unusually well-suited to strategy research because it can often be published in journals, which is legibly impressive and so good career capital, and because the type of economics strategy research one does is either empirical, and so has good feedback loops, or is model-based but drawn from economic theory, and so much more structured than typical theory would be. I think this latter type of research can clearly be impactful—for instance, Racing to the Precipice is a pure game theory paper but informs much of the conversation about avoiding race dynamics. Economics is also generally respected within government, and economists are often hired as economists, which is unusual amongst the social sciences.
My training is as an economist, and it’s plausible to me that work in political science, law, and political philosophy would also be influential, but I have less knowledge of these areas.
I don’t want to overemphasise my disagreement—I think lots of people should become experts in very specific things—but I think this post is mostly an argument against doing bad strategy research that doesn’t gain career capital. I expect that doing strategy research at an organisation experienced at doing good and/or legibly impressive research, e.g. in academia, mostly solves this problem.
A final point: I think this post underrates the long-run influence of ideas on government and policy. The neoliberals of the 60s and 70s are a well-known example, but so are Jane Jacobs’s influence on US urban planning, the influence of legal and literary theory on the modern American left, and the importance of Silent Spring for the US environmental movement. Research in information economics has been important in designing healthcare systems, e.g. the mandate to buy health insurance under the ACA. The EU’s cap-and-trade scheme is another idea that came quite directly from pretty abstract research.
This is a different kind of influence from proposing or implementing a specific policy in government—which is a very important kind of influence—but I suspect that over the long run it is more important (though I hold this with weak confidence, and I don’t think it’s especially cruxy).
I essentially agree with the basic point of this post—and think it was a great post!
I have what feel like nitpicks about the specific story you told, and I’m somewhat confused about how much they matter. My guess is that they actually amount to a counterargument to the point being made in the post, and imply that trapped priors are less of a problem than the example used in the post would suggest.
I think the broadly libertarian view and the Scandinavian-style social democracy view are much more similar than this post gives them credit for. In particular, they agree on the crucial importance of liberal democracy in preventing elites (in the 19th century, traditional agricultural elites) from using the state to engage in rent-seeking. I remember reading a list of demands of the German Social Democratic Party in the 1870s (before it had moderated) that read as a list of liberal democratic demands—the secret ballot, free speech, expansion of the power of the democratically elected Reichstag, etc. These two strands of modern liberal thought also agreed on a liberal epistemology that should be used to try to systematically improve society from a broadly utilitarian perspective—the London School of Economics was founded by four Fabian Society members to further this aim!
I think this cashes out in the Effective Samaritans and the libertarian side of EA (although the libertarian side of EA is pretty unusually libertarian) pursuing pretty similar projects when trying to use non-randomista means for development. For instance, my guess is that both would support increasing state capacity in low-income countries to improve the basic nightwatchman functions of the state, reducing corruption, protecting liberties and the integrity of elections, and removing regulations that represent elite rent-seeking. Of course there’ll be some differences in emphasis—the Effective Samaritans might have a particular theory of change around using unions to coordinate labour to push for political change—but these seem relatively minor compared to the core things both agree are important. Bryan Caplan and Robin Hanson are genuinely unusually libertarian even amongst broadly free-market economists, but typically both utilitarian-motivated libertarians and social democrats would be interested in building at least a basic welfare state in low-income countries.
I think we actually see this convergence in practice between liberal social democrats and broadly utilitarian libertarians in the broadly unified policy agendas of Ezra Klein’s abundance agenda and lots of EA/Rationalist-adjacent libertarians: a focus on making it easier to build houses in highly productive cities, reducing barriers to immigration to rich countries, increasing public funding of R&D, and improving state capacity, particularly around extremely ambitious projects like Operation Warp Speed.
I’m sceptical that there are substantial benefits to generating AI safety research ideas from gender diversity. I haven’t read the literature here, but my prior on these types of interventions is that the effect size is small.
Regardless, I think Athena is good for the same reasons Lewis put forward in his comment—the evidence that women are excluded from male-dominated work environments seems strong, and it’s very important that we get as many talented researchers into AI safety as possible. This seems especially likely to be a problem in the AIS community, where anecdotal claims of difficulties from unwanted romantic/sexual advances are common.
I think the claims about intellectual benefits from gender diversity haven’t been subjected to sufficient scrutiny because they’re convenient to believe. For this kind of claim I would need to see high-quality causal inference research, and I haven’t seen it; the article linked doesn’t cite such research. The linked NatGeo article doesn’t seem to me to bring relevant evidence to bear on the question. I completely buy that having more women in the life sciences leads to better medical treatment for women, but the causal mechanism at work there doesn’t seem like it would apply to AI safety research.
I think people should be very careful about promoting earning to give in light of this. It still seems true that, because capital is much more unequally distributed than income, if you’re earning to give you should be doing so by trying to increase the value of equity you hold in firms rather than by working a high-paying job. Wealth also seems to be distributed according to a power law, which also pushes towards a strategy of being extremely ambitious if one is earning to give.
I think it would be very bad if people who could otherwise do high-impact direct work switched to earning to give in investment banking, consulting or corporate law as a result of this. EA funding has not declined to the point of an immediate crisis where relatively small amounts of money from high-paying jobs are needed to keep the EA movement going—Dustin is worth somewhere between 5 and 10 billion dollars, and Founders Pledge has $8.5bn committed (although substantially less than 100% of this will go to the highest-impact things).
I’d be very surprised if this burnt your ability to speak with EAs.
I’m obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.
If this is the case—which I think is also supported by the success of well-funded groups with full- or part-time organisers—and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.
The relevant comparison is then between the value of the marginal student recruited and malaria nets etc.
I think China is basically in a similar situation to Prussia/Germany from 1848 to 1914. The revolutions of 1848 were unsuccessful in both Prussia and the independent South German states, but they gave the aristocratic elites one hell of a fright. The formal institutions of government didn’t change very much, nor did who was running the show—in Prussia and then Germany, the aristocratic-military Junker class. They still put people they didn’t like in prison sometimes and still had kings with a large amount of formal power. However, they liberalised pretty spectacularly in lots of ways—for instance, trade unions were unbanned and the SPD (a Marxist party at the time) grew to be the largest party in Germany, contract law was made equal between employers and workers, a market economy was allowed to flourish independently of the state, and the old organisations of guilds and the vestiges of feudalism were allowed to die.
To see how dramatic this change was, one can look at the state of Prussian agriculture before and after 1848. Prior to 1848, agriculture was still in important ways governed by the conservative mode of economic organisation—production, exchange and consumption were decided by what tradition dictated, insulated from market forces by tariffs, and dominated by old aristocratic families. After 1848, Prussian agriculture was allowed to become part of the market economy: it became dominated by bourgeois men who ran their farms to make a profit and hired and fired workers as they pleased, and the market dictated the price of grain. It is hard to overstate how different this is from how agriculture was organised in, say, 1830.
I think China is doing something pretty similar now. Twenty years ago, individuals’ lives were controlled in lots of ways by their work units. Your factory unit provided your job, your house, your pension and your healthcare, and it was controlled by the party. This is no longer the case. People move freely between jobs (mostly, though not entirely), regional newspapers report on government failures, and people bring lawsuits against big powerful companies—and sometimes they win.
Prussia/Germany was able to achieve growth at the frontier after 1848, and I think it’s plausible China does the same. Basically, I think both governments are acting something like monopolists would in a contestable market. From the outside it looks uncompetitive, and like the monopolist should be extracting big rents, but actually they’re keeping prices low because they’re shit scared that someone’s going to come and take their place if they start trying to get monopoly profits.
Now, having said all that, the Chinese economy has some big structural problems that look like classic extractive-institutions problems. The two biggest, to me at least, look like the urban-rural divide and the massive amount of infrastructure spending fueled by local government financing based on land values. The Hukou system increases the cost of individuals moving from one administrative district to another by making it extremely difficult to access public services. This has created an underclass of poorly educated, low-productivity migrants in the big cities who’ve left their children back home, where they go to low-quality schools and have the poor life chances associated with not being raised by one’s parents. China also has the classic authoritarian problem of being really good at producing loads of infrastructure and then producing way too much of it relative to what’s needed. The political-economy reason for this in China’s case is that big infrastructure projects offer opportunities for graft and make regional GDP numbers look good.
I’m going through this right now. There have clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I could study with less friction and get better grades.
This is fantastic
I did the summer fellowship last year and found it extremely useful in getting research experience, having space to think about x-risk questions with others who were also interested in these questions, and making very valuable connections. I also found the fellowship very enjoyable.
Is there anyone doing research from an EA perspective on the impact of Nigeria becoming a great power by the end of this century? Nigeria is projected to be the 3rd largest country in the world by 2100 and appears to potentially be experiencing exponential growth in GDP per capita. I’m not claiming that Nigeria being a great power in 2100 is a likely outcome, but nor does it seem impossible. It isn’t clear to me that Nigeria has dramatically worse institutions than India, and I expect India to be a great power by 2100. It seems like it’d be really valuable for someone to do some work on this, given it seems really neglected.
Bentham would be proud
I think this is correct and EA thinks about neglectedness wrong. I’ve been meaning to formalise this for a while and will do that now.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who’ll be doing the copy editing
Maybe this isn’t something people on the forum do, but it is something I’ve heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing, potentially quite core to their identity, and that can feel quite isolating. A suggestion I’ve heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren’t EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.
I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they’re at McKinsey. I think it’s unclear a) whether going to McKinsey is among the best things for a student to do, and b) how much less likely an EA student is to go to McKinsey. I think it’s pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.
I think using Bayesian regret misses a number of important things.
It’s somewhat unclear whether “utility” here means utility in the sense of a function that represents preference relations as real numbers, or utility in the axiological sense. If it’s the former, I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things, like capital punishment.
If it means it in the axiological sense, then I don’t think we have strong reason to believe that how people vote will be closely related to it, and I think we have reason to believe it will differ systematically. This also makes it vulnerable to some people having terrible outcomes.
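For readers unfamiliar with the metric being criticised: Bayesian regret is usually estimated by Monte Carlo simulation, as the average gap between the social utility of the best candidate and that of the candidate a voting method actually elects. Here is a minimal sketch for honest plurality voting, assuming voters have random independent utilities; all names and parameters are illustrative, not from any particular paper.

```python
import random

def bayesian_regret(n_voters=99, n_candidates=5, n_trials=2000, seed=0):
    """Estimate Bayesian regret for honest plurality voting: the average
    gap between the total utility of the socially best candidate and the
    total utility of the candidate actually elected."""
    rng = random.Random(seed)
    total_regret = 0.0
    for _ in range(n_trials):
        # Each voter assigns an independent random utility to each candidate.
        utils = [[rng.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        # Honest plurality: each voter votes for their top candidate.
        votes = [max(range(n_candidates), key=lambda c: u[c]) for u in utils]
        winner = max(range(n_candidates), key=votes.count)
        # Social utility of each candidate is the sum over voters.
        social = [sum(u[c] for u in utils) for c in range(n_candidates)]
        total_regret += max(social) - social[winner]
    return total_regret / n_trials
```

Note that the simulation bakes in exactly the assumptions being questioned above: it takes voters’ utilities as fixed, interpersonally comparable, and honestly reflected in their votes, and it counts no externalities on anyone outside the electorate.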
Lots of what I’m worried about with elected leaders is negative externalities. For instance, quite plausibly the main reasons Trump was bad were his opposition to climate action and his rejection of democratic norms. The former mostly harms people in other countries and future generations, and the latter mostly future generations (and probably people in other countries more than Americans, although that’s not obviously true).
It also doesn’t account for the dynamic effects of parties changing their platforms. My claim is that the Overton window is real and important.
I think that having strong political parties which the electoral system protects is good for stopping these things in rich democracies, because I think the gatekeepers will systematically support the system that put them in power. I also think the set of policies the elite support is better in the axiological sense than those supported by the voting population. The catch here is that the US has weak political parties that are supported by the electoral system.
This is great, Matt! I think I’d also be interested in work trying to estimate the effect sizes of this stuff, as well as research on optimal design.
I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community’s beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It’s very pertinent whether Eliezer is systematically wrong and overconfident because, insofar as there’s some level of deferral to Eliezer on AI questions within the EA community (which I think there clearly is), it implies that most EAs should reduce their credence in Eliezer’s AI views.