Researching AI governance at the intersection of political science and economics.
AI researchers, please stop falling for the hereditarian BS of Charles Murray et al.
This project sounds intriguing, though I haven’t yet read about its cost-effectiveness on the website. As a side note, if they don’t already, I think it would be useful for Charity Entrepreneurship or other non-profit incubators to make a concerted effort to reach out to people like you in the Global South.
Doesn’t seem worth it at that point since markets will start to clear, as in Iran, lowering counterfactual impact. At that point it largely becomes a question of earning to give.
I refer to Bob Jacobs’ excellent reply for covering some of my concerns in more depth (and adding many I didn’t know about).
In principle any topic is worthy of further study. However, given the cost of information processing and amount of biased noise written on the topic by the likes of Charles Murray, I would need studies far stronger than ones done on interpolated national IQ values to update my beliefs about the topic’s importance.
Not an expert in the area, but the data on National IQ seems shoddy at best, fraudulent at worst. Due to the number of potential confounds, I don’t put much stock in cross-country regression analyses, especially when run on such poor data.
OP just joined the forum and has not provided any reason why, given the strong ties of pronatalism to the far-right, this cause is noticeably more pressing than even adjacent causes such as immigration reform. I’d recommend not engaging.
At the cost of some parsimony, perhaps 80,000 Hours could allow users to toggle between a “short-termist” and a “longtermist” ranking of top career paths? The cost of switching rankings seems low enough that ~anyone inclined to work on a longtermist cause area under the status quo would still do so with this change.
Having done some of this modelling myself, I think it’s difficult to pin down the exact outcome of a particular race. Some empirical evidence suggests that winning a patent race leads to more follow-on innovation, while other models, including those fitted to data, suggest that laggards are often more innovative. However, models also suggest that laggards who are quite far behind tend to give up racing entirely.
My tentative conclusion is that the finding you highlight is plausible enough that I’d treat small gaps in innovativeness as roughly equivalent to neck-and-neck races, while larger gaps produce a monopoly-like situation for the race leader. Determining precisely where this cutoff lies, of course, is difficult.
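To make the intuition concrete, here is a toy sketch of that dynamic; the functional form, parameter names, and the `giveup_threshold` cutoff are all illustrative assumptions of mine, not drawn from any specific paper:

```python
# Toy patent-race sketch: a laggard's racing effort as a function of its
# gap to the leader. All functional forms and parameters are assumptions
# chosen purely to illustrate the qualitative pattern discussed above.

def laggard_effort(gap: float, giveup_threshold: float = 5.0,
                   base_effort: float = 1.0) -> float:
    """Effort the laggard exerts: near-full when neck-and-neck,
    declining linearly with the gap, and zero past a give-up threshold."""
    if gap >= giveup_threshold:
        return 0.0  # far-behind laggard drops out of the race entirely
    return base_effort * (1 - gap / giveup_threshold)

# Small gaps behave like neck-and-neck races; large gaps mean dropout.
for gap in range(7):
    print(gap, round(laggard_effort(gap), 2))
```

The single threshold stands in for the cutoff discussed above; in a real model one would estimate it (and the shape of the decline) from data rather than assume linearity.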
This lack is one reason (among several) why I haven’t shifted any of my donations toward longtermist causes.
Thanks for your thoughts!
Good question! I’ll need to think more about this, but my initial impression is that regular surveying of developers about AI progress could help by quantifying their level of uncertainty over the arrival rate of particular milestones, which is likely correlated with how they believe expected capabilities investments map onto progress.
That seems right, though it likely depends upon how substitutable safety research is across firms.
Thanks for sharing. Based on this paper, the paper on the fact that many emergent properties of LLMs are a mirage, and Epoch’s work on data scaling laws, I have greatly revised my estimates of the arrival rate of AI risks and benefits downward.