Researching AI governance at the intersection of political science and economics.
AI researchers, please stop falling for the hereditarian BS of Charles Murray et al.
This lack is one reason (among several) why I haven’t shifted any of my donations toward longtermist causes.
At the cost of some parsimony, perhaps 80,000 Hours could allow users to toggle between a “short-termist” and a “longtermist” ranking of top career paths? The cost of switching rankings seems low enough that nearly anyone inclined to work on a longtermist cause area under the status quo would still do so with this change.
OP just joined the forum and has not provided any reason why, given the strong ties of pronatalism to the far-right, this cause is noticeably more pressing than even adjacent causes such as immigration reform. I’d recommend not engaging.
Not an expert in the area, but the data on national IQ seems shoddy at best and fraudulent at worst. Given the number of potential confounds, I don’t put much stock in cross-country regression analyses, especially when they are run on such poor data.
In principle, any topic is worthy of further study. However, given the costs of information processing and the amount of biased noise written on the topic by the likes of Charles Murray, I would need studies far stronger than ones run on interpolated national IQ values to update my beliefs about the topic’s importance.
I refer to Bob Jacobs’ excellent reply for covering some of my concerns in more depth (and adding many I didn’t know about).
This project sounds intriguing, though I haven’t yet read about its cost-effectiveness on the website. As a side note, if they don’t already, I think it would be useful for Charity Entrepreneurship and other non-profit incubators to make a concerted effort to reach out to people like you in the Global South.
Thanks for your thoughts!
Good question! I’ll need to think more about this, but my initial impression is that regularly surveying developers about AI progress could help by quantifying their uncertainty over the arrival times of particular milestones, which is likely correlated with how they believe expected capabilities investments map onto progress.
That seems right, though it likely depends on how substitutable safety research is across firms.