Is the point when models hit a length of time on the x-axis of the graph meant to represent the point where models can do all tasks of that length that a normal knowledge worker could perform on a computer? The vast majority of knowledge worker tasks of that length? At least one task of that length? Some particular important subset of tasks of that length?
Morally, I am impressed that you are doing something that is in many ways socially awkward and uncomfortable because you think it is right.
BUT
I strongly object to you citing the Metaculus AGI question as significant evidence of AGI by 2030. I do not think that when people forecast that question, they are necessarily forecasting when AGI, as commonly understood or in the sense that is directly relevant to X-risk, will arrive. Yes, the title of the question mentions AGI. But if you look at the resolution criteria, all an AI model has to do in order to resolve the question ‘yes’ is pass a couple of benchmarks involving coding and general knowledge, put together a complicated model car, and imitate a human convincingly. None of that constitutes being AGI in the sense of “can replace any human knowledge worker in any job”. For one thing, it doesn’t involve any task that is carried out over a time span of days or weeks, but we know that memory and coherence over long time scales are things current models seem to be relatively bad at, compared to passing exam-style benchmarks. It also doesn’t include any component that tests the ability of models to learn new tasks at human-like speed, which, again, seems to be an issue with current models. Now, maybe despite all this, it’s actually the case that any model that can pass the benchmark will in fact be AGI in the sense of “can permanently replace almost any human knowledge worker”, or at least will obviously be only 1-2 years of normal research progress away from that. But that is a highly substantive assumption in my view.
I know this is only one piece of evidence you cite, and maybe it isn’t actually a significant driver of your timelines, but I still think it should have been left out.
Yes. (Though I’m not saying this will happen, just that it could, and that is more significant than a short delay.)
The more task lengths the 80% threshold has to run through before it gets to task lengths we’d regard as AGI-complete, though, the more different the tasks at the end of the sequence are from those at the beginning, and therefore the more likely it is that the doubling trend will break down somewhere along the sequence. That seems to me like the main significance of titotal’s point, not the time gained if we just assume the current 80% doubling trend will continue right to the end of the line. Plausibly 30-second to minute-long tasks are more different from weeks-long tasks than 15-minute tasks are.
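For a rough sense of how many doublings that involves, here is a back-of-the-envelope sketch; the figures (a current 80% horizon of ~15 minutes, a “week-long” task of ~40 working hours, and a ~7-month doubling time) are placeholder assumptions of mine, not numbers from the post:

```latex
% Doublings needed to go from a 15-minute horizon to a 40-hour (roughly week-long) task,
% under the placeholder assumptions stated above.
\[
n \;=\; \log_2\!\left(\frac{40 \times 60\ \text{min}}{15\ \text{min}}\right) \;=\; \log_2(160) \;\approx\; 7.3
\]
% At an assumed 7-month doubling time, that is roughly 7.3 * 7 = 51 months over which
% the trend has to keep holding across increasingly dissimilar tasks.
```

The exact numbers don’t matter much; the point is that each extra doubling the trend has to survive is another chance for it to break.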
The total view is not the only view on which future good lives starting has moral value. You can also get that conclusion if you believe in (amongst other things):
-Maximizing average utility across all people who ever live, in which case future people coming into existence is good if their level of well-being is above the mean level of well-being of the people before them (formalised briefly after this list).
-A view on which adding happy lives gets less and less valuable the more happy people have lived, but the value never reaches zero. (Possibly helpful for avoiding the repugnant conclusion.)
-A view like the previous one on which both the total amount of utility and how fairly it is distributed matter, so that more utility is always in itself better and adding happy people is always intrinsically good, but a population with less total utility and a fairer distribution can sometimes be better than a population with more utility distributed less fairly.
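As a minimal formalisation of the first bullet (my own notation, not from the original comment): with n existing people whose mean well-being is m and a prospective new person whose well-being is w,

```latex
% Adding the new person raises the average exactly when their well-being exceeds the current mean.
\[
\frac{n m + w}{n + 1} \;>\; m \quad\Longleftrightarrow\quad w \;>\; m
\]
```

so on the average view, creating the new person is good precisely when w exceeds the existing mean, which is the condition stated above.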
This isn’t just nitpicking: the total view is extreme in various ways that the mere idea that happy people coming into existence is good is not.
Also, even if you reject the view that creating happy people is intrinsically valuable, you might want to ensure there are happy people in the future just to satisfy the preferences of current people, most of whom probably have at least some desire for happy descendants of at least one of their family, their culture, or humanity as a whole, although it is true that this won’t get you the view that preventing extinction is astronomically valuable.
I genuinely have no idea.
One thing to worry about here is deception. All else being equal, the fact that something deceives people is generally a reason against doing it, and trying to ease people in gently can be a special case of that, because it deceives them about the beliefs you hold. It also might stop you yourself from getting useful information, since if you only introduce your more unusual and radical commitments to people who’ve already been convinced by your more mainstream ones, you are missing out on criticism of the radical commitments from the people most opposed to them.
This sort of thing has been an issue with EA historically: people have accused EA leaders (fairly or not) of leading with their beliefs about global poverty to give the impression that that is what they (the leader) and EA are really all about, when actually what the leader really cares about is a bunch of much more controversial things: AI safety, longtermism, or niche animal welfare stuff like shrimp welfare.
I’m not saying this means no one should ever introduce people to radical ideas gently; I think it can be reasonable. Just that this is worth keeping in mind.
What makes it leftist? If anything my immediate reaction is that abundance is in some sense right-coded in that it’s about unleashing markets. Maybe more British right and pre-Trump American right than current right.
(Mostly a nitpick, as I don’t want Open Phil doing centrist or libertarian or centre-right-coded things rather than what’s most effective either, and I strong-upvoted your comment.)
Why use automation as your reference class for AI and not various other (human) groups of intelligent agents, though? And if you use the latter, co-operation is common historically, but so are war and imperialism.
I don’t think the issue here is actually about whether all science grants should go only to actual scientific work. Suppose that a small amount of the grant had been spent on getting children interested in science in a completely non-woke way that had nothing to do with race or gender. I highly doubt that either the administration or you would regard that as automatically and obviously horrendously inappropriate. The objection is to stuff targeted at women and minorities in particular, not to a non-zero amount of science spending being used to get kids interested in science. Describing it as just being about spending science grants only on science is a disingenuous way of making the admin’s position sound more commonsense and apolitical than it actually is.
I agree with the very narrow point that flagging grants that mention some minor woke spending while mostly being about something else is not a sign of the AI generating false positives when asked to search for wokeness. Indeed, I already said in my first comment that the flagged material was indeed “woke” in some sense.
Yeah, I agree with the more general point.
“Labeling that grant as woke because it puts like 2% of its total funds towards a K-12 outreach program seems like a mistake to me.”
It’s a mistake if it’s meant to show the grant is bad, and I suspect that Larks has political views that I would very strongly disagree with, but I think it does successfully make the narrow point that the data about NSF grants does not show that an AI designed to identify pro-woke or pro-Hamas language will be bad at doing so.
It’s not clearly bad. Its badness depends on what the training is like, and on what your views are around a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers. It is clearly woke in a descriptive, non-pejorative sense, but that’s not the same thing as clearly bad.
EDIT: For example, here is one very obvious way of justifying some sort of “get girls into science” spending that is totally compatible with centre-right meritocratic classical liberalism and isn’t in any sense obviously discriminatory against boys. Suppose girls who are in fact capable of growing up to do science and engineering just systematically underestimate their capacity to do those things. Then “propaganda” aimed at increasing the confidence of those girls specifically is a totally sane and reasonable response. It might not in fact be the correct response: maybe there is no way to change things, maybe the money is better spent elsewhere, etc. But it’s not mad, and it’s not discriminatory in any obvious sense, unless anything targeted only at a particular demographic subgroup is automatically discriminatory, which is at best a defensible position, not an obvious one. I don’t know if smart girls are in fact underconfident in this way, but it wouldn’t particularly surprise me.
Unfair: he/she did not propose speaking in vibes ourselves; he/she merely argued that this is how many other people will process things.
Condoms are a classic public health measure because they prevent STDs, apart from the benefits of giving people control over their fertility.
Obviously rationalists have contributed a lot to EA, and of the early adopters they probably started with views closest to where the big orgs are now (i.e. AI risk as the number one problem). But there have always been non-rationalist EAs. When I first took the GWWC pledge in 2012, I was only vaguely aware that rationalism/LW existed. As far as I can tell, none of Toby Ord, Will MacAskill, Holden Karnofsky or Elie Hassenfeld identified as rationalists when EA was first being set up, and they seem the best candidates for “founders of EA”, especially Toby (since he was working on GWWC before he met Will, if I recall what I’ve read about the history correctly). Not that there weren’t strong connections to the rationalist community right from the beginning: Bostrom was always a big influence, and he had known Eliezer Yudkowsky for years before even the embryonic period of EA. But it’s definitely wrong in my view to see EA as just an offshoot of rationalism. (I am a bit biased about this, I admit, because I am an Oxford philosophy PhD, and although I wasn’t involved, I was in grad school when a lot of the EA stuff was starting up.)
“The vast majority of conservatives do not draw a distinction between USAID and foreign aid in general.” Not sure I’d go this far, though I do think it is relatively easy to get many elite conservatives angry if they think EAs or anyone else is suggesting they personally are obligated to give to charity. My sense is that what most conservatives object to is public US government money being spent to help foreigners, and they don’t really care about other people doing private charity. I know that on twitter there are a bunch of bitter far-right Trump-supporting racists who think helping Black people not die is automatically bad (“dysgenic”), but I highly doubt they are representative of the supporters of a major party in a country where as recently as 2021, 94% of people said they approved of interracial marriage: https://news.gallup.com/poll/354638/approval-interracial-marriage-new-high.aspx My vague memory is also that US conservatives tend to be more charitable on average than liberals, mostly because they give to their churches.
(Having said that, people who read this forum who think liberals just unfairly malign conservatives as racists in general should look at the data from that Gallup poll and re-evaluate. Interracial marriage had under 50% support as late as the early 90s. That is within my lifetime, even though I’m under 40. As late as around the last year of the Bush administration (by which time I had nearly finished my undergraduate degree), 1 in 5 Americans opposed interracial marriage. By far the most plausible interpretation of this is that many conservatives were very racist even relatively recently*.)
*Yes I know some Black people probably disapproved of it as well, but given that Blacks are a fairly small % of the US population, results of a national poll are likely driven by views among whites.
Having read it again, all I can see them saying about aid and wokeness is that aid is at risk of being perceived as woke. That’s not a claim about exactly how the causation works as far as I can tell.
It’s relevant because if people’s opposition to woke is driven by racism or dislike of leftist-coded things or groups, that will currently also drive opposition to foreign aid, which is meant to help Black people and is broadly (centre) left coded*. (There are of course old-style Bush II type conservatives who both hate the left and like foreign aid, so this sort of polarization is not inevitable at the individual level, but it does happen.)
*Obviously there are lots of aid critics as you go further left who think it is just an instrument of US imperialism etc. And some centrists and centre-left people are aid critics too, of course.
“And fundamentally opposition to wokism is motivated by wanting to treat all people equally regardless of race or sex”
I think this is true of a lot of public opposition to wokeism: plenty of liberals, socialists and libertarians with very universalist cosmopolitan moral views find a lot of woke stuff annoying, plenty of working-class people of colour are not that woke on race, and lots of moderate conservatives believe in equality of this sort. Many people in all these groups genuinely express opposition to various woke ideas based on a genuine belief in colourblindness and its gender equivalent, and even if that sort of view is somehow mistaken, it is very annoying and unfair when very woke people pretend that it is always just a mask for bigotry.
But it absolutely is not true of all opposition to woke stuff, or all but a tiny minority:
Some people are genuinely openly racist, sexist and homophobic, in the sense that they will admit to being these things. If you go and actually read the infamous “neoreactionaries”, you will find them very openly attacking the very idea of “equality”. They are a tiny group, but they do have the ear of some powerful people: definitely Peter Thiel, probably J.D. Vance (https://www.nytimes.com/2025/01/18/magazine/curtis-yarvin-interview.html).
But in addition, very many ordinary American Christians believe that men in some sense have authority/leadership over women, but would sincerely (and sometimes accurately) deny feeling hostile to women. For example, the largest Protestant denomination in the United States is the Southern Baptist Convention, and here’s the NYT reporting on them banning women from leadership within the organization even more thoroughly than they already were, all of 2 years ago: https://www.nytimes.com/2023/06/14/us/southern-baptist-women-pastors-ouster.html There are 13 million Southern Baptists, which isn’t a huge share of the US population, but many other conservative Protestant denominations also forbid women from serving in leadership positions, and there are a lot of conservative Protestants overall, and some Catholics, and officially the Catholic Church itself shares this view. Of course, unlike the previous group, almost all of these people will claim that men and women in some sense have equal value. But almost all woke people who openly hate on white men will also claim to believe everyone has equal value, and develop elaborate theories about why their seemingly anti-white-male views are actually totally compatible with that. If you don’t believe the latter, I wouldn’t believe this group either when they say that men being “the head of the household” is somehow compatible with the good, proper kind of equality. (Note that it’s not primarily the sincerity of that belief I am skeptical of, just its accuracy.)
As for sexuality, around 29% of Americans still oppose same-sex marriage: https://news.gallup.com/poll/1651/gay-lesbian-rights.aspx Around a quarter think having gay sex/being gay is immoral: https://www.statista.com/statistics/225968/americans-moral-stance-towards-gay-or-lesbian-relations/
More generally, outgroup bias is a ubiquitous feature of human cognition. People can have various groups that wokeness presents itself as protecting as their outgroup, and some of those people will then oppose wokeness as a result of that bias. This is actually a pretty weak claim, compatible with the idea that woke or liberal people have equal or even greater levels of outgroup bias than conservatives. And it means that even a lot of people who sincerely claim to hold egalitarian views are motivated to oppose wokeness at least partially because of outgroup bias. (Just as some American liberals who are not white men and claim to be in some sense egalitarian in fact have dislike of white men as a significant motivation behind their political views: https://www.bbc.com/news/world-us-canada-45052534 There are obviously people like Jeong on the right. Not a random sample, but go on twitter and you’ll see dozens of them.)
Literally all of these factions/types of person on the right have reasons to oppose wokeness that are not a preference for colourblindness and equality of opportunity (the last group may of course also genuinely be aggravated by open woke attacks on those things; it’s not an either/or). Since there are lots of these people, and they are generally interested enough in politics to care about wokeness in the first place, there is no reason whatsoever to think they are not well represented in the population of “people who oppose wokeness”. The idea that no one really opposes wokeness except because they believe in a particular centre-right version of colourblind equality of opportunity both fails to take account of what the official, publicly stated beliefs of many people on the right actually are, and fails to apply very normal levels of everyday skepticism to the stated motivations of (other) anti-woke people who endorse colourblindness.
“I don’t think that, for a given person, existing can be better or worse than not existing. ”
Presumably, even given this, you wouldn’t create a person who would spend their entire life in terrible agony, begging for death. If that can be a bad thing to do even though existing can’t be worse than not existing, then why can’t it be a good thing to create happy people, even though existing can’t be better than not existing?