Unfair: he/she did not propose that we speak in vibes ourselves; he/she merely argued that this is how many other people will process things.
Condoms are a classic public health measure because they prevent STDs, apart from the benefits of giving people control over their fertility.
Obviously rationalists have contributed a lot to EA, and of the early adopters they probably started with views closest to where the big orgs are now (i.e. AI risk as the number one problem). But there have always been non-rationalist EAs. When I first took the GWWC pledge in 2012, I was only vaguely aware that rationalism/LW existed. As far as I can tell none of Toby Ord, Will MacAskill, Holden Karnofsky or Elie Hassenfeld identified as rationalists when EA was first being set up, and they seem the best candidates for “founders of EA”, especially Toby (since he was working on GWWC before he met Will, if I recall what I’ve read about the history correctly). Not that there weren’t strong connections to the rationalist community right from the beginning: Bostrom was always a big influence, and he had known Eliezer Yudkowsky for years before even the embryonic period of EA. But it’s definitely wrong in my view to see EA as just an offshoot of rationalism. (I am a bit biased about this, I admit, because I am an Oxford philosophy PhD, and although I wasn’t involved, I was in grad school when a lot of the EA stuff was starting up.)
“The vast majority of conservatives do not draw a distinction between USAID and foreign aid in general.” Not sure I’d go this far, though I do think it is relatively easy to get many elite conservatives angry if they think EAs or anyone else is suggesting they personally are obligated to give to charity. My sense is that what most conservatives object to is public US government money being spent to help foreigners, and they don’t really care about other people doing private charity. I know that on Twitter there are a bunch of bitter far-right Trump-supporting racists who think helping Black people not die is automatically bad (“dysgenic”), but I highly doubt they are representative of the supporters of a major party in a country where, as recently as 2021, 94% of people said they approved of interracial marriage: https://news.gallup.com/poll/354638/approval-interracial-marriage-new-high.aspx My vague memory is also that US conservatives tend to be more charitable on average than liberals, mostly because they give to their churches.
(Having said that, people who read this forum who think liberals just unfairly malign conservatives as racists in general should look at the data from that Gallup poll and re-evaluate. Interracial marriage had under 50% support as late as the early 90s. That is within my lifetime, even though I’m under 40. As late as around the last year of the Bush administration (by which time I had nearly finished my undergraduate degree), 1 in 5 Americans opposed interracial marriage. By far the most plausible interpretation of this is that many conservatives were very racist even relatively recently*.)
*Yes I know some Black people probably disapproved of it as well, but given that Blacks are a fairly small % of the US population, results of a national poll are likely driven by views among whites.
Having read it again, all I can see them saying about aid and wokeness is that aid is at risk of being perceived as woke. That’s not a claim about exactly how the causation works as far as I can tell.
It’s relevant because if people’s opposition to wokeness is driven by racism or dislike of leftist-coded things or groups, that will currently also drive opposition to foreign aid, which is meant to help Black people and is broadly (centre-)left-coded*. (There are of course old-style Bush II type conservatives who both hate the left and like foreign aid, so this sort of polarization is not inevitable at the individual level, but it does happen.)
*Obviously there are lots of aid critics as you go further left who think it is just an instrument of US imperialism etc. And some centrists and centre-left people are aid critics too, of course.
“And fundamentally opposition to wokism is motivated by wanting to treat all people equally regardless of race or sex”
I think this is true of a lot of public opposition to wokeism: plenty of liberals, socialists and libertarians with very universalist cosmopolitan moral views find a lot of woke stuff annoying, plenty of working-class people of colour are not that woke on race, and lots of moderate conservatives believe in equality of this sort. Many people in all these groups genuinely express opposition to various woke ideas based on a genuine belief in colourblindness and its gender equivalent, and even if that sort of view is somehow mistaken, it is very annoying and unfair when very woke people pretend that it is always just a mask for bigotry.
But it absolutely is not true of all opposition to woke stuff, or all but a tiny minority:
Some people are genuinely openly racist, sexist and homophobic, in the sense that they will admit to being these things. If you go and actually read the infamous “neoreactionaries” you will find them very openly attacking the very idea of “equality”. They are a tiny group, but they do have the ear of some powerful people: definitely Peter Thiel, probably J.D. Vance (https://www.nytimes.com/2025/01/18/magazine/curtis-yarvin-interview.html).
But in addition, very many ordinary American Christians believe that men in some sense have authority/leadership over women, but would sincerely (and sometimes accurately) deny feeling hostile to women. For example, the largest Protestant denomination in the United States is the Southern Baptist Convention, and here’s the NYT reporting on them making women even more banned from leadership within the organization than they already were, all of 2 years ago: https://www.nytimes.com/2023/06/14/us/southern-baptist-women-pastors-ouster.html There are 13 million Southern Baptists, which isn’t a huge share of the US population, but many other conservative Protestant denominations also forbid women to serve in leadership positions, and there are a lot of conservative Protestants overall; some Catholics, and officially the Catholic Church itself, share this view. Of course, unlike the previous group, almost all of these people will claim that men and women in some sense have equal value. But almost all woke people who openly hate on white men will also claim to believe everyone has equal value, and develop elaborate theories about why their seemingly anti-white-male views are actually totally compatible with that. If you don’t believe the latter, I wouldn’t believe this group either that men being “the head of the household” is somehow compatible with the good, proper kind of equality. (Note that it’s not primarily the sincerity of that belief I am skeptical of, just its accuracy.)
As for sexuality, around 29% of Americans still oppose same-sex marriage: https://news.gallup.com/poll/1651/gay-lesbian-rights.aspx Around a quarter think having gay sex/being gay is immoral: https://www.statista.com/statistics/225968/americans-moral-stance-towards-gay-or-lesbian-relations/
More generally, outgroup bias is a ubiquitous feature of human cognition. People can have, as their outgroup, various groups that wokeness presents itself as protecting, and because of outgroup bias some of those people will then oppose wokeness. This is actually a pretty weak claim, compatible with the idea that woke or liberal people have equal or even greater levels of outgroup bias than conservatives. And it means that even a lot of people who sincerely claim to hold egalitarian views are motivated to oppose wokeness at least partially because of outgroup bias. (Just as some American liberals who are not white men and claim to be in some sense egalitarian in fact have dislike of white men as a significant motivation behind their political views: https://www.bbc.com/news/world-us-canada-45052534 There are obviously people like Jeong on the right. Not a random sample, but go on Twitter and you’ll see dozens of them.)
Literally all of these factions/types of person on the right have reasons to oppose wokeness that are not a preference for colourblindness and equality of opportunity (the last group may of course also genuinely be aggravated by open woke attacks on those things, yes; it’s not an either/or). Since there are lots of these people, and they are generally interested enough in politics to care about wokeness in the first place, there is no reason whatsoever to think they are not well represented in the population of “people who oppose wokeness”. The idea that no one really opposes wokeness except because they believe in a particular centre-right version of colourblind equality of opportunity both fails to take account of what the official, publicly stated beliefs of many people on the right actually are, and also fails to apply very normal levels of everyday skepticism to the stated motivations of (other) anti-woke people who endorse colourblindness.
Some people are going to say that destroying nature is a positive impact of new humans, because they think wild animals have net negative lives.
That’s my read of the evidence as well, but I haven’t examined it closely.
Have you checked that it was uniquely strong? Just off the top of my head, Taiwan and (especially) South Korea both grew very rapidly too, under “right-wing” dictatorships and then (at least with SK, less sure about when Taiwan stopped growing rapidly) under democracy as well. I don’t dispute the general point that the CCP’s developmental record is very impressive, but that’s still importantly different from “their system achieved things no one has ever achieved under another system”.
Well, at a technical level the first is a conditional probability and the second is an unconditional probability of a conjunction. So the first is to be read as “the probability that alignment is achieved, conditional on humanity creating a spacefaring civilization”, whilst the second is “the probability that the following happens: alignment is solved and humanity creates a spacefaring civilization”. If you think of probability as a space, where the likelihood of an outcome = the proportion of the space it takes up, then:
- the first is the proportion of the region of probability space taken up by “humanity creates a space-faring civilization” in which alignment also occurs.
- the second is the proportion of the whole of probability space in which both alignment occurs and humanity creates a space-faring civilization.
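To make this concrete, here is a toy worked example (the numbers are invented purely for illustration and are not anyone's actual estimates):

\[
P(\text{Alignment} \mid \text{SFC}) = \frac{P(\text{Alignment} \wedge \text{SFC})}{P(\text{SFC})}
\]

So if, say, P(SFC) = 0.5 and P(Alignment ∧ SFC) = 0.2, the conditional probability is 0.2 / 0.5 = 0.4. An intervention could then raise the joint probability to 0.25 while also raising P(SFC) to 0.7, in which case the conditional probability falls to 0.25 / 0.7 ≈ 0.36. That is the shape of the worry being described: more of the good conjunction overall, but a worse ratio within the futures where humanity does create a space-faring civilization.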
But yes, knowing that does not automatically bring real understanding of what’s going on. Or at least for me it doesn’t. Probably the whole idea being expressed would be better written up much more informally, focusing on a concrete story of how particular actions taken by people concerned with alignment might surprisingly be bad or suboptimal.
No, I still don’t understand.
Fair enough; I actually think it is very hard to discover causal relationships in any social scientific domain. I still strongly suspect that dictatorial governments are bad, however. (It’s almost impossible to get data on the effects of countries that are highly developed by modern standards ceasing to be democracies, because this has almost never happened.)
Unclear what (economic) libertarianism implies about the Trump admin. They will cut taxes, but they might also put up tariffs.
“Some existing AI Safety agendas may increase P(Alignment AND Humanity creates an SFC) while at the same time not increasing as much or even, if unlucky, reducing P(Alignment | Humanity creates an SFC). For example, such agendas may significantly prevent early AIs and AI usages from destroying, at the same time, the potential of Humanity and AIs. “
This is compressing a complicated line of thought into such a small number of words that I find it impossible to understand.
“It is unclear to me whether less democracy would increase or decrease economic growth, which has been very connected to human welfare. So I do not know whether less democracy would increase or decrease human welfare.”
I usually think your posts are very good because you are prepared to honestly and clearly state unpopular beliefs. But this seems a bit glib: economic growth is not the only thing that affects well-being, by any means, and so simply being unsure about how democracy affects it is not a strong case on its own for being unsure whether democracy increases or decreases human well-being. Growth might be the most important thing of course, but if you really are neutral on the effect of democracy on growth, other factors will still determine whether you should think democracy is net beneficial for humans in expectation.
Also, in the particular case of the US, to evaluate whether democracy continuing is a good thing for human well-being, what primarily matters is how democracy shapes up versus the realistic alternatives in the US, not whether democracy is the best possible system in principle, or even the best feasible system in most times and places. It’s not like we are comparing democracy in the US to the Chinese communist system, market anarchism, sortition, or the implementation of the knowledge-based restrictions on the franchise suggested by Jason Brennan in his book Against Democracy. We are comparing it to “on the surface democracy, but really Musk and Trump use the justice department to make it impossible for credible opponents to run against the Republican party for many national offices or against their favoured candidates in crucial Republican primaries, and also Musk can in practice stop any government payment to anyone so long as Trump himself doesn’t prevent him from doing so.” Maybe you think the risk of that is low, but that’s what people are worried about. Maybe you also think that it might be good, because Republican policies might be better for growth and that dominates all other factors, but even then, it’s worth being clear about what you are advocating agnosticism about: not the merits of democracy in the abstract, but the current situation in the US.
“Most articles seem to default to either full embrace of AI companies’ claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate.” Never agreed with anything as strongly in my life. Both these things are bad and we don’t need to choose a side between them. And note that the issue here isn’t about these things being “extreme”. An article that actually tries to make a case for foom by 2027, or “this is all nonsense, it’s just fancy autocomplete and overfitting on meaningless benchmarks”, could easily be excellent. The problem is people not giving reasons for their stances, and either re-writing PR, or just expressing social distaste for Silicon Valley, as a substitute.
Not surprising they are getting rid of the safety people, but getting rid of CHIPS act people seems to me to be evidence in favour of the “genuinely idiotic, rather than Machiavellian geniuses” theory of Trump and Musk. Presumably Trump still wants to be more powerful than China even if he moves away from hawkishness towards making friends. And Musk presumably wants Grok to be better than the best Chinese models. (In Musk’s case of course, it’s possible he actually doesn’t favour getting rid of the CHIPS staff.)
Fair point. I certainly don’t think it is established (or even more than 50% likely) that SBF was purely motivated by narrow personal gain to the exclusion of any real utilitarian convictions at all. But I do think he misrepresented his political convictions.
“how much AGI companies have embedded with the national security state is a crux for the future of the lightcone”
What’s the line of thought here?
I don’t think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is probably unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and the people who run it. (And I doubt that Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever.)
For what it’s worth, I also think this is complicated territory, that there is genuinely a risk of very bad outcomes from China winning an AI race too, and that the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been a lot more keen to intervene, often but not always to bad effect, in other countries’ business.
In fairness, SBF was also secretly a prominent Republican donor, right? Didn’t he basically suggest in the infamous interview with Kelsey Piper that he was essentially cynical about politics and just trying to gain influence with both parties to help advance FTX and Alameda’s interests?
It’s not clearly bad. Its badness depends on what the training is like, and on what your views are around a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers. It is clearly woke in a descriptive, non-pejorative sense, but that’s not the same thing as clearly bad.
EDIT: For example, here is one very obvious way of justifying some sort of “get girls into science” spending that is totally compatible with centre-right meritocratic classical liberalism and isn’t in any sense obviously discriminatory against boys. Suppose girls who are in fact capable of growing up to do science and engineering just systematically underestimate their capacity to do those things. Then “propaganda” aimed at increasing the confidence of those girls specifically is a totally sane and reasonable response. It might not in fact be the correct response: maybe there is no way to change things, maybe the money is better spent elsewhere, etc. But it’s not mad, and it’s not discriminatory in any obvious sense, unless anything targeted only at a particular demographic subgroup is automatically discriminatory, which is at best a defensible position, not an obvious one. I don’t know if smart girls are in fact underconfident in this way, but it wouldn’t particularly surprise me.