I live for a high disagree-to-upvote ratio
Thank you—I am a big believer in the power of collective action & have organised successful union drives & pay disputes in the past. I don’t have a lot to add to your breakdown; I think this is a very promising area for EA to consider for almost every cause area (ex. would love to see a similar breakdown for current/future efforts in frontier AI labs).
Just strategically, I think the most promising insider activism campaign would be to partner with an existing union in a country with strong union protections; this way, you can leverage those protections to prevent retaliation against employee activists, as they can credibly claim they were organising for the union. I think, frankly, this rules out the U.S. as a starting point—you would want to build groundswell in places where the host companies can’t cut them off at the knees (the recent dismissals at Google are a strong reminder that if employees protest something the company has a stake in, they’ll be fired at will with no consequences).
Furthermore, unions have a lot of existing connections & skills in developing these campaigns, and, as you’ve noted, regularly participate in employee activism directly or otherwise have a presence in other social movements. This comes with the trade-off of potentially alienating some employees (unions are almost exclusively left-wing and have established reputations), but I don’t think there are many people (outside of the U.S.) who would be put off by a union and would’ve otherwise joined an employee activist drive.
Can you give a sense of what proportion? Should we expect ‘some’ to mean ≤10% or something more significant?
I misinterpreted “but low if you think AI could start to automate a large fraction of jobs before 2030”. Thanks for clarifying :)
I don’t get it. How are consumers supposed to pay trillions of dollars if AI is going to automate a large fraction of their jobs?
FWIW on timelines:
June 13, 2022: Critiques paper (link 1)
May 9, 2023: Language models explain language models paper (link 2)
November 17, 2023: Altman’s removal & reinstatement as OpenAI CEO
February 15, 2024: William_S resigns
March 8, 2024: Altman is reinstated to the OpenAI board
March 12, 2024: Transformer debugger is open-sourced
April 2024: Cullen O’Keefe departs (via LinkedIn)
April 11, 2024: Leopold Aschenbrenner & Pavel Izmailov fired for leaking information
April 18, 2024: Users notice Daniel Kokotajlo has resigned
Without reading too much into it, there’s a similar amount of negativity about the state of EA as there is a lack of confidence in its future. That suggests to me that there are a lot of people who think EA should be reformed to survive (rather than ‘it’ll dwindle and that’s fine’ or ‘I’m unhappy with it but it’ll be okay’)?
If anything, EA now has a strong public (admittedly critical) reputation for longtermist beliefs. I wouldn’t be surprised if some people have joined in order to pursue AI alignment and got confused when they found out more than half of the donations go to GHD & animal welfare.
Orthogonal to your post, that particular policy position seems out of character for him. He was very happy to tout Operation Warp Speed as president & encouraged people to get vaccinated (as well as privately being a germaphobe). I wonder what’s motivating this specific statement?
My sense from a very quick skim of the literature is:
There are barely any studies or RCTs on non-dual mindfulness, and certainly not enough to make a conclusion about it having a larger-than-normal effect size[1][2]
The most highly-cited meta-analyses that do split out types of meditation either directly find no significant difference between kinds, or claim they don’t have enough evidence for a difference in their discussions[1:1][2:1]
The effect size is no better or worse than other psychotherapies
It might be possible to do some special pleading around non-dual mindfulness in particular, but frankly, everyone who has their own flavour of mindfulness does a lot of special pleading around it, so I’m default skeptical despite non-dual being my personal preference.
My sense as an experienced non-dual meditator (~10 years, and having experienced ‘ego death’ before without psychedelics):
I am skeptical that at-will or permanent ego death is possible. By ‘at-will’, I mean with an ease similar to meditating, with effects lasting longer than an acid trip.
I am skeptical that this state would even be desirable; most people that have tried psychedelics aren’t on a constant low dose (despite that having few downsides for people not prone to psychosis).
Even if it is possible and desirable, I am skeptical that there is a path to this kind of enlightenment for every person; it might only be possible for a very small percentage of people, even with the motivation and infinite free time to practice.
I think teaching people mindfulness would be good, but probably no better than teaching them any other kind of therapy. Maybe it’s generally more acceptable because it’s less stigmatised than self-learning CBT. But I’d be really curious to understand what the people who voted yes were thinking, and in particular what they think ‘enlightenment’ is.
On a separate claim, I find it really hard to discount the rough period since ~1800 where a huge amount of new technological development took place in academic or other non-profit contexts (including militaries). When you add pre-production research to that, I think you’d be hard-pressed to find a single world-changing technology since the enlightenment that doesn’t owe a lot of its existence to non-profit research. Am I misunderstanding your claim?
I disagree-voted because the latter sounds like a very extraordinary claim. I know you don’t have the time to go into an essay on this, but do you mind sketching the rough logic?
That’s not falsifiable
Edit: I stand by this; it was a quick way to explain the problems with Jason’s comment. I don’t think we should be too mean to people for not donating (in order to not dissuade them from doing it in the future), but this particular model could be used to excuse basically any behaviour as ‘they might be a potential EA one day’. I don’t think it’s a good defence and wouldn’t want to see it trotted out more often.
This is an extremely rich guy who isn’t donating any of his money. I wouldn’t call him ‘aligned’ at all to EA.
I would also just be careful about taking him at his word. He’s only started talking about this framing recently (I’ve followed him for a while because of a passing interest in Kernel). He may well just be a guy who’s very scared of dying, with an incomprehensible amount of money to spend on it, who’s looking for some admirers.
And OP discusses market socialist systems which allow capital markets but not private capital!
This isn’t a petty distinction. It allows the definer to claim all of the benefits of markets and dodge the more negative effects of private ownership, casting centralised price controls as inherent to anti-capitalist systems. And in the worst cases (not here) it allows people to motte-and-bailey their way out of the devastating effects of wealth inequality by claiming that ‘capitalism’ actually just means markets.
I mention all this because I see this definition a lot in rat-adjacent circles and it frustrates me, because people usually just want to talk about whether disgusting levels of wealth inequality are necessary or even permissible, and then get a non-sequitur defence of markets in response.
To make it concrete, the OP’s friends are interested in economic inequality. This is absolutely an inherent consequence of private capital ownership, and therefore capitalism. In a debate, then, you’d want to start defending private capital ownership rather than markets. So I think the ‘talking past each other’ arises from a faulty definition, but just not the one that the OP identified.
A meta thing that frustrates me here is I haven’t seen much talk about incentive structures. The obvious retort to negative anecdotal evidence is the anecdotal evidence Will cited about people who had previously expressed concerns yet continued to affiliate with FTX and the FTXFF, but to me, this evidence is completely meaningless because continuing to affiliate with FTX and FTXFF meant a closer proximity to money. Conversely, the people who refused to affiliate with them did so at significant personal & professional cost over that two-year period.
Of course you had a hard time voicing these concerns! Everyone’s salaries depended on them not knowing or disseminating this information! (I am not here to accuse anyone of a cover-up, these things usually happen much less perniciously and much more subconsciously)
FWIW I find the self-indulgence angle annoying when journalists bring it up; Sam could well have been reckless, stupid, and even malicious without wanting to see personal material gain from it. Moreover, I think it leads others to learn the wrong lessons—as you note in your other comment, the fraud was committed by multiple people with seemingly good intentions; we should be looking more at the non-material incentives (reputation, etc.) and enabling factors of recklessness that led them to justify risks in the service of good outcomes (again, as you do below).
G’day Marissa! I’m admittedly not the best-versed in psychiatry specifically, since I’ve focused more on psychotherapy in the past. My general vibe from reading & research I’ve done is that (for pharmacotherapy only, can’t speak to crisis care):
Pharmacotherapy is robustly effective in the short-term with minimal deterioration
It’s no more effective than therapy, and is likely worse than therapy in the long-term
Pharmacology & psychotherapy combined is better than both individually
People might adapt to it, requiring higher and higher doses
We don’t know how it works, nor do we know how depression works (‘chemical imbalance’ is marketing)
There is probably a meaningful difference between common-or-garden depression & anxiety, and all other psychiatric conditions (ex. bipolar, schizophrenia); the latter may require sustained treatment
My personal theory is that drugs are good for preventing people from doing harm to themselves for a short period, and in many cases the causes of the underlying depression go away on their own. But they probably shouldn’t be used to permanently improve someone’s mood, at which point we should focus on improving their environmental conditions and retraining their learned responses to stimuli.
More generally, I haven’t come across many reviews assessing the effectiveness one way or the other in a deliberately unbiased way, though I haven’t looked hard. I think the role split between psychologists and psychiatrists, and the industrial split in funding between the two, is likely to make this research very hard. Anecdotally, I liked Johann Hari’s Lost Connections, which begins with a pop-science assessment of the evidence against psychiatry while remaining balanced enough to describe when it’s valuable, but I wouldn’t call it unbiased.
Thank you! I framed it as a question for this reason ❤️
Based on the timing, how likely is it that this was a partial consequence of Bostrom’s personal controversies?
My union is pretty conservative w/r/t social justice, because it’s the one that covers tech & science (our members tend to hold left-wing opinions, but don’t like stirring the pot). I don’t know how we’d feel about animal welfare, but not many of us work directly in those industries.
To get closer to your point, live animal export is a big issue in Australia, and our dedicated Meat Industry Employees Union has called for a ban on it. So I think the kind of campaign you’re talking about would fit right in here. Their animal welfare policy is so important to them that it’s on the front page of their website. Equally, they’ve worked with the Greens and the Animal Justice Party (both legislatively represented) in the past, and the unions here have close ties to the Labor party (1 of 2 major parties), so political change might be uniquely achievable here—although I doubt the situation is much different in most EU countries.