I live for a high disagree-to-upvote ratio
huw
Therapy without a therapist: Why unguided self-help might be a better intervention than guided
FWIW I find the self-indulgence angle annoying when journalists bring it up; it’s reasonable for Sam to have been reckless, stupid, and even malicious without wanting to see personal material gain from it. Moreover, I think it leads others to learn the wrong lessons—as you note in your other comment, the fraud was committed by multiple people with seemingly good intentions; we should be looking more at the non-material incentives (reputation, etc.) and enabling factors of recklessness that led them to justify risks in the service of good outcomes (again, as you do below).
This is an extremely rich guy who isn’t donating any of his money. I wouldn’t call him ‘aligned’ at all to EA.
I would also just be careful about taking him at his word. He’s only started talking about this framing recently (I’ve followed him for a while because of a passing interest in Kernel). He may well just be a guy who’s very scared of dying with an incomprehensible amount of money to spend on it, who’s looking for some admirers.
Self-guided mental health apps aren’t cost-effective… yet
FWIW on timelines:
June 13, 2022: Critiques paper (link 1)
May 9, 2023: Language models explain language models paper (link 2)
November 17, 2023: Altman removal & reinstatement
February 15, 2024: William_S resigns
March 8, 2024: Altman is reinstated to the OpenAI board
March 12, 2024: Transformer debugger is open-sourced
April 2024: Cullen O’Keefe departs (via LinkedIn)
April 11, 2024: Leopold Aschenbrenner & Pavel Izmailov fired for leaking information
April 18, 2024: Users notice Daniel Kokotajlo has resigned
My sense from a very quick skim of the literature is:
There are barely any studies or RCTs on non-dual mindfulness, and certainly not enough to make a conclusion about it having a larger-than-normal effect size[1][2]
The most highly-cited meta-analyses that do split out types of meditation either directly find no significant difference between kinds, or claim in their discussions that they don’t have enough evidence for a difference[1][2]
The effect size is no better or worse than that of other psychotherapies
It might be possible to do some special pleading around non-dual mindfulness in particular, but frankly, everyone who has their own flavour of mindfulness does a lot of special pleading around it, so I’m skeptical by default despite non-dual being my personal preference.
My sense as an experienced non-dual meditator (~10 years, and having experienced ‘ego death’ before without psychedelics):
I am skeptical that at-will or permanent ego death is possible. By ‘at-will’, I mean with an ease similar to meditating, with effects lasting longer than an acid trip.
I am skeptical that this state would even be desirable; most people that have tried psychedelics aren’t on a constant low dose (despite that having few downsides for people not prone to psychosis).
Even if it is possible and desirable, I am skeptical that there is a path to this kind of enlightenment for every person; it might only be achievable for a very small percentage of people, even with the motivation and infinite free time to practice.
I think teaching people mindfulness would be good, but probably no better than teaching them any other kind of therapy. Maybe it’s generally more acceptable because it’s less stigmatised than self-learning CBT. But I’d be really curious to understand what the people who voted yes were thinking, and in particular what they think ‘enlightenment’ is.
Can you give a sense of what proportion? Should we expect ‘some’ to mean ≤10% or something more significant?
I’m wary of EAs performatively self-flagellating and accepting more responsibility for the FTX thing than is warranted (given, e.g., that huge numbers of people with a very direct financial incentive in spotting FTX’s fraud didn’t spot it, so it’s obviously not weird if random EAs failed to spot it).
I don’t think this is about spotting fraud at all. I think the strong case for EA responsibility goes like:
EA promoted earning to give
When the movement largely moved away from it, not enough work was done to create that distance (such that the average person still strongly associates EA with earning to give)
Sam adopted an extremely reckless approach to EV-maximisation that was highly likely, if not guaranteed, to lead to a severe loss in value, regardless of the illegality of that loss
He publicly branded himself with this reckless EV-maximisation approach
He publicly associated himself with the movement, with earning-to-give specifically, and donated lots of money to EA causes to prove it
Many EA causes accepted that money and/or associated themselves with Sam
Critics of Sam’s recklessness did not speak up or were drowned out by other community members
The counterfactuals look something like this:
If EA had strongly distanced themselves from ETG, Sam may not have considered it as a path, or been unable to tie himself to EA
If EA had criticised Sam’s recklessness, this might have moderated it
If EA orgs had conditioned their acceptance of his donations on financial transparency, this might have mitigated the recklessness (and this would have been in their own interests, since they stood to lose money!)
FWIW, I don’t necessarily agree with this, but this is what feels either explicit or implicit in a lot of the EA criticism around this that I’ve read. The focus appears to be explicitly on the causal chain of Sam adopting a reckless flavour of ETG which motivated him to take risks which didn’t pay off, and what EA, which was surely influential over his thinking, could’ve done to prevent it.
You’re right! It’s not that ETG is inherently bad (and frankly, I haven’t seen anyone make this argument), it’s that specific EV-maximising interpretations of ETG cause people to pursue careers that are (1) harmful, (2) net harmful, or (3) too risky to pay off.
Personally, I think FTX was (1) and (3), and ~~unlikely to be (2)~~ probably also (2). I’m not really sure where the bar is, but under any moderately deontological framework (1) is especially concerning, and many of the people EA might want to have a good reputation with believe (1). So that’s roughly the worldview-neutral case for caring about strongly rejecting EV-maximising forms of ETG.
It’s wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.
It feels like you’re arguing for a higher-than-necessary level of harm and suffering in the world, just because a high level of suffering already exists in this context? I can’t see an argument with this structure working anywhere else (and believe me, I think Sam should be punished).
And OP discusses market socialist systems which allow capital markets but not private capital!
This isn’t a petty distinction. It allows the definer to claim all of the benefits of markets and dodge the more negative effects of private ownership, casting centralised price controls as inherent to anti-capitalist systems. And in the worst cases (not here) it allows people to motte-and-bailey their way out of the devastating effects of wealth inequality by claiming that ‘capitalism’ actually just means markets.
I mention all this because I see this definition a lot in rat-adjacent circles and it frustrates me, because people usually just want to talk about why disgusting levels of wealth inequality are necessary or even permissible, and then get a non-sequitur defence of markets in response.
To make it concrete, the OP’s friends are interested in economic inequality. This is absolutely an inherent consequence of private capital ownership, and therefore capitalism. In a debate, then, you’d want to start defending private capital ownership rather than markets. So I think the ‘talking past each other’ arises from a faulty definition, but just not the one that the OP identified.
I haven’t run the numbers myself but I generally assume that FTX’s account-holders were mostly moderately well-off HIC residents (based on roughly imbibed demographics of crypto), and the Future Fund’s beneficiaries are by and large worse off. There were probably some number of people who invested their life savings or were otherwise poor to begin with who were harmed more significantly than the beneficiaries of their money. But on the whole it feels like it was an accidental wealth transfer, and much of that harm will be mitigated if they’re made whole (but admittedly, the make-whole money just comes from crypto speculation that trades on the gullibility of yet more people).
But I’m much less confident in this take; my point is much more that the real harms it caused are worth thinking about.
Orthogonal to your post, that particular policy position seems out of character for him. He was very happy to tout Operation Warp Speed as president & encouraged people to get vaccinated (as well as privately being a germaphobe). I wonder what’s motivating this specific statement?
Based on the timing, how likely is it that this was a partial consequence of Bostrom’s personal controversies?
I’m always surprised to see sheep get lumped in with cows in discussions of farmed animal welfare (e.g. the SSC Adversarial Collaboration). Sure, it’s not a terrible proxy, but sheep are often freer, need to be regularly shorn to avoid overheating, and usually die of natural causes. There are definitely some practices which are awful, but sheep are quite hard to optimise in the same way we’ve done with pigs & chickens, or even cows.
However, we eat them when they’re babies so maybe it swings in the absolute other direction.
I tend to agree with all these points, actually—forgot about the clawbacks & specifically how substantial they were.
Coherence may not even matter that much, I presume that one of Open Philanthropy’s goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don’t personally have incoherent beliefs, attracting the donations of those that do is probably more advantageous than rejecting them.
If anything, EA now has a strong public (admittedly critical) reputation for longtermist beliefs. I wouldn’t be surprised if some people have joined in order to pursue AI alignment and got confused when they found out more than half of the donations go to GHD & animal welfare.
A meta thing that frustrates me here is that I haven’t seen much discussion of incentive structures. The obvious retort to negative anecdotal evidence is the anecdotal evidence Will cited about people who had previously expressed concerns yet continued to affiliate with FTX and the FTXFF, but to me, this evidence is completely meaningless because continuing to affiliate with FTX and FTXFF meant closer proximity to money. As a corollary, the people who refused to affiliate with them did so at significant personal & professional cost for that two-year period.
Of course you had a hard time voicing these concerns! Everyone’s salaries depended on them not knowing or disseminating this information! (I am not here to accuse anyone of a cover-up; these things usually happen much less perniciously and much more subconsciously.)