Former CTO and co-founder of earn-to-give fintech Mast.
Henry Stanley
Thank you for your service.
It's been hard to get more traction on the BCC, which probably requires mobilization on a larger scale than we currently have.
Could you give more detail on this?
I don't see any mention here or in the comments about neglectedness, which seems like the most obvious reason why EA isn't a good fit here. There are enormous, well-funded, long-established ecosystems dedicated to exactly this sort of thing: civil liberties organisations, legal defence funds, democratic governance NGOs, journalism, academic institutions, unions, anti-fascist networks, etc.
I think there's some argument that the EA mindset could be applied to finding tractable interventions here, but ultimately I just think there are more pressing problems that need our attention.
An analogy I read on Substack: if an epidural manufacturer told a government hospital "you're welcome to use our drug so long as you don't use it in any abortions," it would probably be prudent to decline that contract (too much overhead).
This is a great intuition pump.
CEA was aware that Riley had shared the document with people outside HR, even if CEA itself did not share it outside HR.
And it seems then like any confidentiality obligation on HR is expunged, given that Riley shared the document themselves. Or at the very least there's no case for HR failing to act out of a need to keep the document/its author confidential, as they had already shared it widely.
Don't have much to add except that this sounds exceptionally fucked-up and I'm sorry you had to go through it.
I once had a conversation with a friend who felt that Anthropic advancing the AI frontier (despite their explicit commitment not to) was fine because they're "leading from the front" in terms of their ethical stance.
It seems like that might not actually work? Advancing the frontier presumably encourages other labs to compete, and if those labs don't have the same ethical strictures, then leading from the front has no effect except to move the frontier forward faster than it would have otherwise…
(Referencing OpenAI's deal with the Pentagon announced shortly after the Anthropic sanctions)
I don't think this is meaningfully different from previous admins (not sure about autonomous weapons, but certainly mass surveillance of Americans at home has been going on since the 2010s).
Broadly agree but:
The current problem is the lack of good training programs in impact-focused thinking, so it's hard for people with tons of experience and great credentials to get to the required EA-ness stage (impact-focused mindset, landscape familiarity) quickly enough to get the positions on offer, when they join EA.
Aren't we mitigating this with things like MATS and BlueDot et al.? These should be producing useful hires at a high rate, so training isn't the issue it seems to be.
Let me write something up and come back to you.
Broadly, in order of safety it's probably caffeine > modafinil > amphetamines (Vyvanse, Ritalin, Concerta, dexamphetamine etc). But amphetamines are very commonly prescribed for ADHD/narcolepsy (usually with an ECG and occasional blood pressure checks). I think the risk-reward works out very much positively, but obviously I'm eliding a lot of detail.
Great post. Two things come to mind:
-
One way to just be able to do more stuff is to take stimulants. I think there are cases where being on them can dent your intelligence in some subtle ways, but broadly they can drastically increase your ability to do more, work through when you're fatigued, etc. Maybe it's still a sufficiently edgy position that you didn't mention it here, but the absence was interesting. People at college are all taking modafinil for a reason.
-
I worry that some incredibly ambitious people in the EA world have gone on to pursue paths that have actually been harmful. Early employees at the frontier AI labs seem like the obvious example: Anthropic was founded as an "AI safety lab" with commitments not to push the frontier, but they obviously forgot about that along the way, and it seems hard to justify continuing to work there on capabilities imo. I suspect there's a lot of motivated reasoning going on among this group. Perhaps it's a cautionary tale about ambition unmoored from reflection, as other people point out here, or that if your ambition leads to filthy lucre then it's very hard to course-correct later on.
(Agree with the other commenters here that maybe the rate-limiting step isn't just pushing harder but co-ordination, taking more individual risks, etc)
reposted from my comment on the original Substack article
-
Is there a risk of boiling the ocean here?
The "community notes everywhere" proposal seems easy enough to build (I've been hacking away at a Chrome extension version of it). I'm not sure it makes sense to wait for personal computing to change fundamentally before attempting this.
I agree that distribution is an issue, and I'm not sure how to solve it. One approach might be to onboard a core group of users who annotate a specific subset of pages, like the top 20 posts on Hacker News, so that there's some chance of your notes being seen if you're a contributor. But I suppose this relies on getting that rather large core group of users (e.g. HN readers) to start using the product.
Alternatively, you build the thing and hope it gets adopted in some larger way: say, it gets acquired by X if they want to roll out community notes to the whole web.
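For a sense of how small the core of this is: the main fiddly bit is deciding what counts as "the same page" so notes attach consistently. A minimal sketch of that piece (all names, the tracking-param list, and the API endpoint are made up for illustration, not from my actual extension):

```javascript
// Canonicalise a page URL into a stable key, so the same note attaches to a
// page however it was linked (with tracking params, fragments, etc.).
function noteKey(rawUrl) {
  const url = new URL(rawUrl);
  // Strip common tracking parameters so shared links collapse to one key.
  for (const p of ["utm_source", "utm_medium", "utm_campaign", "ref", "fbclid"]) {
    url.searchParams.delete(p);
  }
  url.hash = ""; // fragments don't change the content being annotated
  url.searchParams.sort(); // canonical parameter order
  const qs = url.searchParams.toString();
  return url.origin + url.pathname + (qs ? "?" + qs : "");
}

// A content script would then do something like:
//   fetch(`https://notes.example/api/notes?key=${encodeURIComponent(noteKey(location.href))}`)
//     .then(r => r.json())
//     .then(renderNotesBanner); // renderNotesBanner: hypothetical UI code
```

The hard part, as above, isn't the code — it's getting enough annotators that a random page has a note on it.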
Flock – work in public with friends (beta testers wanted)
You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.
But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX: getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think the same is true for AI timelines. Am I missing the point?
I might be missing the point, but I'm not sure I see the parallels with FTX.
With FTX, EA orgs and the movement more generally relied on the huge amount of funding that was coming down the pipe from FTX Foundation and SBF. When all that money suddenly vanished, a lot of orgs and orgs-to-be were left in the lurch, and the whole thing caused a huge amount of reputational damage.
With the AI bubble popping… I guess some money that would have been donated by e.g. Anthropic early employees disappears? But it's not clear that that money has been "earmarked" in the same way the FTX money was; it's much more speculative and I don't think there are orgs relying on receiving it.
OpenPhil presumably will continue to exist, although it might have less money to disburse if a lot of it is tied up in Meta stock (though I don't know that it is). Life will go on. If anything, slowing down AI timelines will probably be a good thing.
I guess I don't see how EA's future success is contingent on AI being a bubble or not. If it turns out to be a bubble, maybe that's good. If it turns out not to be a bubble, we sure as hell will have wanted to be on the vanguard of figuring out what a post-AGI world looks like and how to make it as good for humanity as possible.
For effect, I would have pulled in a quote from the Reddit thread on akathisia rather than just linking to it.
Akathisia is a inner restlessness that is as far as I know the most extreme form of mental agitation known to man. This can drive the sufferer to suicide [...] My day today consisted of waking up and feeling like I was exploding from my skin, I had a urge that I needed to die to escape. [...] I screamed, hit myself, threw a few things and sobbed. I can't get away from it. My family is the only reason why I'm alive. [...] My CNS is literally on fire and food is the last thing I want. My skin burns, my brain on fire. It's all out survival.
Indeed; seems more like founding to give.
if I had kept at it and pushed harder, maybe the project would have got further… but I don't think I actually wanted to be in that position either!
I think this is a problem with for-profit startups as well. Most of the time they fail. But sometimes they succeed (in the sense of "not failing" rather than breakout success, which is far rarer), and in that case you're stuck with the thing to see it through to an exit.
I enjoyed this, and I miss bumping into you on the stairs at house parties!
Honestly, I kind of hated doing [GTP].
Are you willing to share why you hated it?
April Fools' isn't for a couple more weeks