I work on the 1-on-1 team at 80,000 Hours talking to people about their careers; the opinions I’ve shared here (and will share in the future) are my own.
alexrjl
Know what you’re optimising for
Have you read the transcripts from Vael Gates’s structured interviews with people working in AI about safety? They seem to have done something pretty close to what you’re asking.
I saw many bits of this discussion on Twitter and found them interesting, but missed more than one of the threads posted here. It was a great idea to collect them in one place, thanks for doing so!
Welcome to the forum! What an impressive first post :)
I thought you might find it interesting to know that there’s been quite a lot of discussion of some sleep research over on LessWrong, notably this piece by Alexey Guzey and this response from Natália Mendonça (and >100 comments on each). It might be worth reaching out to either/both of them for thoughts if you were planning on writing more on this (you may already have done so, but it seemed worth flagging just in case).
And it never will be with that attitude! Sounds like you should write a piece on asteroid risk for it!
[half-baked idea]
It seems reasonable to thank someone for the time they spent evaluating a grant, especially if you also do it when the grant is rejected (though this may be harder). I think it is reasonable to thank people for doing their job even (maybe especially?) when you are not the primary beneficiary of that job and thanks is not their reason for doing it.
There are a few organisations that work with high-net-worth individuals to deploy their money, and my guess is that anyone with this kind of capital would be able to speak to all of them fairly easily.
https://www.longview.org/ might be interesting for you to check out, as well as https://founderspledge.com/.
If it’s actually $100B though, that’s bigger than the two biggest EA-adjacent foundations that currently exist, so talking to either of them would be sensible.
https://www.openphilanthropy.org/
https://blog.ftx.com/blog/ftx-foundation/
The comment below is made in a personal capacity, and is speaking about a specific part of the post, without intending to take a view on the broader picture (though I might make a broader comment later if I have time).
Thanks for writing this. I particularly appreciated this example:
“A friend of mine at a different university attended the EA intro fellowship and found it lacking. He tells me that in the first session, foundational arguments were laid out, and he was encouraged to offer criticism. So he did. According to him, the organisers were grateful for the criticism, but didn’t really give him any satisfying replies. They then proceeded to build on the claims about which he remained unconvinced, without ever returning to it or making an effort to find an answer themselves.”
I’m pretty worried about this. I got the impression from the rest of your post that you suspect some of the big-picture problem is community builders focusing too much on what will work to get people into AI safety, but I think this particular failure mode is also a huge issue for people with that aim. The sorts of people who will hear high-level/introductory arguments and immediately be able to come up with sensible responses seem like exactly the sorts of people who have high potential to make progress on alignment. I can’t imagine many more negative signals for bright, curious people than someone who’s meant to be introducing an idea not being able to adequately respond* to an objection they just thought of.
Though, to be fair, ‘hang on a sec, let me just check what my script says about that objection’ might actually be worse...
*To be clear, ‘adequately responding’ doesn’t necessarily mean ‘being so much of an expert that you can just come up with a perfect response on the spot’. It’s fine to not know stuff, and it’s vital to be able to admit when you don’t. Signposting to a previous place the question has been discussed, or knowing that it will be covered later (if e.g. this comes up in a fellowship), both seem useful. It seems important to know enough about common questions, objections, and alternative viewpoints to be able to do this the majority of the time. If it’s genuinely something that the person running the session has never heard, this is exactly the time to demonstrate good epistemics: being willing to seriously engage, ask follow-up questions, and try to double-crux.
It’s flattering to see that this was in part prompted by my post!
Without trying to lean too hard into this tweet, I do actually think it might be worth linking to a Google Doc version of this piece with comment access enabled. Being able to comment on specific parts to ask for clarification and/or to respond to others is pretty useful, especially for something that’s more than a few paragraphs long.
An easy win for hard decisions.
I found the context of the post kind of hard to understand and think the introduction is probably the section most worth editing. In particular, the “there’s an opportunity here” framing seemed to clash a bit with “this was almost funded by a major grantmaker” (emphasis mine).
As ‘almost’ means ‘wasn’t’, it’s not super clear how big an update people should make about this without more context on the funding decision. If OP is making the case for this to happen, I think it might be better to just frame the post more clearly as “this is why I think you should found this thing, and I’ll connect you to people who are excited to fund it if you think you can”.
I had similar thoughts, discussed here after I tweeted about this post and somebody replied mentioning this comment.
(Apologies for creating a circular link loop, as my tweet links to this post, which now has a comment linking to my tweet)
[Question] What examples are there of (science) fiction predicting something strange/bad, which then happened?
Sounds right to me! I’m reading Worth the Candle at the moment :)
I’d be keen to hear how you’re defining the genre, especially when the author isn’t obviously a member of the community. I loved Worm and read it a couple of years ago, at least a year before I was aware rational fiction was a thing, and don’t recall thinking “wow, this seems really rationalist” so much as just “this is fun, words go brrrrrrrr”.
“If this is too time-consuming for the current FTX advisers, hire some staff”
Hiring is an extremely labour- and time-intensive process, especially if the position you’re hiring for requires great judgement. I think responding to a concern about whether something is a good use of staff time with ‘just hire more staff’ is pretty poor form, and given the context of the rest of the post it wouldn’t be unreasonable to respond to it with ‘do you want to post a BOTEC comparing the cost of those extra hires you think we should make to the harms you’re claiming?’
(not an organiser, but I live in London)
I’ve been recommended https://www.expresstest.co.uk/, and also Biogroup in Shoreditch. Both offer same-day results.
Don’t people have the option to take it as a lump sum? If that is the case, presumably if they are willing to game the system to get the money they will not be particularly persuaded by a clear instruction to “only spend it on education”.
I’m pretty clearly Longtermist by most reasonable definitions of that phrase, and I think there are much better reasons to care about GHW than PR; for example, the fact that future people matter doesn’t stop me thinking that people dying of malaria is bad. I think this is also generally true of my Longtermist friends.
Thanks for the tip! I haven’t read it but having taken a quick look now at the maths education example (particularly enticing given my background), I agree that this seems closely related. Many of the ideas I have around things like this were partially formed in response to examples extremely like that one.