Thanks for this post. It’s really interesting to hear your story and analysis, and I’m really glad you’ve recently been much happier with how things are going!
When reading, I thought of some other posts/collections that readers might find interesting if they want to dive deeper on certain threads. I'll list them below relevant quotes from your post.
> I prioritised being cooperative and loyal to the EA community over any other concrete goal to have an impact. I think that was wrong, or at least wrong without a very concrete plan backing it up.

On this, posts tagged Cooperation & Coordination may be relevant.

> I put too much weight on what other people thought I should be doing, and wish I had developed stronger internal beliefs. Because I wanted to cooperate, I considered a nebulous concept of ‘the EA community’ the relevant authority for decision-making. Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.

On this, posts tagged Epistemic Humility may be relevant.

> I did not think enough about how messages can be distorted once they travel through the community and how this message might not have been the one 80,000 Hours had intended.

On this, The fidelity model of spreading ideas may be relevant. (Also the less prominent and more authored-by-me Memetic downside risks: How ideas can evolve and cause harm.)

> Similar things can be said about the risks newly started projects could possibly entail. [...] My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

On this, this collection of sources related to downside risks/accidental harm may be relevant.
Thanks, so useful to list specific things people could read to follow up!