What problem are you trying to solve by recommending against dating within EA?
If it's conflicts of interest, it seems like you'd get more mileage by directly promoting norms around conflicts of interest: disclosing anything that would bias your judgement, and avoiding being a decision-maker in situations where you're conflicted.
As one anecdote, I worked in a typical enough non-EA startup in which multiple coworkers had romantic relationships with each other, and multiple coworkers had strong friendships with each other. In my experience, management decisions were more strongly influenced and biased by friendships than by romantic relationships. Many companies and institutions have cliques and friend networks that try to gain power together, and I do think it makes sense to have strong norms about disclosing that and reducing those conflicts of interest.
On one hand I agree that avoiding conflicts of interest is important; on the other, I think you're approaching it too narrowly if you focus only on romantic/sexual relationships. And I wouldn't bite the bullet of saying one shouldn't have friendships or romantic/sexual relationships within the EA community, as that seems too high a cost to pay.
Though there's a point of diminishing returns to treating every bad grant as a scandal, $500,000 seems non-negligible and worth at least a bit of scandal. And if we make a scandal of every large grant that goes bad, that incentivizes starting with smaller grants for hits-based giving (where possible).
My take on Buck's comment is that he didn't update from this post because it's too high-level and doesn't actually argue for most of its object-level proposals. I have a similar reaction: I judge a lot of the proposals to be pretty bad, and since the post doesn't argue much for them, I don't feel much like arguing against them.
I think Buck was being pretty helpful in saying (what I interpret as) "I would be able to reply more if you argued for your object-level suggestions and engaged more deeply with the suggestions you're bringing".
Overall, I feel I learnt nothing new from the generated answers and could recognize the existing material they drew on. ChatGPT is valuable for coming up with a bunch of stuff fast, but I'm not impressed by the quality itself.
Specifically, of the first 20 examples, I'd say over half are mostly false (i.e. I would not follow the advice, and think there are good reasons not to): 2, 3, 4, 6, 7, 8, 10, 11, 13, 16.
Others are uninteresting.
Just two I found both mostly true and mostly novel ("If you've never had a cold, you're not exposing yourself to enough germs" and "If you've never received a parking ticket, you're not driving in enough unfamiliar places").
I found the Robin Hanson versions mostly uninteresting.
I found the historical accident versions mostly uninteresting.
I’d be interested in an update on this post :)
You explained the difference between strategy and governance as governance being the more specific thing, but I'm not sure it's a good idea to separate and specialize in that way. What good does it do to have governance separated from strategy? Should experts in governance not communicate directly with experts in strategy (e.g. should they only interface through public reports passed from one group to the other)?
It seems to me that governance was already a field thinking about overall strategy as well as specific implementations. I personally think of AI safety as governance, alignment, and field-building, and the current post doesn't give me a reason to update away from that.
I think the information you give on Aurea is out of date. They've closed their form, taken down their initial EA Forum post, and haven't responded to emails in months. In short, they seem to have shut down (though there probably are still individuals working on longtermist issues living there).
~~I personally see them as one of the most egregious failures of a longtermist hub, from the info I was given.~~ (I don't give more detail because I wasn't there and would prefer they write a postmortem themselves.)
EDIT: I've crossed out the second-to-last sentence, as I no longer endorse it and shouldn't have written in that tone in the first place. I didn't have first-hand information and was unnecessarily hostile. I wish instead that I'd simply expressed my confusion about why they disappeared.