Developing my worldview. Interested in meta-ethics, epistemics, psychology, AI safety, and AI strategy.
Jack R
Because of Evan’s comment, I think that the signaling consideration here is another example of the following pattern:
Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e., of failing to act with integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).
The other primary example of this I can think of is veganism, with its signaling benefits (and usually unrecognized costs).
A solution: when you find yourself saying “X will put off audience Y,” ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”
I’ve been taking a break from the EA community recently, and part of my reasoning has been to search for a project/job/etc. that I would have very high “traction” on, e.g. the sort of thing that I would gladly spend 80+ hours per week working on and think about in the shower.
So one heuristic for leaving and exploring could be: “if you don’t feel like you’ve found something you could have high traction on and excel at, and you haven’t spent at least X months searching for such a thing, consider spending time searching.”
I’m still not very convinced of your original point, though: when I simulate myself becoming non-vegan, I don’t imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics, though I’m not sure I trust my inner sim here). If anything, it seems like going non-vegan would help my epistemics: in my case, being vegan wastes enough time that my staying vegan is net harmful for future generations, and by continuing to be vegan I am choosing to ignore that fact.
I’d be curious to see how many people each of these companies employs, plus the % of employees who are EAs.
Notably (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial but less truth-seeking, or who have a weak understanding of the content that their group covers.
Do you feel that you’d rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest an improvement in only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?
FWIW, Chris didn’t say what you seem to be claiming he said
Not sure, but it feels like maybe being targeted multiple times by a large corporation (e.g. Pepsi) is less annoying than being targeted by a more niche thing
I really like your drawings in section 2 -- they convey the idea surprisingly succinctly.
I might make it clearer that your bullet points are what you recommend people not do. I was skimming, and at first was close to taking away the opposite of what you intended.
e.g. starting from P(X) = 0.8, I may think that in a week I will, most of the time, have notched this forecast slightly upwards, and, less of the time, have notched it further downwards, and this averages out to E[P(X) next week] = 0.8.
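To make the averaging concrete (with purely illustrative numbers, not taken from the original comment): suppose that 75% of the time I expect to have notched the forecast up to 0.85, and the other 25% of the time to have notched it down to 0.65. Then

E[P(X) next week] = 0.75 × 0.85 + 0.25 × 0.65 = 0.6375 + 0.1625 = 0.8,

which matches the current forecast.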
I wish you had said this in the BLUF—it is the key insight, and the one that made me go from “Greg sounds totally wrong” to “Ohhh, he is totally right”
ETA: you did actually say this, but you said it in less simple language, which is why I missed it
This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar.
A suspicious conclusion coming from a young altruist! (sarcasm)
Of course feel free not to share, but I’d be curious to see a photo of the inside of the office! Partly I am curious because I imagine that how nice a place it is (and e.g. whether there is a fridge) could make a big difference re: how much people tend to hang out there.
If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation?
I’m asking because I suspect that other people in the community won’t actually do this, and because you are maybe one of the best-positioned people to do it, since you seem interested in it.
SPOILER: My predictions for the mean answers from each org. The first number is for Q2, the second is for Q1 (EDIT: originally had the order of the questions wrong):
OpenAI: 15%, 11%
FHI: 11%, 7%
DeepMind: 8%, 6%
CHAI/Berkeley: 18%, 15%
MIRI: 60%, 50%
Open Philanthropy: 8%, 6%
My primary reaction to this was “ah man, I hope this person doesn’t inadvertently annoy important people while telling them that AI safety is important, hurting the reputation of AI safety/longtermism/EA, etc.”
Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)
Yeah, I had to look this up
each additional doubling will solve a similar fraction of the problem, in expectation
Aren’t you assuming the conclusion here?
I have seen little evidence that FTX Future Fund (FFF) or EA Infrastructure Fund (EAIF) have lowered their standards for mainline grants
FFF is new, so that shouldn’t be a surprise.
Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick