Developing my worldview. Interested in meta-ethics, epistemics, psychology, AI safety, and AI strategy.
Jack R
[Question] What domains do you wish some EA was an expert in?
Redwood Research is hiring for several roles
Consequentialists (in society) should self-modify to have side constraints
23 career choice heuristics
Community builders should learn product development models
If you ever feel bad about EA social status, try this
Because of Evan’s comment, I think that the signaling consideration here is another example of the following pattern:
Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).
The other primary example of this I can think of is veganism and its signaling benefits (and usually unrecognized costs).
A solution: when you find yourself saying “X will put off audience Y,” ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”
[Question] Ideas for avoiding optimizing the wrong things day-to-day?
I’ve been taking a break from the EA community recently, and part of my reasoning has been to search for a project/job/etc. that I would have very high “traction” on—the sort of thing where I gladly spend 80+ hours per week working on it, and I think about it in the shower.
So one heuristic for leaving and exploring could be: “if you don’t feel like you’ve found something you could have high traction on and excel at, and you haven’t spent at least X months searching for such a thing, consider spending time searching.”
I’m still not very convinced of your original point, though—when I simulate myself becoming non-vegan, I don’t imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics, though I’m not sure I trust my inner sim here). If anything, it seems like going non-vegan would help my epistemics: in my case, being vegan wastes enough time that continuing to be vegan is net harmful for future generations, and by remaining vegan I am choosing to ignore that fact.
I’d be curious to see how many people each of these companies employs + the % of employees who are EAs
Notably (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial but are less truth-seeking, or who have a weak understanding of the content that their group covers.
Do you feel that you’d rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest improvement on only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?
[Question] Have you noticed costs of being anticipatory?
FWIW, Chris didn’t say what you seem to be claiming he said
Not sure, but it feels like maybe being targeted multiple times by a large corporation (e.g. Pepsi) is less annoying than being targeted by a more niche thing
I really like your drawings in section 2 -- they convey the idea surprisingly succinctly
[Linkpost] Can lab-grown brains become conscious?
[Link-post] Beware of Other-Optimizing
I might make it clearer that your bullet points are what you recommend people not do. I was skimming and at first was close to taking away the opposite of what you intended.
Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick