Developing my worldview. Interested in meta-ethics, epistemics, psychology, AI safety, and AI strategy.
Jack R
I’d be curious to see how many people each of these companies employs + the % of employees who are EAs.
[My first comment was written without reading the rest of your comment. This is in reply to the rest.]
Re: whether a company adds intrinsic value: I agree it isn’t necessarily counterfactually good, but that’s sort of the point of a heuristic. You can most likely think of cases where each of these heuristics fails; by prescribing a heuristic, I don’t mean that it always holds, just that using it rather than not tends, on average, to lead to better outcomes.
“Serial entrepreneur” also seems to be a decent heuristic.
I haven’t thought about it deeply, but my main thought here was that founders get the plurality of credit for the output of a company, partly because I just intuitively believe this, and partly because, apparently, not many people found things. This is an empirical claim, and it could be false, e.g. in worlds where everyone tries to be a founder and companies never grow, but my guess is that the EA community is not in that world. So this heuristic tracks (to some degree) high counterfactual impact/neglectedness.
This heuristic is meant as a way of finding good opportunities to learn (which is a way to invest in yourself and improve your future impact); it’s not meant to be perfect.
I’m still not very convinced of your original point, though. When I simulate myself becoming non-vegan, I don’t imagine this counterfactually causing me to lose my concern for animals, nor does it seem like it would harm my epistemics (though I’m not sure I trust my inner sim here). If anything, it seems like going non-vegan would help my epistemics: in my case, being vegan wastes enough time that it is harmful to future generations for me to stay vegan, and by continuing to be vegan I am choosing to ignore that fact.
it would make me deeply sad and upset
That makes sense, yeah. And I could see this being costly enough that it’s best to continue avoiding meat.
No; when I wrote it, I meant to direct it at anyone involved in the comments discussion. I probably should have made that clearer in the comment. I also probably should have read all of the comments before commenting (e.g. are you referring to some comment thread that it seemed like I was replying to?), but I am time-limited.
Also, for more context: I wrote this comment because I was concerned that bottom-line/motivated reasoning was causing people to argue for veganism with the sorts of arguments they don’t apply elsewhere, and I felt compelled to prompt anyone in this discussion with the sort of calculation I thought longtermists generally considered best to use when choosing their actions.
I’m not very convinced of your second point (though I could be; I’m curious to hear why it feels true for you). I don’t currently see why you endorse the bolded words rather than: “it seems harder to see the importance of future beings or think correctly about the badness of existential risk while wasting time eating non-meat”
It feels like a universally compelling argument; or at least, I don’t see where you think the argument should stop applying on the spectrum between something like “it seems hard to think correctly about x-risk without having a career in it” and “it seems hard to think correctly about the importance of all sentient beings while squashing dust mites every time you sleep.”
ETA: I imagine you wrote the bolded words because they feel true to you, i.e. that eating meat might cause you to value-drift or have worse epistemics in certain ways, such that it’s worth staying vegan. I’m curious what explicable arguments that feeling (if you have it) might be tracking (e.g. in case they cause me to stay vegan).
I’m not an independent researcher, so this advice is probably less trustworthy than others’, but I am currently on somewhat of an independent research stint working on ELK, and I’ve been annoyed that motivation is sometimes hard to conjure.
I’ve been thinking about what causes motivation (e.g. reflecting on various anecdata from my life), and I’ve also just begun tracking my time practically to the minute, in the hope that this will prompt me to reflect on the sequence of stimuli, actions, and feelings I have throughout the day/week and deduce any tractable levers on my own motivation. It seems too early to tell whether the time tracking will be fruitful, though; we will see.
An example of how “reflecting on the sequence of stimuli, actions, and feelings” could be helpful: today, I hypothesized that I was much more productive on two recent plane rides than usual because I was away from people and restricted in action/motion. So I tried getting on a train. I noticed I didn’t want to work on ELK because I was anxious, and hypothesized that my brain still wanted to pay attention to the things I had been doing before boarding, and that, maybe, an additional reason I am productive on planes is that security lines give me time to reset my brain. I then tried resting for 20 minutes to see whether my anxiety would go away; unfortunately it didn’t, though I went on to think of more testable hypotheses and decided to lower my caffeine dosage (I had had about 120 mg that morning, and caffeine seems to cause anxiety).
Correct me if I am wrong, but it seems like the notion of existence you are using here is either sharp or would be sharp if you had sufficient time to reflect on and refine a formalization (e.g. the formalization might be of the form “the set of physical systems A count as existing humans and everything else doesn’t”). For example, your valence/non-existence diagram has a sharp line.
I’m asking because I think the main reason I don’t hold person-affecting views is that sharpness doesn’t seem like a necessary assumption about my morality,* and it is more complex than at least one alternative: view all matter as potentially mattering, because, for example, there is always the potential to shape that matter into a human living a good life. Viewing personhood like this also seems more accurate/practical to me (e.g. it feels like being more of a physicalist about personhood in this way would let you make more progress on questions related to digital sentience, though this claim is unsubstantiated).
Do you have any thoughts on how to weigh “choose the morality that uses simpler-seeming assumptions” vs. “choose the morality that matches my immediate feelings about what it means to improve the world”? I’m unsure, though I currently lean toward the former.
*Maybe this is where you disagree, since not having sharpness means that reproducing in the future, when life isn’t sucky, can be good.
What are some plausible stories for how being vegan makes the far future non-trivially better? My bar here is something like “me being vegan is 1/1000th as impactful as working on a biosecurity project for my entire career,” so, e.g., I claim that being vegan wastes something like 0.1% of EA’s time (which is an underestimate for me at least: I often spend tens of counterfactual minutes searching for meatless food; for reference, 1/1000th of waking hours over seven days is about seven minutes).
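(A quick back-of-the-envelope for that last figure, assuming roughly 16 waking hours per day, which is my own assumption rather than something stated above:

$$\frac{1}{1000} \times 16\ \tfrac{\text{hours}}{\text{day}} \times 7\ \text{days} \times 60\ \tfrac{\text{minutes}}{\text{hour}} \approx 6.7\ \text{minutes.})$$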
Is this story the reason you are vegan?
Thank you!
The first system I thought of: think about your goal and break it into bite-sized sub-tasks (e.g. completable in <30 minutes), writing each on a sticky note. While doing each sub-task, put its sticky note on your monitor to remind yourself of what you are supposed to be doing. If you realize that your goal decomposition was wrong (e.g. you discover something you need to do that you weren’t aware of when you originally decomposed your goal), stop what you are doing and add the new sub-task(s) to your pile of sticky notes. Ask yourself whether each new sub-task is actually necessary; if it isn’t, throw away the sticky note and don’t do that sub-task.
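In case it helps, here is a minimal sketch of that workflow as code; this is just my illustration of the system above, and every name in it (SubTask, Goal, the example task descriptions) is hypothetical.

```python
# Illustrative sketch of the sticky-note system: decompose a goal into
# sub-tasks, surface the current one, and re-check necessity whenever the
# decomposition turns out to be wrong.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubTask:
    description: str      # what goes on the sticky note
    done: bool = False

@dataclass
class Goal:
    name: str
    subtasks: List[SubTask] = field(default_factory=list)

    def add_subtask(self, description: str, necessary: bool = True) -> None:
        # When the original decomposition was wrong, add the new sub-task,
        # but only if, on reflection, it is actually necessary.
        if necessary:
            self.subtasks.append(SubTask(description))

    def current(self) -> Optional[SubTask]:
        # The sticky note that belongs on the monitor right now.
        return next((t for t in self.subtasks if not t.done), None)

goal = Goal("Write up ELK notes")
goal.add_subtask("Re-read the relevant report section")
goal.add_subtask("Draft a one-page summary")
task = goal.current()
if task:
    print(f"On the monitor: {task.description}")
```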
This is minor, but the first two Metaculus questions are about aging rather than AGI.
Thanks for this!
Curious whether anyone else would be a fan of the community starting to use the following titling for this type of post, to emphasize to ourselves that we have an impoverished understanding of the way the world works:
“X is tractable and neglected, and it might be important”
Build career capital relevant for things like software engineering or machine learning engineering for AI safety.
Random, but a piece of advice I’ve heard is that career capital for these things is only useful for getting your foot in the door (e.g. getting a coding test), and then your actual performance (rather than your resume) is what ends up getting you the job or not. If you think you can succeed at this, I think it would almost certainly be better to directly optimize for reaching a performance level that’s good enough for the AI safety job you want, rather than spending a few years software engineering (which might still be useful, but would probably be a less optimal way to spend your time). I recommend reaching out to any AI safety orgs you hope to work at to confirm whether this is the case, in case you can gain a few additional years of impact.
ETA: That said, ignoring personal fit, probably: AI safety paths where you skip the SWE ~= community building path >> software engineering at a start-up.

Also, if you want to chat about Lightcone and/or Redwood, I’m doing work trials for both of their generalist roles (currently at Redwood, switching to Lightcone soon); feel free to DM me for my Calendly :)
Good to know—thanks Bill!
This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar.
A suspicious conclusion coming from a young altruist! (sarcasm)
I think I meant “analogous” in the sense that I can then see how statements involving the defined word clearly translate to statements about how to make decisions. [ETA: I agree that this was underspecified and perhaps nonsensical.]
FFF is new, so that shouldn’t be a surprise.