i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular, whether it’s how people continue to have values/cares after ontological crises like losing belief in a god, or losing a close other whom one was dedicated to protecting.)
that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do than just dying and donating: earning to give, direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
This does seem like a good explanation of what happened. It does imply that I had motivated reasoning, though, which probably casts some doubt on those values/beliefs being epistemically well-grounded.
i see, thanks for explaining!
Sorry for the delayed response.
These words are very kind. Thank you.