I’m trying to optimise something like “expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me”.
If this is true, then I think you would be an EA. But from what you wrote it seems that you have a relatively large status/glory term in your philosophical objective function (as opposed to your revealed objective function, which for most people gets corrupted by personal stuff). I think the question determining your core philosophy would be which term you consider primary. For example if you view them as a means to an end of helping people and are willing to reject seeking them if someone convinces you they are significantly reducing your EV then that would reconcile the “A” part of EA.
A piece of advice I think younger people tend to need to hear is that you should be more willing to accept that “X is something I like and admire, and I am also not X” without having to then worry about your exact relationship to X or redefining X to include yourself (or looking for a different label Y). You are allowed to be aligned with EA but not be an EA and you might find this idea freeing (or I might be fighting the wrong fight here).
I plan to seek status/glory through making the world a better place.
That is, my desire for status/prestige/impact/glory is interpreted through an effective-altruism-like framework.
“I want to move the world” transformed into “I want to make the world much better”.
“I want to have a large impact” became “I want to have a large impact on creating a brighter future”.
I joined the rationalist community at a really impressionable stage. My desire for impact/prestige/status, etc. persisted, but it was directed at making the world better.
I think the question determining your core philosophy would be which term you consider primary.
If the earlier statements don’t answer which term is primary, then the question feels incoherent/inapplicable to me. I don’t want to have a large negative impact, and my desire for impact/prestige cannot be divorced from the context of “a much brighter world”.
For example if you view them as a means to an end of helping people and are willing to reject seeking them if someone convinces you they are significantly reducing your EV then that would reconcile the “A” part of EA.
My EV just is personally making the world a brighter place.
I don’t think this framing is coherent for me either. I don’t view status/glory as a means to the end of helping people.
But I don’t see how seeking status/glory by making the world a brighter place could possibly reduce my expected value.
It feels incoherent/inapplicable.
A piece of advice I think younger people tend to need to hear is that you should be more willing to accept that “X is something I like and admire, and I am also not X” without having to then worry about your exact relationship to X or redefining X to include yourself (or looking for a different label Y). You are allowed to be aligned with EA but not be an EA and you might find this idea freeing (or I might be fighting the wrong fight here).
This is true, and if I’m not an EA, I’ll have to accept that. But it’s not yet clear to me that I’m merely “very EA adjacent” as opposed to “fully EA”. And I think I do want to be an EA.
I might modify my values in that direction (which is why I said I’m not “yet” vegan, as opposed to simply not vegan).