I’m curious about your definition of agency, and I wonder whether it’s one shared by other effective altruists. Agency for me, coming not from an EA background although I’m a long-time observer, has little to do with the sort of necessarily self-serving goals that you are now arguing need to be reined in a bit, maybe. Rather, agency is your ability to consider and pursue them at all. From the Stanford Encyclopedia of Philosophy: “...an agent is a being with the capacity to act...” Agency is, for me then, more akin to self-determination than an imperative to pursue something for oneself.
You end your current essay by affirmatively quoting what I see as a quite clearly paternalistic view about how much agency, as you define it, others should basically be allowed to have. I am curious: do you believe that agency is something for you or anyone to control for anyone else, in any capacity? A thing to be regulated? I’d be curious to know whether you believe that for your definition of agency as well as for mine (or the one from the Stanford Encyclopedia of Philosophy).
I am asking these questions neutrally, out of genuine curiosity, to understand your perspective a bit better.
I’m not the author of this post, but I think EAs and rationalists have somewhat co-opted the term “agentic” and infused it with a load of context and implicit assumptions about how an “agentic” person behaves, so that it no longer just means “person with agency”. This meaning is transmitted via conversations with people in this social cluster, as well as through books and educational sessions at camps, retreats, etc.
Often, one of the implicit assumptions is that an “agentic” person is more rational and so pursues their goal more effectively, occasionally acting in socially weird ways if the net effect of their actions seems positive to them.
OK, how interesting. I tend to stick to the academic EA literature and ‘visionary’-type EA books, and when on this forum, generally not these kinds of posts. So I suppose I might not encounter too much of this… appropriation… for lack of a better term. You imply this self-serving quality is also infused into what it means for EAs to be rational? As if the economic definition of ‘rational self-interest’ is the definition of rational? Am I reading that correctly?
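Thanks for the response!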
As NinaR said, ’round these parts the word “agentic” doesn’t imply self-interest. My own gloss of it would be “doesn’t assume someone else is going to take responsibility for a problem, and therefore is more likely to do something about it”. For example, if the kitchen at your workplace has no bin (‘trashcan’), an agentic person might ask the office manager to get one, or even just order in one that they can get cheaply. Or if you see that the world is neglecting to consider the problem of insect welfare, instead of passively hoping that ‘society will get its act together’, you might think about what kinds of actions individuals would need to take for society to get its act together, and consider doing some of those actions.
That’s the definition I would assume an EA would give for what it means to have agency, but from my view it doesn’t mesh with how the OP describes agency, and therefore what is “agentic”, in this post or in the post this one is a self-response to. In that first post, the one this is referencing, the OP gives seven recommendations that are clearly self-interested directives (e.g., “Figure out what you need, figure out who can help you get it, ask them for it”). If karma is an indicator, that first post was well received. What I am getting here, if I try to merge your position and my view of the OP’s position, is that self-interested pursuits are a means to an end, which is a sort of eventual greater good? Like, I can pursue everything “agentic-ly” because my self-interest is virtuous? (This sounds like Objectivism to me, btw.)
Now, I actually think this current post is about rolling back that original post a bit, but only a little. Hence my questions about regulating ‘agency’ in others—to try and get some parameters or lane markers for this concept here.
These aren’t defined topics (agency, agentic) on the EA Forum, btw. This is partly why I am so interested in how people define them, since they seem to be culturally defined here (if in multiple ways), differently than they would be in non-critical philosophy and mainstream economics (in which EA is rooted).
When I read the original post that this one is a response to, I am “reading in” some context or subtext based on the fact that I know the author/blogger is an EA; something like “when giving life advice, I’m doing it to help you with your altruistic goals”. As a result of that assumption, I take writing that looks like ‘tips on how to get more of what you want’ to be mainly justified by being about altruistic things you want.
I don’t fully understand your comment, but I think agency is meant to be a relatively goal-agnostic cognitive tool, similar to e.g. “being truth-seeking” or “good social skills”. Altruism is about which goals you load in, but at least in theory this is orthogonal to how high your ability to achieve your goals is.
That definition aligns more with the traditional, non-critical philosophical and mainstream economic approach to agency, which means there are now basically three definitions of agency in this post and comment thread from EA peeps. I’m glad I asked, because I’ve always just assumed the standard non-critical philosophy and mainstream economics definition (since that’s the origin story of EA too) when reading things here and encountering the term agency, and that definition does not fit at all with the one intimated or literally given by the OP in this post or in the referenced, original post (which was highly karma’d).
“Agency” refers to a range of proactive, ambitious, deliberate, goal-directed traits and habits.
I see that as a definition driven by self-interest, and the original essay confirms this. There’s probably room for debate there, but it’s definitely not a “relatively goal-agnostic cognitive tool” type of definition, for instance.
So, I think I’ve got to stop assuming a shared definition here for this one and, probably more importantly, stop assuming that EAs actually take it under consideration, really (there’d probably be a terms entry if its definition were valuable here, I suppose). I had no idea there were such post-y things in EA. How exciting. Ha!
If I were to guess what the ‘disagreement’ downvotes were picking up on, it would be this:
I see that as a definition driven by self-interest
Whereas to me, all of the adjectives ‘proactive, ambitious, deliberate, goal-directed’ are goal-agnostic, such that whether they end up being selfish or selfless depends entirely on what goal ‘cartridge’ you load into the slot (if you’ll forgive the overly florid metaphor).
I don’t think self-interest is relevant here if you believe that it is possible for an agent to have an altruistic goal.
Also, as with all words, “agentic” will have different meanings in different contexts, and my comment was based on its use when referring to people’s behaviour/psychology, which is not an exact science, so words are not being used in very precise scientific ways :)
It was the OP that injected self-interest into the concept of agency, hence my original question. And I totally agree about the meaning of words varying. None of this is a science at all, in my opinion, just non-critical philosophy and economics and where the two meet and intertwine. I’m just trying to understand how EA and EAs define these things, that’s all.
Thanks so much for your input. I really appreciate it.
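Yup, another commenter is correct in that I am assuming that the goals are altruistic.
Hey, thanks for asking.
On the first point: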
Throughout both of my posts, I’ve been using a nicher definition than the one given by the Stanford Encyclopedia of Philosophy. In my last post, I defined it as “the ability to get what you want across different contexts – the general skill of coming up with ambitious goals [1] and actually achieving them, whatever they are.”
But, as another commenter said, the term (for me at least) is now infused with a lot of community context and implicit assumptions.
I took some of this as assumed knowledge when writing this post, so maybe that was a mistake on my part.
On the second point:
I’m a bit confused by the question. I’m not claiming that there’s an ideal amount of agency or that it should be regulated.
That said, I expect that some types of agency will be implicitly socially regulated. For example, if someone frequently makes requests of others in a community, other people might start to have a higher bar for saying yes. I.e., there might be some social forces pushing in the opposite direction.
I don’t think that this is what you were getting at, but I wanted to add it.