Well done on public correction! That’s always hard.
It’s key to separate out “social agency” from the rest of the concept, and coining that term makes this post worthwhile on its own. Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial: fixing the thing yourself, thinking for yourself, writing a blog for yourself, taking responsibility for your own growth (including emotional growth, wisdom, patience, and yes chores).
has inside views
I think you mean “has strong inside views which overrule the outside view”. Inside views are innocuous if you simultaneously maintain an “all things considered” view.
Because of a quirk of the instructors and students that landed in our sample, ESPR 2021 went a little too hard on agency. We try to promote agency and wisdom in equal measure, which usually ends up sounding a lot like this post. Got there in the end!
Small nitpick: “social agency” coined by OCB in comments on the original.
Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial
Yeah, the learned helplessness is a weird one. I felt kinda sad about it because I used to really pride myself on being self-sufficient. I agree that the social part of agency should only be a small part—I think I leaned too far into it.
strong inside views which overrule the outside view
Thanks for the inside view correction. Changing that now, and will add that Owen originally coined social agency.
ESPR 2021 went a little too hard on agency
Fwiw I did get a lot of value out of the push for agency at ESPR. Before that, I was too far in the other direction. Eg: I was anxious that others would think I was “entitled” if I asked for anything or just did stuff; felt like I had to ask for permission for things; cared about not upsetting authority figures, like teachers. I think that I also cared about signalling that I was agreeable—and ESPR helped me get over this.
Wait, I’m not actually sure I want to change the inside view thing, I’m confused. I was kinda just describing a meme-y version of hustling—therefore the low-resolution version of “has inside views” is fine.
has strong inside views which overrule the outside view
“Having inside views”: just having your own opinion, whether or not you shout about it and whether or not you think that it’s better than the outside view.
“Having strong inside views...”: asserting your opinion when others disagree with it, including against the majority of people, majority of experts, etc.
(1) doesn’t seem that agenty to me, it’s just a natural effect of thinking for yourself. (2) is very agenty and high-status (and can be very useful to the group if it brings in decorrelated info), but needs to be earned.
Hmm, interesting. Thanks for clarifying, that does work better in this context (although it’s confusing if you don’t have the info above).
Some quotes on agency that I liked, which I think are more representative of the “do it yourself” attitude.
Really good post, and I feel like I have way too many thoughts for this one comment. But anyway here are a few...
Maybe I’m the odd one out (or wrong about my intuition), but I don’t think I ever got the sense, even intuitively or in a low-fidelity way, that “agency” was identical to, implied, or strongly overlapped with “a willingness to be socially domineering or extractive of others’ time and energy”.
Insofar as I have something like a voice in my head giving me corrective advice on this front, it’s asking “if not you, then who?” much more than it’s saying “go get it!”
Of course, “if not you, then who?” isn’t always rhetorical; sometimes, you really should refrain from doing something you’re not qualified to do or don’t understand!
Worth distinguishing between being a “team player” in a vibes/aesthetic sense and in a moral/functional sense. If skipping school meant that you were getting out of, say, a tutoring commitment that you had signed up for, I’d say that yeah, you should think hard about whether it’s worth reneging.
But what you wrote seems to imply that there was no functional or causally attributable harm to anyone from your missing school. If so, I think doing the weird rationalist thing and resisting the dysfunctional social norm of going to school was completely the right thing to do.
Also, while emotional reactions often encode valuable information, I think the negative emotional reaction you got from being scolded by your teachers was more like a non-functional misfiring due to being out of the evolutionary context; in a 100-person tribe, being scolded by two others indicates that your behavior is indeed likely to harm you, and in a pseudo-moral sense may be meaningfully uncooperative. In this case, it’s (probably) fine for you, your teachers, and whomever else if your teachers don’t think highly of you.
I don’t think you’re the odd one out; I think, via people’s psychology and other factors, some people hear what you heard and some people hear what Evie describes.
Agree with you that the school example for me doesn’t track what the broader thing is about.
I don’t think I ever got the sense-even intuitively or in a low-fidelity way- that “agency” was identical to/implied/strongly overlapped with “a willingness to be social domineering or extractive of others’ time and energy”
I wrote about this because it was the direction in which I noticed myself taking “be agentic” too far. It’s also based on what I’ve observed in the community and conversations I’ve had over the past few months. But I would expect people to “take the message too far” in different ways (obvs whether someone has taken it too far is subjective, but you know what I mean).
But what you wrote seems to imply that there was no functional or causally attributable harm to anyone from your missing school.
Yeah, nobody was harmed, and I do endorse that I did it. It did feel like a big cost that my teachers trusted/liked me less though.
Note that I was a bit reluctant to include the school example, because there’s lots of missing context, so it’s not conveying the full situation. But the main point was that doing unconventional stuff can make people mad, and this can feel bad and has costs.
A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and ‘cheems mindsets’ are still very much the norm in many places!
I think a norm of encouragement is distinct from installing an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you’ll ideally be working out your goals iteratively with likeminded people (mitigating the risk of single-mindedly pursuing an underspecified goal). It’s great to have conviction — but conviction in everything you do by default could stop you from finding the things you really believe in.
Great post. This put words to some vague concerns I’ve had lately with people valorizing “agent-y” characteristics. I’m agentic in some ways and very unagentic in other ways, and I’m mostly happy with my impact, reputation, and “social footprint”. I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).
This post, especially the section on “Assuming that it is low-cost for others to say ‘no’ to requests”, reminded me of Deborah Tannen’s book That’s Not What I Meant!: How Conversational Style Makes or Breaks Relationships. I found it really enlightening, and I’d recommend it for help understanding the unexpected ways other people approach social interactions.
Props for writing! Some things I think strengthen this point even more:

1.
“I want to take people at their word – if they agree to something I ask, then I’ll believe them.”
If I ask three people for their time, they don’t know whether they’re helping me get from 0 to 1 person helping with this, or from 1 to 2, or from 9 to 10. I also may not know that, but it’s a different ask, and people can reasonably set their bar differently depending on whether they think they’re the 0-to-1 helper, or on how much effort they assume the asker has already put in.
2.
We all (probably) have goals that do not feel like hustling and striving, for example:
have meaningful and emotionally close relationships;
call my mum regularly;
be a caring friend.
I think people who care about EA and / or their impact should probably be willing to take a bunch of steps / take on a bunch of costs to avoid resenting EA in the medium to long term.
3. A frame I’ve found useful is to model EA as sort of an agent. If ten minutes of someone else’s time can save me two hours of mine, that can be a very reasonable trade. If I could have spent 30 minutes figuring something out to save someone else 20, I might value their time that much more, and then wouldn’t want to ask them to give that up.
If I ask three people for their time, they don’t know whether they’re helping me get from 0 to 1 person helping with this, 1 to 2 or 9 to 10 for all they know.
Nice, yup, agreed that the information asymmetry is a big part of the problem in this case. I wonder if it could be a good norm to give someone this information when making requests. Like, explicitly say in a message “I’ve asked one other person and think that they will be about as well placed as you to help with this” etc
I do also think that there’s a separate cost to making requests, in that it actually does impose a cost on the person. Like, saying no takes time/energy/decision-power. Obviously this is often small, and in many cases it’s worth asking. But it’s a cost worth considering.
(Now I’ve written this out, I realise that you weren’t claiming that the info asymmetry is the only problem, but I’m going to leave the last paragraph in).
to avoid resenting EA in the medium to long term
This is great and only something I’ve started modelling recently. Curious about what you think this looks like in practice. Like, is it more getting at a mindset of “don’t beat yourself up when you fall short of your altruistic ideals”? Or does it also inform real world decisions for you?
I got a request just last night and was told that the person was asking three people, and while this isn’t perfect for them, I think it was a great thing from my perspective to know.
I don’t think it’s massively relevant for me right now, except vaguely paying attention to my mental health and well-being, but I think it’s super relevant for new-to-EA and/or young people deciding very quickly how invested to be.
Incidentally, I also appreciate comments like the first quote—not only have you given a summary, you’ve also given an indication of how much of the value of the post is contained in the summary 🙏
I’m glad you wrote this! I was worried about your previous post, and was thinking about writing something on this dimension myself.
It’s funny: this could’ve been mostly avoided by a consideration of Chesterton’s Fence and the EMH? (“If AGENCY was so good, why wouldn’t everyone do it?”)
Anyways, I’m now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.
Thanks for your comment!
Meh, I don’t think so. This taken to its extreme looks like “be normie.”
I’m pretty confident that (for ESPR at least) this was a one-off fluke! I’m not worried about this happening again (see Gavin’s comment above).
This was a great read! I relate to a lot of these thoughts—have swung back and forth on how much I want to lean into social norms / “guess culture” vs be a stereotypically rationalist-y person, and had a very similar experience at school. I think it’s great you’re thinking deeply and carefully about these issues. I’ve found that my attitude towards how to be “agentic” / behave in society has affected a lot of my major object-level decisions, both good and not so good.
I’m curious about your definition of agency, and I wonder if it’s one that is shared by other effective altruists. Agency for me, coming not from an EA background (although as a long-time observer), has little to do with these sorts of necessarily self-serving goals that you are now arguing need to be reined in a bit, maybe. Rather, agency is your ability to consider and pursue them at all; from the Stanford Encyclopedia of Philosophy: ”...an agent is a being with the capacity to act...”. Agency is, for me then, more akin to self-determination than an imperative to pursue something for oneself.
You end your current essay affirmatively quoting what I see as a quite clearly paternalistic view about how much agency, as you define it, others should basically be allowed to have. I am curious: do you believe that agency is something for you or anyone to control for anyone else, in any capacity? A thing to be regulated? I’d be curious to know if you believe that for your definition of agency as well as for mine (or the one from the Stanford Encyclopedia of Philosophy).
I am asking these questions neutrally—it’s genuine curiosity to understand your perspective a bit better.
Am not the author of this post, but I think EAs and rationalists have somewhat coopted the term “agentic” and infused it with a load of context and implicit assumptions about how an “agentic” person behaves, so that it no longer just means “person with agency”. This meaning is transmitted via conversations with people in this social cluster as well as through books and educational sessions at camps/retreats etc.
Often, one of the implicit assumptions is that an “agentic” person is more rational and so pursues their goal more effectively, occasionally acting in socially weird ways if the net effect of their actions seems positive to them.
OK, how interesting. I tend to stick to the academic EA literature and ‘visionary’ type EA books, and when on this forum, generally not these kinds of posts. So I suppose I might not encounter too much of this… appropriation… for lack of a better term. You imply this self-serving quality is also infused into what it means for EAs to be rational as well? As if the economic definition of ‘rational self-interest’ is the definition of rational? Am I reading that correctly?
As NinaR said, ’round these parts the word “agentic” doesn’t imply self-interest. My own gloss of it would be “doesn’t assume someone else is going to take responsibility for a problem, and therefore is more likely to do something about it”. For example, if the kitchen at your workplace has no bin (‘trashcan’), an agentic person might ask the office manager to get one, or even just order one in that they can get cheaply. Or if you see that the world is neglecting to consider the problem of insect welfare, instead of passively hoping that ‘society will get its act together’, you might think about what kind of actions would need to be taken by individuals for society to get its act together, and consider doing some of those actions.
That’s a definition I would assume an EA would give for the question of what it means to have agency for EAs, but from my view, it doesn’t mesh with how the OP is describing agency, and therefore what is “agentic”, in this post or in the post this post is a self-response to. In that first post, the one this is referencing, the OP gives seven recommendations that are clearly self-interested directives (e.g., “Figure out what you need, figure out who can help you get it, ask them for it”). If karma is an indicator, that first post was well received. What I am getting here, if I try to merge your position and my view of the OP’s position, is that self-interested pursuits are means to an end, which is a sort of eventual greater good? Like, I can pursue everything “agentic-ly” because my self-interest is virtuous (this sounds like objectivism to me, btw)?
Now, I actually think this current post is about rolling back that original post a bit, but only a little. Hence my questions about regulating ‘agency’ in others—to try and get some parameters or lane markers for this concept here.
These aren’t defined topics (agency, agentic) in the EA forum, btw. This is partly why I am so interested in how people define them since they seem to be culturally defined here (if in multiple ways) differently than they would be in non-critical philosophy and mainstream economics (from which EA is rooted).
When I read the original OP that this OP is a response to, I am “reading in” some context or subtext based on the fact I know the author/blogger is an EA; something like “when giving life advice, I’m doing it to help you with your altruistic goals”. As a result of that assumption, I take writing that looks like ‘tips on how to get more of what you want’ to be mainly justified by being about altruistic things you want.
I don’t fully understand your comment, but I think agency is meant to be a relatively goal-agnostic cognitive tool, similar to e.g. “being truth-seeking” or “good social skills.” Altruism is about which goals you load in, but at least in theory this is orthogonal to how high your ability to achieve your goals are.
That definition aligns more with the sort of traditional, non-critical philosophical and mainstream economic approach to agency, which means there are now basically three definitions of agency in this post and comment thread from EA peeps. I’m glad I asked, because I’ve always just assumed the standard non-critical philosophy and mainstream economics definition (because that’s the origin story of EA too) when reading things here and encountering the term agency. That definition does not feel to me like it fits at all with the definition intimated, or literally given, by the OP in this post or the referenced original post (which was highly karma’d).
“Agency” refers to a range of proactive, ambitious, deliberate, goal-directed traits and habits.
I see that as a definition driven by self-interest; the original essay confirms this. There’s probably room for debate there, but it’s definitely not a “relatively goal-agnostic cognitive tool” type of definition, for instance.
So, I think I’ve got to stop assuming a shared definition here for this one and, probably more importantly, stop assuming that EAs actually take it under consideration, really (there’d probably be a terms entry if its definition were valuable here, I suppose). I had no idea there were such post-y things in EA. How exciting. Ha!
If I were to guess what the ‘disagreement’ downvotes were picking up on, it would be this:
I see that as a definition driven by self-interest
Whereas to me, all of the adjectives ‘proactive, ambitious, deliberate, goal-directed’ are goal-agnostic, such that whether they end up being selfish or selfless depends entirely on what goal ‘cartridge’ you load into the slot (if you’ll forgive the overly florid metaphor).
I don’t think self-interest is relevant here if you believe that it is possible for an agent to have an altruistic goal.
Also, as with all words, “agentic” will have different meanings in different contexts, and my comment was based on its use when referring to people’s behaviour/psychology which is not an exact science, therefore words are not being used in very precise scientific ways :)
It was the OP that injected self-interest into the concept of agency, hence my original question. And I totally agree about the meaning of words varying. None of this is a science at all, in my opinion, just non-critical philosophy and economics and where the two meet and intertwine. I’m just trying to understand how EA and EAs define these things, that’s all.
Thanks so much for your input. I really appreciate it.
Throughout both of my posts, I’ve been using a more niche definition than the one given by the Stanford Encyclopedia of Philosophy. In my last post, I defined it as “the ability to get what you want across different contexts – the general skill of coming up with ambitious goals [1] and actually achieving them, whatever they are.”
But, as another commenter said, the term (for me at least) is now infused with a lot of community context and implicit assumptions.
I took some of this as assumed knowledge when writing this post, so maybe that was a mistake on my part.
On the second point:
I’m a bit confused by the question. I’m not claiming that there’s an ideal amount of agency or that it should be regulated.
Saying that, I expect that some types of agency will be implicitly socially regulated. Like, if someone frequently makes requests of others in a community, other people might start to have a higher bar for saying yes. Ie, there might be some social forces pushing in the opposite direction.
I don’t think that this is what you were getting at, but I wanted to add it anyway.
Well done on public correction! That’s always hard.
It’s key to separate out “social agency” from the rest of the concept, and coining that term makes this post worthwhile on its own. Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial: fixing the thing yourself, thinking for yourself, writing a blog for yourself, taking responsibility for your own growth (including emotional growth, wisdom, patience, and yes chores).
I think you mean “has strong inside views which overrule the outside view”. Inside views are innocuous if you simultaneously maintain an “all things considered” view.
Because of a quirk of the instructors and students that landed in our sample, ESPR 2021 went a little too hard on agency. We try to promote agency and wisdom in equal measure, which usually ends up sounding a lot like this post. Got there in the end!
Small nitpick: “social agency” coined by OCB in comments on the original.
Thanks :)
Yeah, the learned helplessness is a weird one. I felt kinda sad about it because I used to really pride myself on being self sufficient. I agree that the social part of agency should only be a small part—I think I leaned too far into it.
Thanks for the inside view correction. Changing that now, and will add that Owen originally coined social agency.
Fwiw I did get a lot of value out of the push for agency at ESPR. Before that, I was too far in the other direction. Eg: I was anxious that others would think I was “entitled” if I asked for anything or just did stuff; felt like I had to ask for permission for things; cared about not upsetting authority figures, like teachers. I think that I also cared about signalling that I was agreeable—and ESPR helped me get over this.
Wait, I’m not actually sure I want to change the inside view thing, I’m confused. I was kinda just describing a meme-y version of hustling—therefore the low-resolution version of “has inside views” is fine.
I’m not really sure what you mean by this.
“Having inside views”: just having your own opinion, whether or not you shout about it and whether or not you think that it’s better than the outside view.
“Having strong inside views...”: asserting your opinion when others disagree with it, including against the majority of people, majority of experts, etc.
(1) doesn’t seem that agenty to me, it’s just a natural effect of thinking for yourself. (2) is very agenty and high-status (and can be very useful to the group if it brings in decorrelated info), but needs to be earned.
Hmm, interesting. Thanks for clarifying, that does work better in this context (although it’s confusing if you don’t have the info above)
Some quotes on agency that I liked, which I think is more representative of the “do it yourself” attitude.
Really good post, and I feel like I have way too many thoughts for this one comment. But anyway here are a few...
Maybe I’m the odd one out (or wrong about my intuition) but I don’t think I ever got the sense-even intuitively or in a low-fidelity way- that “agency” was identical to/implied/strongly overlapped with “a willingness to be social domineering or extractive of others’ time and energy”
Insofar as I have something like a voice in my head giving me corrective advice on this front, it saying asking “if not you than who?” much more than it’s saying “go get it!”
Of course,”if not you, then who?” isn’t always rhetorical; sometimes, you really should refrain from doing something you’re not qualified to do or don’t understand!
Worth distinguishing between being a “team player” in a vibes/aesthetic sense and in a moral/functional sense. If skipping school meant that you were getting out of, say, a tutoring commitment that you had signed up for, I’d say that yeah, you should think hard about whether it’s worth reneging
But what you wrote seems to imply that there was no functional or causally attributable harm to anyone from your missing school. If so, I think doing the weird rationalist thing and resisting the dysfunctional social norm of going to school was completely the right thing to do.
Also, while emotional reactions often encode valuable information, I think the negative emotional reaction you got from being scolded by your teachers was more like a non-functional misfiring due to being out of the evolutionary context; in a 100 person tribe, being scolded by two others indicates that your behavior is indeed likely to harm you, and in a pseudo-moral sense may be meaningfully uncooperative. In this case, (probably) it’s fine for you, your teachers, and whomever else if your teachers don’t think highly of you
I don’t think you’re the odd one out, I think via people’s psychology and other factors, some people hear what you heard and some people hear what Evie describes.
Agree with you that the school example for me doesn’t track what the broader thing is about.
Thanks for your comment Aaron! :)
I wrote about this because it was the direction in which I noticed myself taking “be agentic” too far. It’s also based on what I’ve observed in the community and conversations I’ve had over the past few months. But I would expect people to “take the message too far” in different ways (obvs whether someone has taken it too far is subjective, but you know what I mean).
Yeah, nobody was harmed, and I do endorse that I did it. It did feel like a big cost that my teachers trusted/liked me less though.
Note that I was a bit reluctant to include the school example, because there’s lots of missing context, so it’s not conveying the full situation. But the main point was that doing unconventional stuff can make people mad, and this can feel bad and has costs.
I enjoyed reading these updated thoughts!
A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and ‘cheems mindsets’ are still very much the norm in many places!
I think a norm of encouragement is distinct from installing an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you’ll ideally be working out your goals iteratively with likeminded people (mitigating the risk of single-mindedly pursuing an underspecified goal). It’s great to have conviction — but conviction in everything you do by default could stop you from finding the things you really believe in.
Great post. This put words to some vague concerns I’ve had lately with people valorizing “agent-y” characteristics. I’m agentic in some ways and very unagentic in other ways, and I’m mostly happy with my impact, reputation, and “social footprint”. I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).
This post, especially the section on “Assuming that it is low-cost for others to say ‘no’ to requests” reminded me of Deborah Tannen’s book That’s Not What I Meant — How Conversational Style Makes or Breaks Relationships. I found it really enlightening, and I’d recommend it for help understanding the unexpected ways other people approach social interactions.
Props for writing! Some things I think strengthen this point even more:
1.
If I ask three people for their time, they don’t know whether they’re helping me get from 0 to 1 person helping with this, 1 to 2 or 9 to 10 for all they know. I also may not know that, but it’s a different ask, and people can reasonably have their bar for 0 to 1 or after a certain amount of effort the asker has put in that they’ve assumed they have
2.
I think people who care about EA and / or their impact should probably be willing to take a bunch of steps / take on a bunch of costs to avoid resenting EA in the medium to long term.
3. A frame I’ve found useful is to model EA as sort of an agent. If ten minutes of someone else’s time can save me two hours of mine, that can be a very reasonable trade. If I could have spent 30 minutes figuring something out to save someone else 20, I might value their time that much more, and then wouldn’t want to ask them to give that up.
Nice, yup, agreed that the information asymmetry is a big part of the problem in this case. I wonder if it could be a good norm to give someone this information when making requests. Like, explicitly say in a message “I’ve asked one other person and think that they will be about as well placed as you to help with this” etc
I do also think that there’s a separate cost to making requests, in that it actually does impose a cost on the person. Like, saying no takes time/energy/decision-power. Obviously this is often small, and in many cases it’s worth asking. But it’s a cost worth considering.
(Now I’ve written this out, I realise that you weren’t claiming that the info asymmetry is the only problem, but I’m going to leave the last paragraph in).
This is great and only something I’ve started modelling recently. Curious about what you think this looks like in practice. Like, is it more getting at a mindset of “don’t beat yourself up when you fall short of your altruistic ideals”? Or does it also inform real world decisions for you?
Nice
I got a request just last night and was told that the person was asking three people, and while this isn’t perfect for them, I think it was a great thing from my perspective to know.
I don’t think it’s massively relevant for me right now except vaguely paying attention to my mental health and well being, but I think it’s super relevant for new-to-EA and/or young people deciding very quickly how invested to be.
Okay. Still upvoting though for this general thing:
Incidentally, I also appreciate comments like the first quote—not only have you given a summary, you’ve also given an indication of how much of the value of the post is contained in the summary 🙏
Thanks, that’s useful to know! :)
I’m glad you wrote this! I was worried about your previous post, and was thinking about writing something on this dimension myself.
It’s funny: this could’ve been mostly avoided by a consideration of Chesterton’s Fence and the EMH? (“If AGENCY was so good, why wouldn’t everyone do it?”)
Anyways, I’m now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.
Thanks for your comment!
Meh, I don’t think so. This taken to its extreme looks like “be normie.”
I’m pretty confident that (for ESPR at least) this was a one off fluke! I’m not worried about this happening again (see gavin’s comment above).
This was a great read! I relate to a lot of these thoughts—have swung back and forth on how much I want to lean into social norms / “guess culture” vs be a stereotypically rationalist-y person, and had a very similar experience at school. I think it’s great you’re thinking deeply and carefully about these issues. I’ve found that my attitude towards how to be “agentic” / behave in society has affected a lot of my major object-level decisions, both good and not so good.
I’m curious about your definition of agency and I wonder if its one that is shared by other effective altruists? Agency for me, from not an EA background although a long time observer, has little to do with these sort of necessarily self-serving goals that you are now arguing need to be reined in a bit, maybe. Rather, agency is your ability to consider and pursue them at all—from the Stanford philosophy dictionary: ”...an agent is a being with the capacity to act...” Agency is, for me then, more akin to self-determination than an imperative to pursue something for one’s self.
You end your current essay affirmatively quoting what I see as a quite clearly paternalistic view about how much agency, as you define it, others should basically be allowed to have. I am curious, do you believe that agency is something for you or anyone to control for anyone else, in any capacity? A thing to be regulated? I’d be curious to know if you believe that for your definition of agency as well as my definition (or the one from the Stanford philosophy dictionary).
I am asking these questions neutrally—it’s genuine curiosity to understand your perspective a bit better.
Am not the author of this post, but I think EAs and rationalists have somewhat coopted the term “agentic” and infused it with a load of context and implicit assumptions about how an “agentic” person behaves, so that it no longer just means “person with agency”. This meaning is transmitted via conversations with people in this social cluster as well as through books and educational sessions at camps/retreats etc.
Often, one of the implicit assumptions is that an “agentic” person is more rational and so pursues their goal more effectively, occasionally acting in socially weird ways if the net effect of their actions seems positive to them.
OK, how interesting. I tend to stick to the academic EA literature and ‘visionary’ type EA books, and when on this forum, generally not these kinds of posts. So I suppose I might not encounter too much of this… appropriation… for lack of a better term. You imply this self-serving quality is also infused into what it means for EAs to be rational as well? As if the economic definition of ‘rational self-interest’ is the definition of rational? Am I reading that correctly?
Thanks for the response!
As NinaR said, ’round these parts the word “agentic” doesn’t imply self-interest. My own gloss of it would be “doesn’t assume someone else is going to take responsibility for a problem, and therefore is more likely to do something about it”. For example, if the kitchen at your workplace has no bin (‘trashcan’), an agentic person might ask the office manager to get one, or even just order one in that they can get cheaply. Or if you see that the world is neglecting to consider the problem of insect welfare, instead of passively hoping that ‘society will get its act together’, you might think about what kind of actions would need to be taken by individuals for society to get its act together, and consider doing some of those actions.
That’s the definition I would assume an EA would give for the question of what it means to have agency, but from my view, it doesn’t mesh with how the OP is describing agency (and therefore what is “agentic”) in this post, nor in the post this post is a self-response to. In that first post, the OP gives seven recommendations that are clearly self-interested directives (e.g., “Figure out what you need, figure out who can help you get it, ask them for it”). If karma is an indicator, that first post was well received. What I am getting here, if I try to merge your position and my view of the OP’s position, is that self-interested pursuits are a means to an end which is a sort of eventual greater good? Like, I can pursue everything “agentic-ly” because my self-interest is virtuous? (This sounds like Objectivism to me, btw.)
Now, I actually think this current post is about rolling back that original post a bit, but only a little. Hence my questions about regulating ‘agency’ in others—to try and get some parameters or lane markers for this concept here.
These aren’t defined topics (agency, agentic) in the EA forum, btw. This is partly why I am so interested in how people define them since they seem to be culturally defined here (if in multiple ways) differently than they would be in non-critical philosophy and mainstream economics (from which EA is rooted).
When I read the original OP that this OP is a response to, I am “reading in” some context or subtext based on the fact I know the author/blogger is an EA; something like “when giving life advice, I’m doing it to help you with your altruistic goals”. As a result of that assumption, I take writing that looks like ‘tips on how to get more of what you want’ to be mainly justified by being about altruistic things you want.
I don’t fully understand your comment, but I think agency is meant to be a relatively goal-agnostic cognitive tool, similar to e.g. “being truth-seeking” or “good social skills.” Altruism is about which goals you load in, but at least in theory this is orthogonal to how high your ability to achieve your goals is.
That definition aligns more with the traditional, non-critical philosophical and mainstream economic approach to agency, which means there are now basically three definitions of agency in this post and comment thread from EA peeps. I’m glad I asked, because I’ve always just assumed the standard non-critical philosophy and mainstream economics definition (since that’s the origin story of EA too) when reading things here and encountering the term agency. That definition does not fit at all with the one intimated, or literally given, by the OP in this post or the referenced original post (which was highly karma’d).
I see that as a definition driven by self-interest; the original essay confirms this. There’s probably room for debate there, but it’s definitely not a “relatively goal-agnostic cognitive tool” type of definition, for instance.
So, I think I’ve got to stop assuming a shared definition here for this one and, probably more importantly, stop assuming that EAs actually take it under consideration, really (there’d probably be a terms entry if its definition were valuable here, I suppose). I had no idea there were such post-y things in EA. How exciting. Ha!
If I were to guess what the ‘disagreement’ downvotes were picking up on, it would be this:
Whereas to me, all of the adjectives ‘proactive, ambitious, deliberate, goal-directed’ are goal-agnostic, such that whether they end up being selfish or selfless depends entirely on what goal ‘cartridge’ you load into the slot (if you’ll forgive the overly florid metaphor).
I don’t think self-interest is relevant here if you believe that it is possible for an agent to have an altruistic goal.
Also, as with all words, “agentic” will have different meanings in different contexts, and my comment was based on its use when referring to people’s behaviour/psychology which is not an exact science, therefore words are not being used in very precise scientific ways :)
It was the OP that injected self-interest into the concept of agency, hence my original question. And I totally agree about the meaning of words varying. None of this is a science at all, in my opinion, just non-critical philosophy and economics and where the two meet and intertwine. I’m just trying to understand how EA and EAs define these things, that’s all.
Thanks so much for your input. I really appreciate it.
Yup, another commenter is correct in that I am assuming that the goals are altruistic.
Hey, thanks for asking.
On the first point:
Throughout both of my posts, I’ve been using a narrower definition than the one given by the Stanford Encyclopedia of Philosophy. In my last post, I defined it as “the ability to get what you want across different contexts – the general skill of coming up with ambitious goals [1] and actually achieving them, whatever they are.”
But, as another commenter said, the term (for me at least) is now infused with a lot of community context and implicit assumptions.
I took some of this as assumed knowledge when writing this post, so maybe that was a mistake on my part.
On the second point:
I’m a bit confused by the question. I’m not claiming that there’s an ideal amount of agency or that it should be regulated.
That said, I expect that some types of agency will be implicitly socially regulated. For example, if someone frequently makes requests of others in a community, other people might start to have a higher bar for saying yes. I.e., there might be some social forces pushing in the opposite direction.
I don’t think that this is what you were getting at, but I wanted to add it anyway.