Yes, I gave David my wish list of stuff he could discuss in a comment when he announced his blog. So far he hasn’t done that, but he’s busy with his chosen topics, I expect. I wrote quite a lot in those comments, but he did see the list.
In an answer to Elliot Temple’s question “Does EA Have An Alternative To Rational Written Debate”, I proposed a few ideas, including one on voting and tracking of an EA canon of arguments. Nobody dunked on me for it, though Elliot’s question wasn’t that popular, so I suppose few people actually read it. I appreciated Elliot’s focus on argumentation and procedure. Procedural tools to systematize debates are useful.
I’m not at all familiar with literature on impacts of diversity on decision-making. I’ll follow up on your suggestions of what to read, as much as I can. There are different kinds of diversity (worldview, race, ideology, background, expertise, …), but from what classes I took in communications studies and informal argumentation, I know that models are available and helpful to improve group discussion, and that best practices exist in several areas relevant to group communications and epistemics.
I was watching Cremer discuss ideas and read her Vox article about distributing power and changing group decision strategies. Her proposals seem serious, exciting, and somewhat technical, as do yours, ConcernedEA's. That implies a learning curve to climb, but one I expect is typically worth it for EA folks: any proposal that combines serious + exciting + technical should pay off for those involved, if the proposal is accepted. However, that is as seen through your perspective, one intending to preserve the community.
As someone on the outside observing your community grapple with its issues, I still hope for a positive outcome for you all. Your community pulls together many threads in different areas, and does have an impact on the rest of the world.
I’ve already identified elsewhere just what I think EA should do, and still believe the same. EA can preserve its value as a research community and supporter of charitable works without many aspects of the “community-building” it now does. Any support of personal connections outside research conferences and knowledge-sharing could end. Research would translate to support of charitable work or nonprofits explicitly tied to obviously charitable missions. I suppose that could include work on existential risk, but in limited contexts.
I have tried to make the point that vices (the traditional ones, ok? Like drugs, alcohol, betting, …) and the more general problem of selfishness are what to focus on. I’m not singling out your community as particularly vice-filled (well, betting is plausibly a strong vice in your community) but just that vices are in the background everywhere, and if you’re looking for change, make positive changes there.
And what do I mean by the “general problem of selfishness”? Not what you could expect, that I think you’re all too selfish. No. Selfishness matters because self-interest matters if altruism is your goal. Every altruistic effort is intended to serve someone else’s self-interest. Meanwhile, selfishness vs altruism is the classic conflict in most ethical decisions. Not the only one, but the typical one. The one to check for first, like, when you’re being self-serving, or when your “ethical goals” aren’t ethical at all. Yet your community has not grappled with the implications. Furthermore, no one here seems to think it matters. In your minds, you put these old-fashioned ways of thinking behind you.
You seem to have put Peter Singer's work behind you as well, or some of you have; I think that is a mistake too. I don't know what kind of personally embarrassing statements Peter Singer might have ever made, and everyone seems hyper-alert to that kind of thing. But his work in ethics is foundational and should have a prominent place in your thinking and debates.
Furthermore, if you stick with your work on AGI, Bostrom's work in Superintelligence showed insight and creativity in understanding and assessing AGI and ASI. I can't say I agree with his thinking in further work that he's produced, but if I were in your shoes, I wouldn't stop mentioning his professional work just because he wrote some shameful stuff online, once, 20 years ago, and recently acknowledged it. Like Peter Singer, MacAskill, and many others associated with EA, Bostrom has done impressive and foundational work (in Bostrom's case, in AI), and it deserves consideration on its merits.
But back to writing about what I think, which has a much less impressive source.
Me.
Problems that plague humanity don't really change. Vices are always going to be vices if they're practiced. And selfishness? It plays such a large role in everything we do that if you ignore it, or focus solely on how to serve others' self-interests, you won't grapple with selfishness well when its role is primary, for example, in contexts of existential harm. This will have two results:
your ostensible altruistic goals in those contexts will be abandoned
your further goals won’t be altruistic at all
My heuristics about a positive community are totally satisfied if your EA community focuses on giving what you can, saving the lives that you can, effective charity, effective altruism. That EA is inspiring, even inspiring guilt, but in a good way. Sure, vices are typically in the background, and corruption, plausibly, but that’s not the point. Are your goals self-contradicting? Are you co-opted by special interests already? Are you structurally incapable of providing effective charity? No, well, with caveats, but no. Overall, the mission and approach of the giving side of EA is and has been awesome and inspiring.
When EA folks go further, with your second and third waves, first existential risk prevention, now longtermism, you make me think hard about your effectiveness. You need to rock selfishness well just to do charity well (that's my hunch). But existential risk and longtermism and community-building… The demands on you are much, much higher, and you aren't meeting them. You need to stop all your vices, rid your community of them, prohibition-style. You need to intensively study selfishness and perform original academic research about it. I'm not joking. You really need to think past current work in evolutionary psychology and utilitarianism and cognitive science. You could need to look into the past at failed research efforts and pick them up again, with new tools or ideas. Not so that you succeed with all your goals, but just so that you can stop yourself from being a significant net harm. Scout mindset was a step in the right direction, not an endpoint, in improving your epistemics. Meanwhile, with your vices intact, your epistemics will suffer. Or so I believe.
If I had all the answers about selfishness vs altruism, and how to understand and navigate one’s own, I would share them. It’s a century’s research project, a multidisciplinary one with plausibly unexpected results, involving many people, experiments, different directions, and some good luck.
I don't want to associate Singer, Cremer, Bostrom, Galef, MacAskill, or any other EA person or person who I might have referenced with my admittedly extreme and alienating beliefs about betting and other vices, or with my personal declarations about what the EA community needs to do. I imagine most folks' beliefs about vices and selfishness reflect modern norms, and that none would take the position that I am taking. And that's OK with me.
However, register my standards for the EA community as extreme given the goals you have chosen for yourself. The EA community’s trifecta of ambitions is extreme. So are the standards that should be set for your behavior in your everyday life.
I wrote:
"You need to rock selfishness well just to do charity well (that's my hunch)."
Selfishness, so designated, is not a public health issue nor a private mental health issue, but it does stand in contrast to altruism. To the extent that society allows you to actualize something you could call selfishness, that seems to be your option to manifest, and by modern standards, to manifest without judgement. Your altruism might be judged, but not your selfishness; nobody says, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness right there" or "That selfishness there is a waste of money".
Everyone thinks they understand selfishness, but there don’t seem to be many theories of selfishness, not competing theories, nor ones tested for coherence, nor puzzles of selfishness. You spend a great deal of time on debates about ethics, quantifying altruism, etc, but somehow selfishness is too well-understood to bother?
The only argument over selfishness that has come up here is over self-care with money. Should you spend your money on a restaurant meal, or on charity? There was plenty of "Oh, take care of yourself, you deserve it" stuff going around, and "Don't be guilty, that's not helpful", but no theory of how self-interest works. It all seems relegated to an ethereal realm of psychological forces that anyone wanting to help you must acknowledge.
Your feelings of guilt, and so on, are all tentatively taken as subjectively impactful and necessarily relevant just by the fact of your having them. If they’re there, they matter. There’s pop psychology, methods of various therapy schools, and different kinds of talk, really, or maybe drugs, if you’re into psychiatric cures, but nothing too academic or well thought out as far as what self-interest is, how to perform it effectively, how or whether to measure it, and its proper role in your life. I can’t just look at the problem, so described, and say, “Oh, well, you’re not using a helpful selfishness theory to make your decisions there, you need to...” and be sure I’m accomplishing anything positive for you. I might come up with some clever reframe or shift your attention successfully, but that says nothing about a normative standard of selfishness that I could advocate.
I understand rationalization and being self-serving, but only in well-defined domains where I've seen it before, in what some people call "patterns of behavior." Vices do create pathological patterns of behavior, and ending them is clarifying and helpful to many self-interested efforts. A hundred-year effort to study selfishness is about more than vices. Or, well, at least on the surface, depending on what researchers discover. I have my own suspicions.
Anyway, we don't have the shared vocabulary to discuss vices well. What do you think I mean by them? Is Adderall a vice? Lite beer? Using pornography? The occasional cigarette? Donuts? Let's say I have a vice or two, and indulge them regularly, and other people support me in doing that, but we end up doing stuff together that I don't really like, aside from the vice. Is it correct then to say that I'm not serving myself by keeping my vice going? Or do we just call that a reframe because somebody's trying to manipulate me into giving up my habits? What if the vice gets me through a workday?
Well, there are no theories of self-interest that people study in school to help us understand those contexts, or if there are, they don't get much attention. I don't mean theories from psychology that tend to fail in practice. It's a century's effort to develop and distribute the knowledge to fill that need for good theories.
Galef took steps to understand selfish behavior. She decided that epistemic rationality served humanity and individuals, and decided to argue for it. That took some evaluation of behavior in an environment. It motivated pursuit of rationality in a particular way.
Interestingly, her tests, such as the selective skeptic test or the double standard test, reveal information that shifts subjective experience. Why do we need those tests (not, do we need them, but, why do we need them)? What can we do about the contexts that seem to require them? Right now, your community's culture encourages an appetite for risk, particularly financial risk, that looks like a vice. Vices seem to attract more vices.
You’re talking about epistemics. A lot of lessons in decision-making are culturally inherited. For various reasons, modern society could lose that inheritance. Part of that inheritance is a common-sense understanding of vices. Without that common-sense there is only a naivete that could mean our extinction. Or that’s how I see it.
For example, in 2020, one of the US's most popular talk show hosts (Stephen Colbert) encouraged viewers to drink, and my governor (Gavin Newsom) gave a speech about loosening rules for food deliveries so that we could all get our wine delivered to our doors while we were in lockdown. I'm not part of the Christian right, but I think they still have the culture to understand that kind of behavior as showing decadence and inappropriateness. I would hope so. Overall, though, my country, America, didn't see it that way. Not when, at least in people's minds, there was an existential threat present. A good time to drink, stuck at home: that's apparently what people thought.
I’m really not interested in making people have a less fun time. That is not my point at all.
I’ve also been unsuccessful in persuading people to act in their own self-interest. I already know it doesn’t work.
If you don’t believe in “vices”, you don’t believe in them. That’s fine. My point here was that it’s not safe to ignore them, and I would like to add, there’s nothing stronger than a vice to make sure you practice self-serving rationalization.
If, for the next 40-60 years, humanity faces a drawn-out, painful coping with increasing harms from climate change, as I believe, and our hope for policy and recommendations is communities like yours, and what we get is depressed, panicky people indulging whatever vices they can and becoming corrupt as f**k? Well, things will go badly.