Director of Research at PAISRI
Wonderful! A great way to be proven wrong!
In isolation I agree. But I found nothing new or interesting in this post. Since votes control how visible a post is, I view votes as purely a signal about how much I want to see, and how much I want others to see, content like this. Since I didn’t find it new or interesting, it was a poor use of my time to read it, hence the down vote.
When I down vote I like to tell people why so they have useful feedback on what makes people down vote.
I know many people vote to say “yay” or “boo”. I disagree with this voting style, and my votes generally should not be interpreted that way. I down vote to say “I don’t think you should bother reading this” and I up vote to say “I think you should read this”.
I agree, but is this a post that will make that change? I don’t see any really compelling arguments or stories here that are likely to change minds.
There’s no reason a person can’t be earnest and still be hitting the applause light button. Intent matters, but so do outcomes.
I don’t recall recent EA discussion of this topic, but this is an extremely well-worn topic in general. This is sort of a professionalism 101 topic that most people debate in high school as something of a toy topic because the arguments are already well explored.
Downvoted because I don’t feel like there’s any substance here and it’s not worth spending the time to read. I think most people already agree with this sentiment and know the arguments presented in one way or another, so it feels like this post is just flashing the applause lights.
I’d probably have at least not downvoted and maybe would have upvoted this post if it contained some new content, like a proposal for how to get people not to glorify looks.
Hmm, I think these arguments comparing to other causes are missing two key things:
they aren’t sensitive to scope
they aren’t considering opportunity cost
Here’s an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else. Like the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective because that’s all missed opportunity in expectation to save more future lives.
But I don’t actually go around deriding people who donate to breast cancer research as if they donated to Nazis even though they, by comparison in scope to mitigating x-risks and the missed opportunity to have more mitigated x-risk, did approximately similarly “bad” things from my perspective. Why?
I take their values seriously. I don’t agree, but they have a right to value what they want, even if I disagree. I don’t personally have to help them, but I also won’t oppose them unless they come into object level conflict with my own values.
Actually, that last sentence makes me realize a point I failed to make in the post! It’s not that I think EAs must support things they disagree with at the object level, but that at the meta level metaethical uncertainty implies we should have an uncomfortable willingness to “help our ‘enemies’” at the meta level even as we might oppose them at the object level.
To your footnote, I’m not sure how many people are directly uncomfortable, but I do find arguments that roughly boil down to “but what about Nazis?” lazy as they try to run around the discussion by pointing to a thing that will make most readers go “Nazis bad, I agree with whatever says ‘Nazis bad’ most strongly!”. This doesn’t mean thinking Nazis are bad is an unreasonable position or something, only that it looms so large it swamps many people’s ability to think clearly.
Rationalists tend to taboo comparing things to Nazis or using Nazis as an example for this reason, but not all EAs are rationalists. Nazism is a specific point in idea space that almost everyone will agree is bad, but I’m also pretty sure we could cook up even worse views that even more people would disagree with (cf. the baby eaters of Three Worlds Collide).
I’d bite the bullet and say “yes”. I disagree with Nazism, but to be intellectually consistent I have to accept that even beliefs about what is good that I find personally unpalatable deserve consideration. This is very similar to my stance on free speech: people should be allowed to say things that I disagree with, and I’m generally in favor of making it easier for people to say things, including things I disagree with.
To your point about not caring about the difference between good and evil, this sort of misses the point I’d like to make. How do you know what is good and evil? Well, you made some value judgment, and that judgment is yours. Even if you’re a moral realist, the fact remains that you’re discovering moral facts and can be mistaken about the facts. Since all we have access to is what claims people make about what they believe is best, we’re limited in how prescriptive we can be without risking, e.g., punishing ourselves if moral fashion changes.
I’ve edited my post to make it clear I think this is an off topic discussion within the context of this question. I think it’s fine for this comment to stay because it was there before I made this clarification, but I have asked the moderators to convert this from an answer to a proper comment.
I don’t think it actually has (1).
Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don’t explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).
As for (2), I’m not sure how much there is to learn. There are likely some things, but I also worry that paying too much attention to Engaged Buddhism might be a distraction because it suffers common failure modes that EA seeks to avoid. For example, people I know who are part of Engaged Buddhism would rather volunteer directly, even if it’s ineffective, than earn to give, because they want to be directly engaged. That’s fine, but from what I’ve seen the whole movement is oriented more around satisfying a desire to help than around actually doing the most good.
I think there’s some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there’s a case to be made that we could look at becoming a monk, for example, as exercising comparative advantage as part of an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly relative to what they could be doing in the world otherwise).
I use this sort of reasoning myself. Why not become a monk? Because it seems like I can have a larger positive impact on the world as a lay practitioner. Why would I become a monk? If the calculus changed and it was my best course of action to positively impact the world.
A couple comments.
First, I think there’s a risk of creating something akin to a pyramid scheme for EA by leaning too heavily on this idea, e.g. “earn to give, or better yet get 3 friends to earn to give and you don’t need to donate yourself because you had so much indirect impact!”. I think david_reinstein’s comment is in the same vein and good.
Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)
I don’t think the active/passive distinction is real (or at least not real enough to be useful). I think it just looks that way to people who only earn money by directly trading their labor for it. So-called passive income still requires work (otherwise money would just earn you more money with zero effort), just less of it. And that’s the key. Thus I think it’s better to talk about leverage rather than active/passive.
To say a bit more, trading labor for money/impact by default has 1:1 leverage, i.e. you get linear returns on your labor. For example, literally handing out malaria nets, literally serving food to the destitute, etc. Then you can do work that gets a bit of leverage but is still linear: maybe you can leverage your knowledge, network, etc. to have 1:n leverage. This might be working as a researcher, doing work for an EA meta-org, etc. Then there are opportunities to have non-linear leverage where each unit of work gets quadratic or exponential returns. In the realm of money and “passive” income this is stuff like investing in or starting a company (I know, not what people usually think of as “passive” income). In EA this might be defining a new field, starting a new EA org, etc.
Note though that we rely on people having impact in all these different ways for the economy/ecosystem to function. Yes, 1:1 leverage work would best be automated, but sometimes it can’t be, and then it’s a bottleneck and we need someone to do it. If you squeeze out too much of this type of work you get something like a high-income/impact trap: no one can be bothered to do important work because it isn’t high leverage enough!
So, I think people should try to have as much leverage as they can, but also we need to be careful about how we promote leverage, especially in EA where there are fewer feedback systems in the economy to help the EA ecosystem self-regulate, so that we don’t end up without anyone to do the essential, low-leverage work.
Maybe I can help Chris explain his point here, because I came to the comments to say something similar.
The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.
Neartermists are right to be worried about spending money on things that aren’t clearly impacting measures of global health, animal welfare, etc. because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party because that big party could have saved some kids from dying.
Longtermists are right not to be too worried about spending money. There are astronomical amounts of value at stake, so even millions or billions of dollars wasted doesn’t matter if it ended up saving humanity from extinction. There might be nearterm reasons related to the funding pipeline why they should care (hence the concern with optics), but long term it doesn’t matter. Thus, longtermists will want to be freer with money in the hopes of, for example, hitting on something that solves AI alignment.
That both these things try to exist under EA causes tension, since the different ways of valuing outcomes result in different recommended behaviors.
This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.
We should be careful about claiming the GOP is the “worse party”. Worse for whom? Maybe they are doing things you don’t like, but half the country thinks the Democrats are the worse party. We should be wise to the state of normative uncertainty we are in. Neither party is really worse except by some measure, and because of how they are structured against each other one party being worse means the other is better by that measure. If you wanted to make a case that one party or the other is better for EA and then frame the claim that way I think it’d be fine.
Yes, causing a party to lose its base is a great way to force the party to change, though note that this isn’t an isolated system, changing the GOP will also change the Democratic Party and that might not actually be for the better. Some might argue we were better off before Southern white voters were “betrayed” by the Democratic Party on civil rights legislation and abortion, since my understanding is that that caused the shift to the current party alignment structure and ended a long era of bipartisanship. Looking back, many have said they would have moved slower to avoid the long term negative consequences caused by moving fast and then not really getting the desired outcome due to reactionary pushback. This suggests we might be better off trying for slow change given uncertain effects of what will happen in a dynamic system.
“…to the fall of US democracy and a party that has much worse views on almost every subject under most moral frameworks.”
This seems like a pretty partisan take and fails to adequately consider metaethical uncertainty. There’s nothing about this statement that I couldn’t imagine a sincere Republican with good intentions saying about Democrats and being basically right (and wrong!) for the same reasons (right assuming their normative framework, wrong when we suppose normative uncertainty).
While I don’t want to suggest that you, or any other person the GOP is hostile to, have an obligation to work for them, part of the reason they are able to be hostile to various groups is that those groups are not part of how they get elected. If tomorrow the GOP was dependent on LGBTQ votes to win elections, they’d transform into a different party.
So while I’m not expert enough here to see how to change the current situation, I think there is something interesting about changing the incentive gradients for both parties to make them both more inclusive (both construct an outgroup—GOP: minorities and foreigners; Democrats: rural and working-class white people), and I expect that to have positive outcomes.
The more I practice, the more I’ve come to believe that the only thing that really matters is that you do it. Not that you do it well by whatever standard one might judge, but just that you do it. 30 minutes of quiet time is a foundation on which more can be explored and discovered. You don’t have to sit a special way, do a special thing with your mind, or do anything else in particular for it to be worth the effort, although all those things can help and are worth doing if you’re called to them!
You should totally learn a bunch of techniques or practice a certain way if you feel called to it, but also I think there’s a lot to be said for simply spending 30 minutes with the intention to be present with what is, even if that means 30 minutes spent with your mind racing or fidgeting. The time itself will work on you to allow you to find your own way.