Director of Research at PAISRI
I’m a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on ensuring no money is wasted to the exclusion of taking the risks necessary to realize benefits.
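As a toy illustration of that point (a minimal sketch with all numbers invented for the example, not drawn from any real charity data): a risk-neutral funder spreading money across many independent high-variance bets can realize a higher average return than a funder who only backs sure things, even though most individual bets pay out nothing.

```python
import random

# Invented placeholder numbers: a "safe" charity reliably produces
# 10 units of good per $1, while a "risky" project produces 110 units
# with 10% probability and nothing otherwise.
# Expected value per $1: safe = 10, risky = 0.9 * 0 + 0.1 * 110 = 11.
def safe_outcome():
    return 10.0

def risky_outcome():
    return 110.0 if random.random() < 0.10 else 0.0

# A portfolio of many independent risky bets converges on the higher
# expected value, even though any single bet usually returns zero.
n_bets = 10_000
portfolio_avg = sum(risky_outcome() for _ in range(n_bets)) / n_bets
print(f"safe: {safe_outcome():.1f}, risky portfolio avg: {portfolio_avg:.1f}")
```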
Taking a predictive processing perspective, we should expect to see an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular “surprise” violating the expected outcome, but then over time for this surprise to go away as daily evidence slowly retrains the brain to expect less and so to attach less negative emotional valence to perceiving the actual conditions.
However, I’d still expect someone who “fell from grace” like this to be somewhat sadder than a person who rose to the same level of wealth or grew up at it, because they’d have sad moments of nostalgia for better times that would be missing for the others. This would likely be a small effect and not easily detectable (I’d expect it to be washed out by noise in a study).
Without rising to the level of maliciousness, I’ve noticed a pattern related to the ones you describe here: sometimes my writing attracts supporters who don’t really understand my point and whose statements of support I would not endorse because they misunderstand the ideas. They are easy to tolerate because they say nice things and may come to my defense against people who disagree with me, but much like your many flavors of malicious supporters, they can ultimately have negative effects.
I like the general idea here, but personally I dislike comments that don’t tell the reader new information, so just saying the equivalent of “yay” without adding something is likely to get a downvote from me if the comment is upvoted, especially if it gets upvoted above more substantial comments.
I was quite surprised to hear how large the Fraunhofer Society is, given I’d never heard of it before! I think this in and of itself is a kind of evidence against their effectiveness, although I could also imagine they’ve turned out some winning innovations as parts of contracts, so their involvement gets lost because I think of it as a thing that company X did.
It seems unclear to me that one model emitting more CO2 than one car necessarily implies that AI is likely to have an outsized impact on climate change. I think there are some missing calculations here about the number of models, the number of cars, how much additional marginal CO2 is being created that isn’t accounted for by other segments, and how much marginal impact on climate change is to be expected from the additional CO2 from AI models. With that in hand, we could potentially assess how much additional short-term climate risk AI poses.
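To make the missing comparison concrete, here’s a minimal back-of-envelope sketch. Every figure below is an invented placeholder, not a real estimate; the point is only the shape of the calculation, not the answer.

```python
# Back-of-envelope comparison with placeholder numbers (replace with
# real estimates before drawing any conclusions from this).
CO2_PER_MODEL_TONNES = 300       # assumed CO2 from training one large model
MODELS_TRAINED_PER_YEAR = 500    # assumed number of such training runs
CO2_PER_CAR_TONNES = 4.6         # assumed annual emissions per car
CARS_IN_USE = 1_400_000_000      # assumed global car count

ai_total = CO2_PER_MODEL_TONNES * MODELS_TRAINED_PER_YEAR
car_total = CO2_PER_CAR_TONNES * CARS_IN_USE

print(f"AI training: {ai_total:,.0f} t CO2/yr")
print(f"Cars:        {car_total:,.0f} t CO2/yr")
print(f"AI as share of car emissions: {ai_total / car_total:.6%}")
```

With these made-up inputs, the aggregate from model training is a tiny fraction of the car fleet’s emissions even though one training run exceeds one car, which is exactly why the one-model-vs-one-car framing can’t settle the question by itself.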
Mixed. On the one hand, I feel like I’m less involved because I have less time for engaging with people on the forum and during events and am spending less time on EA-aligned research and writing.
On the other, that’s in no small part because I took a job that pays a lot more than my old one, dramatically increasing my ability to give, but it also requires a lot more of my time. So I’ve sort of transitioned towards an earning-to-give relationship with EA that leaves me feeling more on the outside but still connected and benefiting from EA to guide my giving choices and keep me motivated to give rather than keep more for myself.
While I appreciate what the author is getting at, as presented I think it shows a lack of compassion for how difficult it is to do what one reckons one ought to do.
It’s true you can simply “choose” to be good, but this is about as easy as saying that, for a wide variety of things X that don’t require special skills, all you have to do is choose to do X: wake up early, exercise, eat healthier food when it is readily available, etc. Despite this, lots of people try to explicitly choose to do these things and fail anyway. What’s up?
The issue lies in what it means to choose. Unless you suppose some strong notion of free will, choosing is actually not that easy to control: there are a lot of complex brain functions essentially competing to determine the next thing you do, so “choosing” actually looks a lot more like “setting up conditions, both in the external world and in your mind, such that a particular choice happens” rather than some atomic, free-willed choice spontaneously happening. Getting to the point where you feel you can simply choose to do the right thing all the time requires a tremendous amount of alignment between the different parts of the brain competing to produce your next action.
I think it’s best to take this article as a kind of advice. Sometimes the only thing keeping you from doing what you believe you ought to do is a minor hold-up where you don’t believe you can do it, and accepting that you can suddenly means that you can; but most of the time the fruit will not hang so low, and instead there will be a lot else to do in order to do what one considers morally best.
Cool. Yeah, when I saw this it jumped out at me as potentially helping with what I see as a problem: there are a bunch of folks who are either EA-aligned or identify as EA and are also anti-LW, and I would argue those folks are to some extent throwing the baby out with the bathwater. Having a nice way to rebrand and talk about some of the insights from LW-style rationality that are clearly present in EA, and that we might reasonably like to share with others, without actually relying on LW-centric content, is useful.
To what extent are you thinking (without so far explicitly saying it) that “good judgment” is a possible EA rebranding of LessWrong-style rationality?
Reading this article about the security value of inefficiency, I get the idea that a possibly neglected policy area for EAs is economic resilience, i.e. increasing the welfare of people in both the short and long term by ensuring our economies don’t become brittle or fragile and collapse. Such a collapse would wipe out welfare gains from modern economies and cut off paths to greater welfare gains through future economic growth, or at least set such growth back, causing harm or making it economically unviable to work on averting existential risks.
Seems possibly related to other policy work focused on things like improving institutions for similar reasons, but directed more at economic policy than at institution design.
One place where EAs paying taxes in the US can probably have differential impact is when their donations total less than the standard deduction(s) they can take on their taxes, such that they would not benefit from itemizing deductions for donations to registered charities. Impact concerns aside, unless you’re donating enough to exceed your standard deduction, you get little or no tax benefit from donating to registered charities, and all of your donations will be post-tax anyway. So you have a unique opportunity to give funds to EA-aligned causes that are otherwise neglected by larger donors because they can’t get the tax benefits.
Some examples would include giving small (less than $10k USD) “angel” donations to not-yet-fully-established causes that are still organizing themselves and do not or will never have charitable tax status, and participating in a donor lottery.
Plenty of caveats to this, of course: if you have employer matching, it may make it worthwhile to give to registered charities even if you yourself won’t reap any tax benefits, and state-level standard deductions are smaller than federal ones, so it’s often worth itemizing charitable giving on state returns even when it isn’t on federal returns. A minimal sketch of the federal arithmetic follows.
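This toy calculation shows why small donations often yield no federal tax benefit. The standard deduction figure is roughly the recent single-filer amount and the other itemized deductions are an assumption; check current IRS figures rather than relying on this sketch.

```python
# Toy illustration: a donation only reduces taxable income to the
# extent total itemized deductions exceed the standard deduction.
STANDARD_DEDUCTION = 13_850         # example single-filer figure
OTHER_ITEMIZED_DEDUCTIONS = 3_000   # assumed state taxes, etc.

def deduction_benefit(donation):
    """Extra deduction gained by itemizing instead of taking the standard."""
    itemized = OTHER_ITEMIZED_DEDUCTIONS + donation
    return max(0, itemized - STANDARD_DEDUCTION)

for donation in (1_000, 5_000, 15_000):
    print(f"${donation:,} donated -> ${deduction_benefit(donation):,} "
          f"of it actually reduces taxable income")
```

With these inputs, $1,000 and $5,000 donations produce no extra deduction at all, so giving those amounts to a cause without charitable tax status costs the donor nothing extra in federal taxes.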
Might help to see how this is handled, if at all, with pain scales. For example, I can imagine someone thinking they’re having 9/10 or 10/10 pain, say from an injury, but then after something much worse happens, say a cluster headache or a kidney stone, realizing their injury pain was only a 6/10 or 7/10 and the cluster headache or kidney stone was the actual 10/10.
I know there is already some work on the cross-cultural issues with pain scales, with people from different cultures reporting, and possibly even experiencing, their pain as more or less severe than people from other cultures, so that might be an entry point to this line of investigation.
I really enjoyed reading this, and learned a lot about Bentham I didn’t know (which wasn’t hard, since I haven’t spent a lot of time studying him). I get the sense that his ideas on utilitarianism are convergent with, say, typical virtue ethics in the limit, only he gets there by a different route. I also get the sense he didn’t foresee super-optimization and was very much thinking about humans, who do something closer to satisficing.
I think I agree, but my point is maybe more that the policy as worded now would allow this, so the policy probably needs rewording so that a post like this is clearly excluded.
FWIW, I don’t think this post actually endorses a specific candidate, and instead is asking whether endorsing a specific candidate makes sense. Maybe that’s too close for comfort, but I don’t see this post as arguing for a particular candidate so much as asking for arguments for or against one. Thus, as the policy is worded now, this seems okay for frontpage or community to me.
FWIW, I think this is a better fit for LessWrong than EA Forum.
Enough meditation seems to pretty reliably increase empathy. My guess is there are studies purporting to show this, but I’m making this suggestion mostly based on personal observation. There’s some risk of survivorship bias in this, though, so I don’t know how repeatable this suggestion is for the average person.
At its heart, EA seems to naturally tend to promote a few things:
a larger moral circle is better than a smaller one
considered reasoning (“rationality”) is better than doing things for other reasons alone
efficiency in generating outcomes is better than being less efficient, even if it means being less appealing at an emotional level
I don’t know that any of these are what EA should promote, and I’m not sure there’s anyone who can unilaterally decide what is normative for EA, so instead I offer these as the norms I think EA is currently promoting in fact, regardless of what anyone thinks EA should be promoting.