Book a 1:1 with me: https://cal.com/tylerjohnston/book
Share anonymous feedback with me: https://www.admonymous.co/tylerjohnston
Agreed — my comment is a point in favor of moving from messenger to “a messaging app made strictly for messaging.” That could be Signal or Whatsapp, or something else entirely.
Oh wow, this is really good to know. Thank you!
Thank you for putting this together! It’s interesting to think about overall trends in volume and direction of giving.
Effective animal charities will likely receive something like 5% of EA funding in 2022, the smallest figure since 2015 by a wide margin.
It would be interesting, though perhaps difficult, to see an analysis like this account for multi-year grants, assuming it isn’t already. For instance, part of why animal welfare funding might look so much larger in 2021 compared to 2022 is that Open Phil, the biggest EA funder in the space as far as I know, made multiple large grants in 2021 that pay out over the course of two to three years (e.g., GFI, THL).
So, if I’m interpreting this correctly, lower 2022 numbers for animal welfare might not reflect a deprioritization or funding gap, but just multi-year grants for the largest organizations having been made recently.
I’m very sympathetic to some of the signalling benefits of being (or at least appearing to be) frugal.
I just graduated from a uni with a large EA presence, and most of my very-motivated do-gooder friends were outside of EA (affiliated with a homeless shelter I worked at, with grad student union organizing, or with various social justice causes on campus). Most of them were seemingly convinced that the EAs on campus weren’t actually interested in doing good, because there was money being spent on flying students abroad for conferences, hosting discussion groups, opening an office/hang-out space in our insanely expensive city, etc. Which, to be fair, was a far cry from how the campus homeless shelter I worked at was spending money — we cherished small donations from our fundraising drives, spending them almost exclusively on programs benefitting the guests we served, often just getting together basic bits of clothing and hygiene products.
I tried to explain to my friends the EA argument for spendy-ness (I still believe it is the best way to do good, deep down) but I just couldn’t seem to convince them that it wasn’t a ruse of motivated reasoning. Looking back on it, I’m bummed that some of my most passionate and talented friends, who were already choosing careers based on serving others, were turned off by this. I wish my friends’ first image of EA had been more similar to mine — things like the GWWC pledge and Singer’s “Famine, Affluence, and Morality” — as I think that would have sold them on EA in the same way it first sold me on it. But they just saw social gatherings and professional development, and for students who were skipping out on studying and social events to do on-the-ground organizing and volunteering and public service, EA just didn’t seem all that selfless to begin with. I couldn’t really convince them otherwise, and I’m sad about that.
Thank you so much for writing this up! I’ve also found a lot of benefits from ACT, and it seems like there is quite a strong empirical grounding. I’d also second your recommendation that anyone interested seek out the book A Liberated Mind by Steven Hayes, one of the progenitors of ACT.
In that book, Hayes also mentions some research they did on ACT for goal-related outcomes, like improving school grades and chess performance. I don’t have the links on hand, but I recall that they saw significant effects from relatively minor interventions, so I think it could be especially promising for people looking to deal not only with larger mental illness issues, but also smaller life goals.
I have found more benefit from ACT than traditional CBT myself. As this post mentions, one of the fundamental differences is that CBT tries to change your thinking about difficult things, whereas ACT simply asks you to accept them wholeheartedly. Because of this, I’ve sometimes felt like CBT is kind of just gaslighting yourself, which may in fact be effective, but I suspect that ACT might produce more stable long-term improvements — especially for people who have epistemic issues with the idea of simply reframing difficult truths. I think ACT also has many similarities to mindfulness practice and certain forms of Buddhism, and people who find those ideas interesting and helpful will likely see some benefit from ACT.
Consider Using a Reading Ruler!
Digital reading rulers are tools that create parallel lines across a page of text, usually tinted a certain color, which scroll along with the text as you read. They were originally designed as a tool to aid comprehension for dyslexic readers, based on what was once a very simple strategy: physically moving a ruler down a page as you read.
There is some recent evidence showing that reading rulers improve speed and comprehension in non-dyslexic readers, as well. Also, many reading disabilities are probably something of a spectrum disorder, and I suspect it’s possible to have minor challenges with reading that slightly limit speed/comprehension but don’t create enough of a problem to be noticed early in life or qualify for a diagnosis.
Because of this, I suggest most regular readers at least try using one and see what they think. I’ve had the surprising experience that reading has felt much easier to me while using one, so I plan to continue to use reading rulers for large books and articles in the foreseeable future.
There are browser extensions that can offer reading rulers for articles, and the Amazon Kindle app for iOS added reading rulers two years ago. I’d be curious to hear if anyone else has had a positive experience with them.
OPP was making grants in the Global Health and Wellbeing space (which includes animal welfare) long before this.
The data exist via their grants database [1] — it doesn’t look to me like there was any shift away from longtermism that coincided with SBF/FTX entering the space (if anything, it looks like the opposite could be true in 2022).
Man, this interview really broke my heart. I think I used to look up to Sam a lot, as a billionaire whose self-attested sole priority was doing as much as possible to help the most marginalized + in need, today and in the future.
But damn… “I had to be good [at talking about ethics]… it’s what reputations are made of.”
Just unbelievable.
I hope this is a strange, pathological reaction to the immense stress of the past week for him, and not a genuine unfiltered version of the true views he’s held all along. It all just makes me quite sad, to be honest.
For me, one of the main takeaways of the FTX debacle was a reminder that we have something to lose: that a load of money isn’t just a number or a means to personal enrichment, but rather that its value is weighed in the absolutely mind-boggling number of people and animals that our efforts today could impact.
So, in a strange way, I’m really glad that I’m surrounded by people who care enough for this to have hurt, and for it to have hurt for the right reasons.
It’s a reminder that this community is largely made up of people who are remarkably driven to make the world a better place, even long after they’re no longer in it. It helps me recalibrate to see this as a bump in the road and focus on the next steps, knowing there’s a lot of talent and a lot of motivation and a lot of care around me.
So, thanks to you all! I appreciate you.
I appreciate this proposal for large-scale “vegan offsetting”! I agree that it’s important that the rest of the world, full of people who aren’t going to change their diets overnight, find ways to help with reducing animal suffering.
That being said, I’m not sure if moral offsetting checks out theoretically, and there are some unique complications with the vegan case that come up in the comments on this post. I also don’t think the idea is intuitive outside of a subset of consequentialist and mathematically-inclined people, since most of the public probably isn’t okay with offsetting something like murder or domestic animal abuse.
If we’re really trying to get the vast majority of the world — people who like meat and hate torture — on board, I think a stronger solution might be helping meat eaters advocate for legislation or corporate action to improve farm conditions, or maybe even replace some of their consumption with alternatives like higher-welfare animal products or compelling alt proteins like Beyond. These might be preferable because, unlike offsetting, they make sense from a variety of moral perspectives and also make individual change feel achievable (which is important because the theories of change for the most effective animal charities do rely on people changing the products they eat eventually).
Hmm, that’s interesting — I would be curious to see how many people in the broader public offsetting appeals to. This actually comes up in an SSC post, where he draws out the weird optics pretty well.
And I agree it’s not one or the other — in fact, I think the Askell piece brings that up as a point against offsetting. If we’re pursuing both, it might not be as useful to think of it as offsetting some inaction or moral wrong, but rather giving money plain and simple (in addition to other personal changes you’re making).
The animal welfare movement (if my understanding is correct) has barely been able to move the needle on veganism over the decades it has been revealing its horrors.
I might push back on this — in fact, I think the reason it remains a major EA cause area is that there’s clear evidence of tractability. I suppose the significance of the change could be debated, but 30 years ago people barely knew what vegans were, and today there’s been a massive rise in awareness, acceptance, and self-identification with the movement (though changes in consumption habits are a more complicated question). And just in the last 10 years there’s been a ton of momentum improving things for animals and making veganism an easier ask: the banning of battery cages in the EU, corporate cage-free campaigns pushing US cage-free from 5% to 35% in less than a decade, cultivated meat coming into existence and having the potential to scale, etc.
We need to make available an ask that could be just as, or more, effective, but easier for a lot of people: fund effective farmed animal welfare charities and be part of the solution — we can help you do it in 10 minutes.
FWIW, this ask is already out there (EA Funds and Animal Charity Evaluators both have pools you can contribute to in 2 minutes, where experts will then direct the money in a more thorough way). They don’t suggest a single dollar amount as an “offset,” probably for some of the reasons mentioned above, but everything else is there for people who do want to contribute financially rather than with their own dietary choices.
Interesting — if you’d ever be interested in expanding on your post, I’d be curious to hear your response to the objections I bring up, or that are mentioned in the comments here.
Yeah, the question of progress in the vegan movement is complicated, and as you point out, there is a big difference between animal welfare improvements and the public actually going vegan.
For the raw stats on whether people who identify as veg*n are actually consuming less meat, the best review I’ve read isn’t super optimistic, but I do think that awareness of veganism is increasing, the plant-based food industry is scaling super quickly, and better alternatives will hopefully make dietary shift more accessible to people. So, especially when you compare where we are today to where we were 30+ years ago, I do think the progress is there, which is especially promising given that funding is lower, as you mentioned.
But if a fundraising strategy like this could prove effective, I would be on board pretty easily. My only end goal is the world getting better, whether it’s because of choices individuals make or the choices the charities they help fund make. I’m still a bit pessimistic about the prospects, but fingers crossed that there is something here if someone does look into it.
I used to think along these lines, but I’ve been coming around recently, in part thanks to James Ozden’s writings on the radical flank effect. My current best guess is that the optimal path forward involves a predominant focus on the most pressing, incremental changes (welfare improvements for chickens and fish) while also having some people occasionally jumping in the public square and loudly reminding everyone that we’re doing something truly awful at a massive scale.
Right! I appreciated reading your post about this.
I think the objection I find most relevant is that moral offsetting only seems intuitive to a subset of consequentialist-leaning people (who may be overrepresented on this forum), but strikes many as morally abhorrent, at least when it comes to harming living creatures. I guess carbon offsetting is more popular, but I don’t think an offset for beating your dog would be widely admired, so I’m not sure what people would make of an offset for the treatment of farmed animals. Someone who thinks caged eggs are wrong, but then offsets them so they can keep eating them, might not be granted much moral credibility by the wider public.
I also think the other objections raised in the forum post are interesting — that it might be psychologically complicated to both eat animals raised under poor conditions and still aim to better their lot, and that the signaling effects of being vegan (or abstaining from particularly bad animal products, in your case) are probably underrated.
I agree on this — what you bring up is more about the immediate logic of demand offsetting, and less about the optics or longer-term implications of demand offsetting. My first objection was that this doesn’t scale well to the broader public as OP mentioned (because to them you are voluntarily purchasing and eating a product from an animal you think was mistreated, while also sparing a totally separate animal, or two). So I don’t think it avoids the bad optics that things like murder offsets would carry.
But it’s not that offsetting fails to make sense within the consequentialist framework — I think it does, though I hesitate on account of the other objections I mentioned: how this would impact someone’s psychology long-term, and the lack of the signaling effects that come with abstaining from low-welfare products.
There isn’t one exactly, but poking around the grants made by Open Philanthropy and EA Funds will give you a good idea of which orgs and projects look promising to the experts who disburse those funds.
This was really upsetting to read. I really feel for the people impacted, and even if it’s not perfect, I’m glad that this piece was published and don’t want to miss any lessons to take from it.
Most sexual harassment is never reported. I wonder if we could reduce any perceived barriers to reporting by creating a wider air gap between CEA (which has, by its nature, conflicts of interest inside the community) and the people tasked with first receiving and responding to reports. Right now, it seems reports are read first by CEA staff, and the confidentiality policies are a bit vague.[1] It could lower the barrier to reporting if the complaint was initially received and handled by a person or organization outside of the EA community (personally and professionally), or at least adjacent to it.
Then, after discussing with the external affiliate and learning more about confidentiality, policies, steps forward, etc., people can decide what they want to do next (be it ending the conversation there, forwarding it to the community health team, forwarding the complaint to other institutions —even straight to law enforcement for severe issues, etc.)
To be clear, I think the community health team has done, and will continue to do, a great job — this would just be about who is the initial point of contact, in case it makes people more comfortable speaking up.
From the google form: “If you have questions about how we’ll handle confidentiality, we’re happy to discuss that at the start of our conversation. Different team members have different policies, because they handle different kinds of cases. If you start talking with a member of our team and they can’t promise the level of confidentiality you want, we can refer you to a different team member who will be able to keep to a stricter confidentiality policy.” All of this relies on a team member at CEA first reading and responding to the complaint, of course.
Thank you for all the work your team has done, and is doing, on this issue.
And thanks for clarifying the point about reading and responding — I worded it poorly and I’ve retracted it in my comment. But I do think the sort of thing I was gesturing at is just what you mentioned: right now, the structure is intended such that info is given after a conversation with CEA has been started and some level of nuance and specificity to the individual situation has been divulged.
I see the benefit to that — I guess there are tradeoffs in everything — but I also wonder if some people might prefer more info on confidentiality and options without having to open a dialogue with CEA by disclosing any specifics of their situation. I don’t know if that’s true, though. I’m not an expert on this by any means, just trying to contribute to brainstorming a bit. I do think reading the forum post you linked helped me understand a bit more about this.
Aside from the valid concerns about data security, I strongly endorse this if only because I am happier without most social media and I know a non-negligible, perhaps growing, subset of people feel similarly.
I’ve disliked having to maintain a facebook account as a barrier to entry to talking to people I care about, and using a messaging app made strictly for messaging feels like a useful change.