Lotte, massive thanks for finding the articles! The month-long intervention in particular seems very similar to this idea, so it’s very useful to know about :-)
Firstly, I’d note that OAISI or @James Lester (OAISI Prez) might be able to provide better resources or Oxford links, if you’ve not spoken to them yet!
Here’s a fairly long list of (what I think are) good options. Note that they’re all more blog post than academic article. I personally think that’s better, based on my experience of what I and most people engaged with more at undergrad, but that obviously depends on the Oxford culture.
Unsurprisingly, I’d mainly recommend 80,000 Hours content. Their overview case for AI risks hits the broad points, but is light on detail unless you follow the links. Their profiles on power-seeking AI, gradual disempowerment, AI misuse and power concentration are recent, broadly non-technical, engaging, and somewhat reputable. I think the first one (power seeking) is the best of the four to recommend, but it’s a bit longer. I’ve also heard excellent things about the AI in Context video, but haven’t watched it myself.
If you want to max out credibility about AI risk being worth taking seriously, consider pointing to the Superintelligence Statement and FLI Open Letter.
Linch’s intro is great, recent, shorter and non-technical, but doesn’t come with much credibility (sorry Linch).
For something really hard-hitting, Yudkowsky’s Time piece has always stuck with me. I’d be careful about this one though: as a first introduction it can easily come across as ‘crazy man ranting’ and lead to broad dismissal of AI risk.
Finally, AISafety.info has good arguments and lets you explore at your own pace, but I’m unsure how suitable it is for a reading list.
Hope this is helpful!
Hello Belindar!
Firstly, I’m sorry that you didn’t get responses and that your post was downvoted. I think that’s because job-searching posts aren’t really a norm on the Forum—but if you’re new then it’s not expected for you to know that.
I don’t personally know of any roles, but the 80,000 Hours job board is a great place to look, if you haven’t seen it already. (Note: this specific link comes with some pre-selected filters based on your post, but check them to see if they’re relevant).
You might also find High Impact Professionals useful, and their talent directory. The Probably Good job board is also quite good. Hope this helps!
To clarify, is Bob’s mistake:
1. Continuing to work on AI Safety?
2. Wanting to move to farmed animal welfare?
(I’m 90% sure you think the mistake is 2, but the phrasing of the sentence isn’t fully clear)
Thanks for engaging!
Somehow I didn’t even realise there were fully-vegan services, thanks for pointing it out! There are definitely some good benefits to it; a slight downside is that my initial scan puts them at ~1.5x the base cost of Gousto, so there’s a tradeoff there. I will consider this more.
The gift option might be cool even independently of this sort of trial, especially as a Christmas gift for Veganuary, as you mention.
Very good idea on the info campaign, and the further study. It would definitely require closer collaboration with the kit service, so they could monitor which boxes are for which trial participants; this might be another point in favour of choosing a fully-vegan service.
For the final point, I’ve added comparisons to the ‘Cost-Effectiveness Estimates’ section. The midpoint of 19 SAD/$ is below these estimates, but the optimistic case of 49 SAD/$ is comparable with some of them.
Good point—unfortunately I don’t have a good answer! At small scales I think getting most/all participants via referrals can limit this effect, but at large scales I’m unsure.
Good feedback, thanks. Have added a definition link to the first usage.
For reference, it’s Suffering Adjusted Days, a metric that Ambitious Impact came up with to measure animal welfare interventions. It’s similar to Disability Adjusted Life Years, for animals.
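As a minimal sketch of how a SAD/$ figure is computed (the function name and the example numbers here are my own illustration, not from the post or from Ambitious Impact’s materials):

```python
def sad_per_dollar(suffering_days_averted: float, cost_usd: float) -> float:
    """Cost-effectiveness as Suffering-Adjusted Days averted per dollar spent.

    Analogous to DALYs-per-dollar in global health, but for animal welfare.
    """
    return suffering_days_averted / cost_usd

# Toy example: averting 1,900 suffering-adjusted days for $100 of spending
# gives 19 SAD/$ (matching the midpoint figure discussed above).
print(sad_per_dollar(1900, 100))
```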
Paying for people to try plant-based—a plan and analysis
The key point, though, is that cases like Ocado and Albert Heijn are exceptions, not the norm.
As a partial pushback (and for reference for any vegans!), 6 of the 8 biggest UK supermarkets which allow online shopping have this sort of filter. (Proof here)
These are definitely different to the ‘whole website vegan toggle’ option, and only available on some subset of pages. They also miss the ‘norm-building’ impact of having a very visible ‘vegan toggle’. However, I’d tentatively doubt supermarkets would consider having the toggle, considering how crammed supermarket website homepages already are. (This of course probably isn’t representative for China, which I think is the main point in this post.)

Most online supermarkets lack the resources and incentives to systematically review and continuously update tens of thousands of SKUs for vegan status.
Tesco uses Spoon Guru to create/manage filters (according to their app). Seems like it could be a more tractable ‘off the shelf’ solution for other supermarkets.
Hey, could you be a bit more specific about what ‘counterarguments’ you want?
Cool post, a couple of questions for you (or others):
1. Your footnote says:
I think this stands for as long as humans are similar enough to each other that our elite can’t conscience hoarding vitae when some people have <1.
In our current world, some elite (eg. billionaires) hoard vitae whilst many people (perhaps most people) have <1, and most seem to conscience it fine. So I’m sceptical that the “redistributive question” is trivial or likely to happen automatically. Agree?
2. Do you think 1 vitae is the same for everyone? For example, you suggest compute and communications bandwidth as parts of the unit. I think these are really important to some people (the average coastal elite) and really unimportant to others (the type of person who fancies living in an off-grid cabin). So is vitae equivalent for all people (you all need compute, whether you like it or not), or is vitae more like a substitute for ‘whatever you need to maintain a high quality of life for yourself personally’? If the latter, would a base level of income work instead of vitae, and then the income could be spent by individuals on whatever their preferences are?
3. How come you picked “average American coastal elite” as your bar for the minimum we should aim to move everyone towards? Why not double that quality of life, or half that quality? Off the cuff, I’d be fairly comfortable with something like 0.7 vitae as the bar to aim for (using your units).
Bit late, but you might find some other ‘peripheral’ books at Impact Books.
Take the ‘impactfulness’ of the collection with some salt (eg. I don’t think the Feynman Lectures or The Philosophy Gym are particularly suited to answering how much good you can do with the resources available to you).
In terms of ‘EA Canon’ status, books on there that I’d consider are: Life You Can Save; Superforecasting; Moral Ambition (recent, so maybe future EA Canon); maybe Reasons and Persons (inspired a lot of EA thinking, but quite philosophically dense); Avoiding the Worst / Suffering-Focused Ethics (S-Risks/Longtermism); some combo of The Alignment Problem / Human Compatible / Uncontrollable / EABIED (AGI risks). Bear in mind that I haven’t read all of these books; I’m mostly making these suggestions based on how I’ve seen others talk about/reference/recommend them.
If you’re worried about the idea of GiveWell funds being weird, you could just suggest one of the GiveWell top recommended charities. Or even better, suggest all 4 and let them pick.
In other words: “Simple language is more impactful.”
Hey Stijn, a few critical points on this.
I’m worried about claiming any specific petitions are “easily thousands of times more effective than most other petitions”, for reasons similar to this post.
I’m unsure how you’re judging ‘tractability’ here, but I’m doubtful about the tractable routes to change for some of these. For example, the shrimp change.org petition was made for a class project and has ~400 signatures. Even if this petition got 10x or 100x the signatures, I don’t understand the Theory of Change that results in any person/group/organisation making meaningful change. (For some petitions, like UK Parliament ones, there is a clearer route to impact, but it still requires lots more work and luck for the debate to become actual change.)
Even if some petitions are super impactful compared to others, petitions might not be impactful compared to other interventions. This is somewhat offset by petitions being really ‘cheap’ (little time, non-fungible time, low/no funding costs). However, if you’re recommending people sign petitions they don’t know much about, they might reasonably want to spend time researching the issues, which increases the time cost from ~10-15 seconds to ~10-15 minutes, a non-trivial amount.
Regardless, I made this really simple website to visualise your 10 recommended petitions more clearly (maybe 20-30 mins with ChatGPT). I’d be open to working on this more, depending on your and others’ thoughts/responses to the concerns above!
Hi Brad, really good post, appreciate it! I’ve got one positive, one question, and one challenge.
Positive: The analysis of which industries are more amenable to Profit for Good seems interesting. It would be great to see more about which industries are likely best/worst, and especially why (which you have partly done here).
Question: Does this model apply to publicly held companies, or could it be adapted for them? I imagine a large portion of the $100Tn you mentioned is from publicly traded companies. I also assume there’s a competitive advantage to public ownership (though I only think this because lots of the largest companies seem to do it). However, the model you propose seems to require private/foundation ownership.
Challenge: Even if Profit for Good is advantageous in general, it doesn’t mean that the most impactful Profit for Good is advantageous. For example, the most successful companies might focus on causes the public already cares about, like cancer research (which is probably less impactful than, say, GiveWell). This is especially relevant for causes like ending factory farming. Many interventions raise meat prices (intentionally or unintentionally), which might deter customers, and in the worst case could result in a comparative disadvantage.
I’d like to hear your thoughts or pushbacks!
This is interesting. What generally happens when you point out the ~inconsistency? Do people tend to reject Speciesism, reject anti-humanism, or accept/defend maintaining both? (Or something else!)
I think this idea and article are great. This (decision-relevant/skill-building work as a social group) seems like exactly what EA Groups should be doing. The article is well-written, clear and potentially important.
I don’t have enough knowledge to respond to your questions, but here are some thoughts:
Digging wells in Niger seems to be cost-effective; however, I wouldn’t necessarily generalise that to digging wells being cost-effective elsewhere. (You don’t do this, just pointing it out for others.)
As you say, a lot of the country is on a large aquifer. This might make this intervention very good in Niger, but not scalable to other places.
Similarly, you’ve taken maximum values for rate reductions due to Niger having a larger burden. This wouldn’t translate to other places.
There’s no data here about the overhead of Wells4Wellness (for example salary costs). This could change calculations.
With regards to your Question 4: What do you expect the ‘major quality of life improvement’ to look like?
(I ask this both genuinely, since my knowledge of this area is poor, and as a ‘coaching-style’ prompt to help you answer your own question.)
Having said that, do you know if W4W is likely to have room for significantly more funding? It seems like a good organisation to support!
To clarify, I don’t think it’s against the community guidelines or actively wrong. I was just trying to explain why I think your post was downvoted, in case you were confused by it!