I'm studying my final year of dentistry in Lithuania with the intention of pursuing earning to give (most likely in Denmark), but I'm currently evaluating whether this was the right choice (due to limited earning potential and limited options for doing good). If I continue on this path, I expect to donate at minimum 50% (aiming for 65-70%), or $40,000-60,000 annually (at the beginning of my career). While I expect to mainly do “giving now”, in periods of limited effective donating opportunities I expect to do “investing to give”.
As a longtermist and (for the most part) total utilitarian, my goal is finding the cause that increases utility the most time- and cost-effectively, no matter the time or type of sentient being. In pursuit of this goal, I so far care mostly about WAW (wild animal welfare), x-risks and s-risks (but feel free to change my mind).
I heard about EA for the first time in 2018 in relation to an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have had only minimal interactions with EAs.
Through reading and my time working at Animal Alliance, among other things, I'm relatively knowledgeable in the following areas: effective communication, investing (stocks) and personal development.
Male, 23 years old and diagnosed with Asperger's (ASD) and dyslexia.
Thanks a lot for the post! I’m happy that people are trying to combine the field of longtermism and animal welfare.
Here are a few initial thoughts from a non-professional (note that I didn’t read the full post, so I might have missed something):
I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net-positive future. I think that invertebrates or artificial sentience will make up the majority of moral patients in the future. I also suspect moral circle expansion to be good across a number of different future scenarios, since it could lower the chance of s-risks and improve the situation for animals (or artificial sentience) whether or not there is a lock-in scenario.
I think progress on short-term direct WAW interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions have only a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad.
(Note: I have personally noticed several people who share my values, in terms of caring greatly about WAW in the far future, caring only a little about short-term interventions.)
It could of course be argued that working directly on reducing the likelihood of certain s-risks, or working on AI alignment, might be a more efficient way of ensuring a better future for animals. I certainly think this might be true; however, I consider these measures less reliable due to the uncertainty of the future.
I think Brian Tomasik has written great pieces on why an animal-focused hedonistic imperative and gene drives might be less promising and less likely than they seem. I personally also believe this is unlikely ever to happen on a large scale for wild animals. However, if it does happen and is done right (without severely disrupting ecosystems), I think genetic engineering could be the best way of increasing net well-being in the long term. But I haven’t thought that much about this.
Anyway, I wouldn’t be surprised if you have already considered all of these arguments.
I’m really looking forward to your follow-up post :)