Feel free to message me on here.
Thanks for this, I will definitely give it a read.
Impressive timeline!
Minor feedback: it could be nice to have an overall name for each of those big-picture eras. Also, I remember that Richard Dawkins has written about the suffering of wild animals, but it isn’t in your timeline: see https://en.wikipedia.org/wiki/Wild_animal_suffering#Extent_of_suffering_in_nature
Apologies if that’s an intentional omission.
I like this quotation from the book, which I think is quite powerful:
“The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored.”
I don’t know if you guys have capacity, but it might be useful to write a separate post listing the problems you considered and decided not to include, with short explanations of why. This may reduce the probability of people independently investigating them, which could save time, or increase the probability of people investigating them if they think you wrongly excluded them, which could be helpful. Just an idea.
Hi Arden, yeah that makes sense. You’ve definitely given the EA community a lot to work on with this post, so it’s probably not worth overcomplicating things.
I wonder if we need to make a very clear distinction between value drift in the case of an individual investing their own money with the intention of donating it later on, and value drift in the case of an individual legally binding themselves to donate, for example by giving to a donor-advised fund.
In the latter case, which seems to be the most relevant in this context, I think the linked sources and the ~10% individual value drift figure are pretty irrelevant. A priority should probably be producing a specific value drift estimate for legally bound giving, which will require some historical research into donor-advised funds or similar legal vehicles.
So MichaelA, when you say “0.5% is very low given the evidence we have”, I’m not convinced we actually have any relevant evidence at all, or at least I haven’t seen it presented.
That all makes sense. I do think we need to make a clear distinction between ‘individual’ value drift and ‘legally bound’ value drift, but you’re probably right that the ~10% figure may be the best starting point we have for the latter.
It might be that the only way to get a decent estimate of legally bound value drift in an EA setting is to actually set up a fund and see what happens. I suspect it would make sense to start cautiously, putting only modest amounts into the fund until low value drift has been demonstrated (which would admittedly take some time, perhaps a few generations). Overall I suspect it would be worth setting up such a fund for its informational value.
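To make the stakes of this distinction concrete, here is a toy calculation of my own (assuming drift happens independently at a constant annual rate, with arbitrary horizons) comparing the ~10% individual figure against a hypothetical 0.5% rate for legally bound giving:

```python
# Toy model: fraction of committed funds still "aligned" after N years,
# assuming a constant, independent annual drift rate. The 10% and 0.5%
# rates come from the discussion above; the horizons are illustrative.

def surviving_fraction(annual_drift_rate: float, years: int) -> float:
    return (1 - annual_drift_rate) ** years

for years in (10, 50, 100):
    individual = surviving_fraction(0.10, years)   # ~10% individual drift
    legal = surviving_fraction(0.005, years)       # hypothetical 0.5% legally bound drift
    print(f"{years:>3} years: individual {individual:.1%}, legally bound {legal:.1%}")
```

Under these (admittedly crude) assumptions, almost nothing survives a century of 10% annual drift, whereas a 0.5% rate leaves most of the commitment intact, which is why pinning down the legally bound rate seems so important.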
The 80,000 Hours podcast should host debates
Fair points, although I do think it could work with skilled moderation. Generally, in my experience, EAs aren’t that prone to picking sides and being dogmatic, but I do think debates would have to be handled with care. In any case, it may be worth giving it a go to see how it works out.
Thanks, I agree!
Nice to hear from you, Keiran. That idea is interesting. For me the “figure out what the truth is” aspect is the most important thing. I don’t necessarily think there has to be a “winner” of the debate; I just want two experts to be able to hash out their differences and hopefully, in doing so, get closer to the truth. Maybe the anti-debate format is conducive to that.
In terms of a wishlist of topics, see below for some ideas. This thread has a lot of good ideas. I’m not massively concerned about who is involved as long as they are experts or well-versed in what they’re talking about.
Stuart Russell “vs” someone on whether AI is a significant x-risk that we should focus on or not
Will MacAskill “vs” Toby Ord on ‘hinginess’ of the present/probability of x-risk this century/broad or targeted approaches to long-termism
Someone (maybe Toby Ord) “vs” someone (maybe Simon Knutsson) on whether we should focus on suffering more than on happiness, or should be treating these the same
Hauke Hillebrandt “vs” someone on economic growth vs randomista development
Maybe a debate on population ethics (e.g. the total view vs alternatives to it)
Maybe a debate on the validity of long-termism. I’m not actually sure who a prominent critic of long-termism is, but such a debate would be very interesting
In short, any debate that is ongoing in EA circles, that has credible voices on either side and that has important implications for what we should do.
By the way, I enjoy listening to the Unbelievable? podcast, in which the host Justin Brierley often hosts debates between atheists and Christians. He moderates pretty well: even though he is a Christian, it is not at all clear from his moderating which side he is on! You may (or may not) find it useful to check out a debate from that podcast. In normal times they actually get the guests in the same room and film the debate, so it’s available as both an audio and a video podcast.
I like that idea. I think it would be great if we could do both!
Glad to hear it!
In the episode you say:
And so I do want to make it clear that insofar as I’ve expressed, let’s say, some degree of ambivalence about how much we ought to be prioritising AI safety and AI governance today, my sort of implicit reference point here is to things like pandemic preparedness, or nuclear war or climate change, just sort of the best bets that we have for having a long run social impact.
I was wondering what you think of the potential of broader attempts to influence the long-run future (e.g. promoting positive values, growing the EA movement) as opposed to the more targeted attempts to reduce x-risks that are currently most prominent in EA.
I sometimes think of an idea for a forum post that I want someone other than me to write, perhaps because I don’t have the expertise or time to write it myself.
An idea could be to have a dedicated area to suggest posts for someone else to write. These suggestions could be upvoted or downvoted so that we can see what the community would most like to see written about.
It would be good to have a way to stop, say, twenty people from writing the same post at the same time. Perhaps people could put their name next to the suggestions they are interested in taking on. I’m not saying we can’t have more than one person writing on a specific topic, but this could give some indication of how many people feel able to write on the topic and are interested in doing so.
Would be interested in hearing thoughts on this idea.
I actually completely agree. I’m sort of against there being a winner and a loser, because that might imply that the winner’s side of the argument is now objectively better and should be adopted by EAs. I doubt anything will be ‘settled’ by a podcast episode, but it should hopefully identify points of contention and help us get closer to the truth.
Thanks for your reply! I also feel positively about broader attempts and am glad that these are being taken more seriously by prominent EA thinkers.
I think that studying and explaining the evolution of views within the community would be an interesting and valuable project in its own right.
I second this. I think Halstead’s question is an excellent one and finding an answer to it is hugely important. Understanding what went wrong epistemically (or indeed whether anything did in fact go wrong) could massively help us going forward.
I wonder how we get the ball rolling on this...?
Thanks. I do wonder, though, whether EAs who are proponents of reducing extinction risk actually expect these risks to ever become small enough that moving focus onto something like animal suffering would be justified. I’m not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in its probability would likely dominate other actions in terms of expected value (a toy illustration of that arithmetic is below). Do you think this is an incorrect characterisation?
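For concreteness, here is a sketch of the expected-value reasoning I’m attributing to them, with entirely made-up numbers:

```python
# Toy expected-value comparison (all numbers invented for illustration).
# If the long-run future is worth V units and an intervention cuts
# extinction probability by delta_p, its expected value is delta_p * V.

V = 1e15            # assumed value of the long-run future (arbitrary units)
delta_p = 1e-9      # assumed tiny reduction in extinction probability
alternative = 1e3   # assumed value of some alternative action, e.g. on animal suffering

print(f"x-risk reduction: {delta_p * V:,.0f}")   # 1,000,000
print(f"alternative:      {alternative:,.0f}")   # 1,000
```

With a large enough V, even minuscule reductions in extinction probability win this comparison, which is why I doubt the switch in focus would ever look justified to them.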