I’m not updating this anymore. But your post made me curious. I will try to read it shortly.
Leo
Congratulations. Are you planning to upload recordings of the presentations? Where can I access the conference program?
This was a nice post. I haven’t thought about these selfishness concerns before, but I did think about possible dangers arising from aligned servant AI used as a tool to improve military capabilities in general. A pretty damn risky scenario in my view and one that will hugely benefit whoever gets there first.
Here (https://thehumaneleague.org/animals) you’ll find many articles on the subject. For example, this one: What really happens on a chicken farm.
He later abdicated the throne in 2014, ending the monarchy.
Not really. He abdicated in favor of his son, who is the present king of Spain. Ending the monarchy is an idea that never crossed his mind.
Related: EA forum suggestion: In-line comments (Similar to google docs commenting) and perhaps this comment.
In case you’d prefer the EA Forum format, this post was also crossposted here some time ago: https://forum.effectivealtruism.org/posts/oRx3LeqFdxN2JTANJ/epistemic-legibility
I think the first link should be https://trends.google.com/trends/explore?q=longtermism
Smatterings of Latin
I can’t think of a single post where this is a serious issue. There may be exceptions I’m not aware of, but generalizing from them seems exaggerated.
Was the winner ‘efflorescence’ or ‘peripeteia’?
Sounds exotic, but once you’ve said the word ten times, you no longer notice it.
I believe this happens because, to my knowledge, German words ending in -ismus are only combined with proper names (‘Marxismus’) or foreign words (especially adjectives), that is, Lehnwörter (loanwords), like ‘Liberalismus’ or ‘Föderalismus’. But I’m not a native speaker, so I can’t really tell how “exotic” this neologism sounds.
Legal Priorities Summer Institute
The Economics of Animal Welfare
Have you checked this https://forum.effectivealtruism.org/events? There are some meetups in Berkeley.
I think this is very useful. Added.
Langzeitigkeit
Langzeitethizismus
But I think the best option is the already proposed ‘Langzeitethik’.
Great article! Another thing I just realized: I dislike the clock metaphor. It seems to suggest that we will eventually reach midnight, no matter what. Perhaps a time bomb (which can be deactivated) would be a better illustration.
My version was meant to be an intuitive simplification of the core of Bostrom’s paper. I actually can’t identify the assumptions you mention. If you are right, I may have presupposed them while reading the paper, or my memory may be betraying me for the sake of making sense of it. Anyway, I really appreciate that you took the time to comment.
I would like to understand how that is a valid objection, because I honestly don’t see it. To simplify a bit, if you think that 1 (‘humanity won’t reach a posthuman stage’) and 2 (‘posthuman civilizations are extremely unlikely to run vast numbers of simulations’) are false, it follows that humanity will probably both reach a posthuman stage and run a vast number of simulations. Now if you really think this will probably happen, I can see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There’s no empirical evidence to support it, given that we are talking about extremely detailed, realistic simulations; and since we have already granted that simulations are numerous, it seems very, very unlikely that we are located at the first level. In other words, if one believes that intelligent life is part of a process that normally culminates in a massive ancestor-simulation program, the fact that there is intelligent life is not enough to find out where in that process it is located.
The expected impact of waiting to sell will diminish as time goes on, because you are liable to change your values or, more probably, your views about what to prioritize and how. This is especially true if you have a track record of changing your mind (like most of us). While the expected impact of waiting is, say, the value of two kidneys conditional on not changing your mind, that same impact drops to the value of one kidney, or less, if you have a 50% chance or more of changing your mind. So I guess your comment is valid only if you are very confident that you will not change your mind about donating a kidney between now and the estimated time when you could sell it.
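The expected-value reasoning above can be sketched in a few lines; this is just an illustration of the arithmetic, with the two-kidney value and the 50% probability taken as the example figures from the comment, not as real estimates:

```python
# Expected impact of waiting to sell, discounted by the chance
# that you change your mind about donating before that time.
def expected_impact(value_if_committed: float, p_change_mind: float) -> float:
    # If you change your mind, the impact of having waited is zero,
    # so only the "still committed" branch contributes.
    return value_if_committed * (1 - p_change_mind)

# Waiting is worth two kidneys *conditional* on not changing your mind,
# but with a 50% chance of changing your mind it is worth only one:
print(expected_impact(2.0, 0.5))  # → 1.0
```

As the probability of changing your mind rises above 50%, the expected impact of waiting falls below the value of donating one kidney now, which is the crossover point the comment relies on.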