As I said last time, trying to quantify agreement/disagreement is much more confusing, both to answer and to read, than simply asking how many dollars out of an extra $100m people would assign to global health vs. animal welfare. The banner would go from 0 to 100, and whatever you vote, say 30, would mean that $30m should go to one cause and $70m to the other. As it stands, to mention just one paradox, if I wholly disagree with the question, it means I think it wouldn’t be better to spend the money on animal welfare than on global health, which in turn could mean a) I want all the extra funding to go to global health, or b) I don’t agree at all with the statement because I think the money should be allocated differently, say 10m/90m. Likewise, voting 90% agreement could mean b), or it could mean that you almost fully agree for other reasons, for example because you think there’s a 10% chance that you are wrong.
There’s substantial discussion of this topic following Eliezer’s take on it.
I think I would prefer to strongly disagree, because I don’t want my “half agree” to be read as if I agreed to some extent with the 5% statement. “Half agree” is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be OK with some caveats. This should be clarified so we can know what the results actually mean.
This is a great experiment. But I think it would have been much clearer if the question had been phrased as “What percentage of talent+funding should be allocated to AI welfare?”, with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% but strongly agree with allocating 3%, I feel like I should still place my icon at the extreme left of the line. This would make it look like I’m entirely against this cause, which wouldn’t be the case.
In case anyone is interested, here is the recording of Condor Initiative’s director Carmen Csilla Medina talking about Condor Camp.
The expected impact of waiting to sell diminishes as time goes on, because you are liable to change your values or, more probably, your views about what and how best to prioritize. This is especially true if, like most of us, you have a track record of changing your mind about things. Even if the impact of waiting, conditional on not changing your mind, is worth, say, two kidneys, the expected impact drops to the value of one kidney or less once you have a 50% or greater chance of changing your mind. So I guess your comment holds only if you are very confident that you will not change your mind about donating a kidney between now and the estimated time when you could sell it.
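To make the arithmetic explicit (a minimal sketch, using the same illustrative numbers): if $V$ is the impact of selling conditional on following through and $p$ is the probability that you don’t change your mind, then

$$\mathbb{E}[\text{impact of waiting}] = p \cdot V.$$

With $V$ worth two kidneys and $p = 0.5$, the expected impact is $0.5 \times 2 = 1$ kidney’s worth, i.e. no better than donating now.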
I’m not updating this anymore. But your post made me curious. I will try to read it shortly.
Congratulations. Are you planning to upload recordings of the presentations? Where can I access the conference program?
This was a nice post. I hadn’t thought about these selfishness concerns before, but I have thought about the dangers of aligned servant AI being used as a tool to improve military capabilities in general, which strikes me as a pretty damn risky scenario and one that will hugely benefit whoever gets there first.
Here (https://thehumaneleague.org/animals) you’ll find many articles on the subject. For example, this one: “What really happens on a chicken farm”.
He later abdicated the throne in 2014, ending the monarchy.
Not really. He abdicated in favor of his son, who is the present king of Spain. Ending the monarchy is an idea that never crossed his mind.
Related: EA forum suggestion: In-line comments (Similar to google docs commenting) and perhaps this comment.
In case you’d prefer the EA Forum format, this post was also crossposted here some time ago: https://forum.effectivealtruism.org/posts/oRx3LeqFdxN2JTANJ/epistemic-legibility
I think the first link should be https://trends.google.com/trends/explore?q=longtermism
Smatterings of Latin
I can’t think of a single post where this is a serious issue. There may be exceptions I’m not aware of, but generalizing from them seems exaggerated.
Was the winner ‘efflorescence’ or ‘peripeteia’?
It sounds exotic, but after you’ve said the word ten times you don’t notice it anymore.
I believe this happens because, to my knowledge, German words ending in -ismus are only formed from proper names (‘Marxismus’) or foreign words (especially adjectives), that is, Lehnwörter like ‘Liberalismus’ or ‘Föderalismus’. But I’m not a native speaker, so I can’t really tell how “exotic” this neologism sounds.
Have you checked this page: https://forum.effectivealtruism.org/events? There are some meetups in Berkeley.
I think this is very useful. Added.
I see, thanks. I guess I would have preferred a more accurate, unambiguous aggregation of everyone’s opinion, to have a clearer sense of the preferences of the community as a whole, but I’m starting to think that it’s just me.