re hiring thread: I at least am still subscribed to the “Who’s hiring” thread and I read every comment.
re agreement karma: I still really don’t like it and find it very confusing ):
I can’t imagine a case where I strongly disagree with something, but want to increase its visibility to others, nor a case where I want to decrease visibility (e.g. because something is demagogic) but still want to signal that I agree with the conclusion.
I think your comment is a good example (and from the votes it looks like I’m not the only one). You’re making a good faith, sensible argument for a position I don’t hold—I think the disagreement karma is a big improvement.
I think your comment deserves an upvote for contributing to the discussion, but I disagree and wanted to indicate that.
I’m really enjoying the irony. But still in the vast majority of cases my regular and agreement votes would go the same way. I downvote comments when I think they cause harm or promote bad ideas (which necessarily means I disagree with them), and strongly downvote them when they promote outright dangerous ideas.
I think that’s fine, and you can just do this :) If a feature isn’t useful to you, you don’t have to use it.
This seems like a pretty dishonest action to me fwiw, unless you’re referring to technical information hazards (in which case reporting the comment is also appropriate).
Though perhaps I’m misunderstanding you.
Why dishonest? What do you take a strong downvote to mean? I think I’m genuinely misunderstanding how most people here think about the role of upvotes and downvotes.
As examples of both actions I described: if a user wrote “you’re suggesting something that Trump wanted to do, so I think it’s bad”, I’d downvote that comment; if a user wrote “The public doesn’t know what’s good for them, we should eventually find a way to do good without ever having to answer to politicians”, I’d think that’s the kind of arrogance that’s outright dangerous and should be contained, and I’d therefore strongly downvote it.
It’s a separate discussion that I’m planning to write a post about (but probably never will 😅). In short, I think EAs widely overestimate the size of the space of infohazards, and almost no comment a sane person could make would ever be one. I further think this overestimation is dangerous in itself, as it rests on the mistaken belief that we’re better equipped to tackle these problems than bad actors are to rediscover them.
So if someone wrote a detailed recipe for a novel pathogen, yeah I’d report them. Anything less than that, not really.
I don’t understand your logic at all. How is it contributing from your POV?
Found this, a good example
Easy answer: any uncomfortable/repugnant conclusion would fall under “upvote on karma but downvote on agree/disagree”.
One example is this uncomfortable conclusion:
https://forum.effectivealtruism.org/posts/t3Spus6mhWPchgjdM/valuing-lives-instrumentally-leads-to-uncomfortable
One of the most important skills in life is to separate uncomfortable/repugnant conclusions from their truth values. In other words, just because a conclusion is uncomfortable/repugnant does not mean that the conclusion is false; and vice versa, comfortable conclusions are not thereby true.
(I think it’s likely that I misunderstood at least some of the other arguments in this thread).
I think good arguments with uncomfortable/repugnant conclusions should be a) upvoted to the extent that they are good arguments and b) agreed or disagreed with to the extent that we believe the conclusions are true.
(and we may believe the bottom-line conclusions to be false for reasons that are outside the scope of the presented arguments).
I think we should be very willing to accept uncomfortable/repugnant conclusions to the extent that we believe they’re true. Our movement is effective altruism, not effective feel-good-about-ourselvesism. Since we probably live in the midst of multiple unknown moral catastrophes, one of the most important things we can do (other than averting imminent existential risk) is to carefully figure out which avertable moral catastrophes we currently live in. This search probably means evaluating the evidence we have, seeking out new evidence, and looking at the world with deliberation, care, and good humor. I expect moral disgust to be substantially less truth-tracking in comparison, and on the margins even net negative.
Losing access to our ability to think clearly is just really costly[1]. I’m not saying that we shouldn’t give this up at any price. But we should at least set the price to be very very high, and not be willing to sacrifice clear thinking quite so easily.
(“At first they came for our epistemology. And then they...well, we don’t know what happened next”)
A repugnant conclusion is only as true as the assumptions that went into it and the inference rules that chain it to them. I would agree with (and upvote) a comment that says “your assumptions ABC imply conclusion X which is horrible, so they can’t be right as stated”, and would disagree with (and downvote) a comment that says “Not only are you right about ABC, but we should even act according to conclusion X that they imply, even if it would seem horrible to some”.
Edit: I forgot to add that, while it’s a minor point in your comment, I really disagree that that’s “one of the most important skills in life”. Some applications might be important, e.g. “believing your plan is going to fail early enough to pivot to something else”, but there are quite a few more important ones.