Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.
It’s hard to be above baseline on multiple dimensions at once, and at some point it becomes impossible.
Oh for sure, I wasn’t thinking you were implying making it a requirement. I was trying to say that even a nudge towards explaining downvotes is a nudge towards evil (for me).
Maybe explaining downvotes would be net positive overall, but I personally should probably be discouraged from explaining mine.
I disagree and I downvoted this because explaining why you downvoted something is disproportionately likely to end up with me arguing with someone on the internet. I find this really unpleasant.
I’m happy to have a rule of giving you an explanation if I downvote your posts. I’ve talked with you as a person outside of internet arguments, so I’m not as worried about getting into a protracted argument.
But as a general rule, I think I should be discouraged from explaining my downvotes, for the sake of my mental health.
Separately, if this were a thread with agree/disagree enabled, I would just click disagree! The comment is fine, and when agree/disagree is available I try to reserve downvotes for things that are mean or grossly incorrect.
Fair point! I was assuming that by collective decision making you meant something much closer to one person, one vote, but if it’s a well-defined term I’m not sure of the definition.
I haven’t heard much discussion on a market-based feedback system, and I’d be very interested in seeing it tried. Perhaps for legal or technical reasons it wouldn’t work out super well (similar to current prediction markets), but it seems well worth the experiment.
I think that this incorrectly conflates prediction markets and collective decision making.
(Prediction) markets are (theoretically) effective because folks who reliably predict correctly end up with more money, so there are incentives in place for correct predictions to be made. The incentives for correct decision making seem far weaker in collective decision making, and I don’t see any positive feedback loop where folks who are better at noticing impactful projects get their opinions weighted more highly.
I think that if you put those feedback systems in place, it ends up rapidly looking much more like the situation as it is today than what most folks would call collective decision making.
While I agree with this question in this particular case, there’s a real difficulty: with this kind of thing, absence of evidence is only weak evidence of absence.
Can you elaborate? I don’t understand what problem this solves.
This post makes it harder than usual for me to tell whether I’m supposed to upvote something because it is well-written, kind, and thoughtful, or because I agree with it.
I’m going to continue to use up/downvote for good comment/bad comment and disagree/agree for my opinion on the goodness of the idea.
[EDIT: addressed in the comments. Nathan at least seems to endorse my interpretation]
Thanks, I think this is an excellent response and I agree both are important goals.
I’m curious to learn more about why you think that steelmanning is good for improving one’s beliefs/impact. It seems to me that would only be true if you believe yourself to be much more likely to be correct than the author of a post. Otherwise, it seems that trying to understand their original argument is better than trying to steelman it.
I could see that perhaps you should try to do both (i.e., consider both the author’s literal intent and whether they are directionally correct)?
[EDIT: I’m particularly curious because my current understanding seems to imply that steelmanning like this would be hubristic, and I think that’s probably not what you’re going for. So almost certainly I’m missing some piece of what you’re saying!]
I’d be interested to see some of those tried for sure!
I imagine you’d also likely agree that these proposals trade off against everything else that the EA orgs could be doing, and it’s not super clear that any of them are the best option to pursue relative to other goals right now.
I agree. I think that it’s incredibly difficult to have civil conversations on the internet, especially about emotionally laden issues like morality/charity.
I feel bad when I write a snotty comment and it gets downvoted, and that has a real impact on making me more likely to write a kind argument rather than a quick zinger. I am honestly thankful for this feedback on not being a jerk.
Do you think that group bargaining/voting in EA would be a good thing for funding/prioritization?
I personally like the current approach that has individual EAs and orgs make their own decisions on what is the best thing to do in the world.
For example, I would be unlikely to fund an organization that the majority of EAs in a vote believed should be funded, but I personally believed to be net harmful. Although if this situation were to occur, I would try to have some conversations about where the wild disagreement was stemming from.
In the interests of taking your words to heart, I agree that EAs (and literally everyone) are bad at steelmanning criticisms.
However, I think that saying the ‘and literally everyone’ part out loud is important. Usually when people say ‘X is bad at Y’ they mean that X is worse than typical at Y. If I said, ‘Detroit-style pizza is unhealthy,’ then there is a Gricean implicature that Detroit-style pizza is less healthy than other pizzas. Otherwise, I should just say ‘pizza is unhealthy’.
Likewise, when you say ‘EAs seem particularly bad at steelmanning criticisms,’ the Gricean implication is that EAs are worse at this than average. In another thread above, you seemed to imply that you aren’t familiar with communities that are better at incorporating and steelmanning criticism (please correct me if I’m mistaken here).
There is an important difference between ‘everyone is bad at taking criticism’/‘EAs and everyone else are bad at taking criticism’/‘EAs are bad at taking criticism’. The first two statements imply that this is a widespread problem that we’ll have to work hard to address, as the default is getting it wrong. The last statement implies that we are making a surprising mistake, and it should be comparatively easy to fix (as others are doing better than us).
I don’t generally like steelmanning, for reasons that this blog post does a decent job of summarizing. When folks read what I write, I’d rather that they assume that I thought about using a weaker or stronger version of a statement, and instead went with the strength I did because I believe it to be true.
If an issue is framed as black or white, and I believe it to be grey, then I assume we have a disagreement. I try to assume that if an author decided to frame an issue in a particular way, it’s because that’s what they believe to be true.
Possibly high effort, but what do you see as the best 10% (and worst 10%)?
[aside, made me chuckle]
This is an inevitable issue with the post being 70 pages long.
I think online discussions are more productive when it’s clear exactly what is being proposed as good/bad, so I appreciate you separately commenting on small segments (which can be addressed individually) rather than on the post as a whole.
Thanks for including this! I really liked the shrimp sticker, and partly I liked it because it simply came across as friendly. I honestly didn’t know that live shrimp have a different ordinary posture and color than cooked shrimp, and that makes the sticker feel a lot less friendly to me!
I’d ideally like a sticker with what looks like a happy shrimp. A live shrimp in a circle with something like ‘expanding the moral circle’ feels like almost exactly the vibe I’d love to send out, for what it’s worth.
Separately, I get that making merch/art/anything like this is difficult, so I appreciate the work that has already gone into putting the store together.
I wanted to mention that I went through the first week’s lectures and exercises, and I was really impressed by the quality!
I’m also a software engineer, and this is a pretty spot-on description for me too. 25 hours of productive work is about my limit before I start burning out and making dumb mistakes.
IANAL: I don’t view ‘effective altruism’ as something that can be owned, and if any organization claims to own the term I’m going to ignore them. I expect most folks share my opinion here.