Developing my worldview. Interested in meta-ethics, epistemics, psychology, AI safety, and AI strategy.
Jack R
It seems possible to me that you have a concept-shaped hole for the concept “bad people”
I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has caused me to find some pretty interesting dynamics that naive consequentialists/utilitarians seem unaware of.
One concern about this is that you might be able to find arguments for any conclusion you seek arguments for; the counter to this is that your intuition doesn't give random answers, and is actually fairly reliably correct, so explicit arguments that explain your intuition are more likely than chance to correspond to reality, making these arguments useful to discover. This definitely goes better if you are aware of the systematic errors your intuition can make (i.e., cognitive biases).
I’m noticing two ways of interpreting/reacting to this argument:
"This is incredibly off-putting; these consequentialists are not unlike charismatic sociopaths who will mimic my behavior to achieve hidden goals that I find abhorrent" (see e.g. Andy Bernard from The Office; currently, this is the interpretation that feels most salient to me)
“This is like a value handshake between consequentialists and the rest of society: consequentialists may have different values than many other people (perhaps really only at the tail ends of morality), but it’s worth putting aside our differences and working together to solve the problems we all care about rather than fighting battles that result in predictable loss”
Makes sense—thanks Asya!
This is good to know—thank you for making this connection!
Notably (and I think I may feel more strongly about this than others in the space), I'm generally less excited about organizers who are ambitious or entrepreneurial but less truth-seeking, or who have a weak understanding of the content that their group covers.
Do you feel that you'd rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest an improvement in only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?
"Interesting" is subjective, but there can still be areas that a population tends to find interesting. I find David's proposals of what the EA population tends to find interesting plausible, though ultimately the question could be resolved with a survey.
Thanks for this! I enjoyed the refresher + summaries of some of the posts I hadn’t yet read.
I’m not familiar with the opposite type of circle format
Me neither really—I meant to refer to a hypothetical activity.
And thanks for the examples!
Does anyone have an idea why doom circles have been so successful compared to the opposite type of circle where people say nice things about each other that they wouldn’t normally say?
Relatedly, I have a hypothesis that the EA/rationalist communities are making mistakes that they wouldn’t make if they had more psychology expertise. For instance, my impression is that many versions of positivity measurably improve performance/productivity and many versions of negativity worsen performance (though these impressions aren’t based on much research), and I suspect if people knew this, they would be more interested in trying the opposite of a doom circle.
Ah I see — thanks!
Thanks!
Is it correct that this assumes that the marginal cost of supporting a user doesn't change with the firm's scale? It seems like some amount of the 50x difference between the EAF and Reddit could be explained by the EAF having fewer benefits of scale, since it is a smaller forum (though should this be counterbalanced by it being a higher-quality forum?)
Continuing the discussion since I am pretty curious how significant the 50x is, in case there is a powerful predictive model here
Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)
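To make concrete what the 50x figure is measuring, here is a minimal sketch of the employees-per-user arithmetic. All the numbers below are hypothetical placeholders chosen only for illustration; they are not real figures for Reddit, the Forum, or the Nordhaus research.

```python
# Hypothetical placeholder figures, not real data for either forum.
reddit_employees = 2_000
reddit_users = 50_000_000

forum_employees = 4
forum_users = 2_000

# Employees per user at each scale.
reddit_ratio = reddit_employees / reddit_users
forum_ratio = forum_employees / forum_users

# How many times more employees per user the smaller forum has.
multiple = forum_ratio / reddit_ratio
print(multiple)  # → 50.0 with these placeholder numbers
```

With these made-up inputs the smaller forum comes out at 50x more employees per user, which shows the shape of the comparison; the open question in the thread is what economic model (e.g., fixed costs spread over a larger user base) would predict that multiple ex ante.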
Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick
No worries! Seemed mostly coherent to me, and please feel free to respond later.
I think the thing I am hung up on here is what counts as “happiness” and “suffering” in this framing.
Could you try to clarify what you mean by the AI (or an agent in general) being "better off"?
I'm actually a bit confused here, because I'm not settled on a meta-ethics: why isn't it the case that a large part of human values is about satisfying the preferences of moral patients, and that human values consider any or most advanced AIs to be non-trivial moral patients?
I don’t put much weight on this currently, but I haven’t ruled it out.
If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation?
I'm asking this question because I suspect that other people in the community won't actually do this, and because you are maybe one of the best-positioned people to do it, since you seem interested in it.
Yeah, I had to look this up
"Concept-shaped hole" is such a useful concept; from what I can tell, a huge amount of miscommunication happens because people have somewhat different understandings of the same word.
I think I interpret people’s advice and opinions pretty differently now that I’m aware of concept-shaped holes.