“In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.”
Emrik
In favour of compassion, and against bandwagons of outrage
Briefly, the life of Tetsu Nakamura (1946-2019)
Why defensive writing is bad for community epistemics
Announcing the EA Gather Town Event Hall as a global hub for online events
I’m honestly confused and surprised you got rejected, based on reading your linked application. I would probably have found it valuable to talk to you at a conference like this, for insights into how you do what you do, because you clearly do some of it well.
I just really hope it isn’t anti-animal-welfare bias, because I do so hope that EAs with different priorities keep intermingling.
When people already well-respected in the community criticise something in EA, it can often be a source of prestige and a display of their own ability to think independently. But if a relative newcomer were to suggest the very same criticisms, they would often be interpreted very differently. Other aspiring EAs might intuitively classify them as “normie” rather than “EA above the pack”.
So depending on where in the local status hierarchy you find yourself, you might have very different perceptions of how risky it is for community members in general to voice contrarian opinions.
Friendship is Optimal: EAGs should be online
[Question] Are you allocated optimally in your own estimation?
EAGT update: bespoke rooms for remote orgs/local groups on the EA Gather.Town
This is amazing! Do you have recommendations for which parts of the book are most important for people who are already decently familiar with EA and LW? I’m especially looking for moral and practical arguments I might have overlooked; I don’t need to be persuaded to care about animal/insect/machine suffering in the first place.
Welcome to the forum! I agree that EAs often have a really troubling relationship with their own feelings, and scruples to a fault. If you have strong reason to believe that Sam acted unethically, I have no objections against directing your feelings of anger at him. But I would urge people to carry their anger with dignity, both for the sake of community norms and their own sense of self-worth.
While I agree that humour is a great de-stressor, I have faith in our ability to find alternative ways to entertain ourselves that don’t involve kicking someone while they’re down.
Strong upvoted because I think it’s important to preserve whatever embers of weirdness and anti-professionalism we have left in EA, and safeguard it as if it were our last bastion of hope against the forces of bureaucratic stagnation. (Though I’d be happy to discuss this.)
I’d be curious to know why people downvoted this. I don’t think we can claim to be good at inclusive diversity unless we support the kind of diversity that doesn’t immediately feel like our ingroup. If you only tolerate things other than your outgroup, you aren’t actually tolerating anything.[1]
- ^
Although if the group itself is pernicious in some important way, then I’d change my mind about upvoting. Right now, however, all I know is that they have a weird niche and a corner for EAs to keep in touch.
It’d be cool if the forum had a commenting feature similar to Google Docs, where comments and subcomments are attached directly to sentences in the post. Readers would then be able to opt in to see the discussion for each point on the side while reading the main post. Users could also choose to hide the feature to reduce distractions.
For comments that directly respond to particular points in the post, this feature would be more efficient (for reading and writing) relative to the current standard since they don’t have to spend words specifying what exactly they’re responding to.
Here are choice parts of my model of deference:
Whether you should defer or not depends not only on your estimate of relative expertise but also on what role you want to fill in the community in order to increase its altruistic impact. I call this role-based social epistemology, and I really should write it up at length at some point.
You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reason that you should aim for a healthy balance between exploring and exploiting due to potential diminishing returns to either one. But if you instead take the perspective of “how can I coordinate with my community in order to maximize the impact we produce?” you start to see why specializing could be optimal.
If you are a Decision-Maker, you’re optimizing for allocating resources efficiently (e.g. money, work, power, etc.), and the impact of your allocation depends on how accurate your related beliefs are. And because accurate beliefs are so important to your decisions, you should opportunistically defer to people whenever you think they might have better information than you (Aumann-agreement style), as long as you think you’re decently calibrated and you’re deferring to advice with sufficient bandwidth. You should be Exploiting existing knowledge and expertise by deferring to it. But because you frequently defer to others, you may not be safe to defer to in turn due to potential negative externalities associated with information cascades that can be hard to correct.
If you are an Explorer, your job is to optimize for the chance of discovering important insights that can help the community make progress on important open problems. This is fundamentally a different project compared to just trying to acquire accurate beliefs. Now, you want to actively avoid ending up with the same belief states as other people to some extent. Notice that the problems are still open, which means that existing tools and angles-of-attack may be insufficient for the task. Evaluate paradigms/approaches for how neglected they are. Remember, it doesn’t matter whether you’re right about what other people are right about as long as you are extremely right about what other people are wrong about. So if you want to maximize the chance that the community ends up solving the problem, you want to coordinate with other explorers in order to search separate parts of the idea-tree. What matters is that the right fruits are picked, not that you end up picking them. We’re in a parallel tree search paradigm, and this has implications for how we individually should balance the explore-exploit trade-off.
If you are an Expert/Forecaster, your job is to acquire accurate beliefs that are safe to defer to. If there’s a difficult and important question (crucial consideration) for which better forecasts could marginally improve the careers/donations of a lot of people, this could be an important way to produce impact. Your impact here depends on the accuracy of your beliefs, so unlike the Explorer, you don’t have strong reasons to avoid common belief states. Your impact also depends on how safe you are to defer to, because you can potentially do a lot of harm by reinforcing false information cascades. And these considerations are newcomblike, so you should act by that rule which, when followed by the proportion of other experts you predict will follow it due to the same reasoning as you, maximizes community impact. Sometimes that means you want to report your independent impressions, and sometimes that means you want to share and elicit likelihood ratios instead of posterior beliefs. A common failure mode here is to over-optimize for making your beliefs legible, which in extreme cases turns into a race to the bottom, and in median cases turns into myopic empiricism where you predictably end up astray because you refuse to update on a large class of illegible (but Bayesian) evidence.
The limiting case of a Decision-Maker always reporting their independent impressions is (roughly) an Expert. But only insofar as it’s psychologically feasible to maintain a long-term separation between independent and all-things-considered impressions, and I have my doubts.
What kind of knowledge-work you want to do depends not only on your comparative advantages but also on your model of how the community produces altruistic impact. If, on your model, community impact is marginally bottlenecked by insights, you should probably consider aiming for ambitious insight-production. If, on the other hand, you think you can have more impact by contributing to marginally better forecasts about which problems are most important to work on, maybe consider aiming to produce deference-safe predictions. And if you just happen to have a bunch of money lying around, you don’t have the luxury of recklessly diverging from expert consensus, and you should use everything in your toolbox to make sure you’re allocating it efficiently.
No one is purely any one of these. The roles are separated by the optimization criteria they use, and you optimize for different things in different areas of your life, and over your lifetime. But I think it’s usefwl to carve out the roles, so you can notice when you need to put which hat on, and what that implies for how you should play.
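The parallel-tree-search point can be made concrete with a toy simulation (entirely my own construction, with illustrative parameters, not anything from the literature): N explorers each examine k ideas out of an ideaspace of M, and we estimate the probability that the single best idea gets examined by *anyone*. Coordinated explorers partition the space between them; uncoordinated explorers each sample independently and so duplicate each other’s work.

```python
import random

def p_best_idea_found(n_agents=10, k=5, n_ideas=100, trials=4000, coordinate=False):
    """Estimate the probability that the single best idea is examined
    by at least one agent, when each agent examines k of n_ideas."""
    hits = 0
    for _ in range(trials):
        best = random.randrange(n_ideas)  # hidden location of the best idea
        if coordinate:
            # Explorers split up the tree: no two agents duplicate work.
            examined = set(random.sample(range(n_ideas), min(n_agents * k, n_ideas)))
        else:
            # Each explorer samples independently, often re-searching
            # branches someone else has already covered.
            examined = set()
            for _ in range(n_agents):
                examined.update(random.sample(range(n_ideas), k))
        hits += best in examined
    return hits / trials
```

With these numbers the coordinated community covers 50 of the 100 ideas and finds the best one half the time, while independent sampling finds it only about 1 − 0.95¹⁰ ≈ 40% of the time. What matters is that the right fruits get picked, not who picks them.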
This is excellent. Personally, (3) does everything for me. I don’t need to think I’m especially clever if I think I’m ok being dumb. I’m not causing harm if I express my thoughts, as long as I give people the opportunity to ignore or reject me if they think I don’t actually have any value to offer them. Here are some assorted personal notes on how being dumb is ok, so you don’t need to be smart in order not to worry about it.
Exhibit A: Be conspicuously dumb as an act of altruism!
It must be ok to be dumber than average in a community, otherwise it will iteratively evaporate half its members until only one person remains. If a community is hostile to the left half of the curve, the whole community suffers. And the people who are safely in the top 10% are only “safe” because the dumber people stick around.
So if you’re worried about being too dumb for the community… consider that maybe you’re actually just contributing to lowering the debilitating pressure felt by the community as a whole. Perhaps even think of yourself as a hero, shouldering the burden of being dumber-than-average so that people smarter than you don’t have to. Be conspicuously safe in your own stupidity, and you’re helping others realise that they can be safe too. ^^
Exhibit B: Naive kindness perpetuates shame
Self-fulfilling norm tragedies: when the naive mechanism by which good people try to make something better instead makes it worse.
1. No one wants intelligence to be the sole measure of a human’s worth. Everyone affirms that “all humans are created equal.”
2. Everyone worries that other people think dumb people are worth less because they’re dumb.
3. So everyone also worries that other people will think they think that dumb people are worth less. They don’t want to be seen as offensive, nor do they want to accidentally cause offense. They want to be good and be seen as good.
4. That’s why they’re overly cautious about even speaking about dumbness, to the point of pretending it doesn’t even exist. (Remember, this follows from their kind motivations.)
5. But by being overly cautious about speaking about dumbness, and by pretending it doesn’t exist, they’re also unwittingly reinforcing the impression that dumbness is shamefwl. Heck, it’s so shamefwl that people won’t even talk about it!
You can find similar self-reinforcing patterns for other kinds of discrimination/prejudices. All of it seems to share a common solution: break down barriers to talking openly about so-called “shamefwl” things. I didn’t say it was easy.
Exhibit C: Why I use the word “dumb”
I’m in favour of using the word “dumb” as a non-derogatory antonym of “smart”.
The way society is right now you’d think the sole measure of human worth is how smart you are. My goal here is to make it feel alright to be dumb. And a large part of the problem is that no one is willing to point at the thing (dumbness) and treat it as a completely normal, mundane, and innocuous part of everyday life.
Every time you use an obvious euphemism for it like “less smart” or “specialises in other things”, you are making it clear to everyone that being dumb is something so shamefwl that we need to pretend it doesn’t exist. And sure, when you use the word “dumb” instead, someone might misunderstand and conclude that you think dumb people are bad in some way. But euphemisms *guarantee* that people learn the negative association.
Compare it to how children learn social norms. The way to teach your child that being dumb is ok is to actually behave as if that’s true, and euphemisms do the exact opposite. We don’t use “not-blue” to refer to brown eyes, but if we did, you can be sure children would try to pretend their eyes are blue.
Exhibit D: You need a space where you can be dumb
Where’s the space in which you can speak freely, ask dumb questions, reveal your ignorance, display your true stupidity? You definitely need a space like that. And where’s the space in which you must speak with care, try to seem smarter and more knowledgeable than you are, and impress professionals? Unfortunately, this too becomes necessary at times.
Wherever those spaces are, keep them separate. And may the gods have mercy on your soul if you only have the latter.
I think it’d be easy to come up with highly impactfwl things to do with free rein over Twitter? Like, even before I’ve thought about it, there should be a high prior on usefwl patterns. Brainstorming:
Experiment with giving users control over recommender algorithms, and/or designing them to serve the long-term interests of the users themselves (because you’re ok with forgoing some profit in order not to aggressively hijack people’s attention)
Optimising the algorithms for showing users what they reflectively prefer (eg. what do I want to want to see on my Twitter feed?)[1]
Optimising algorithms for making people kinder (eg. downweighting views that come from bandwagony effects and toxoplasma), but still allowing users to opt-out or opt-in, and clearly guiding them on how to do so.
Trust networks
Liquid democracy-like transitive trust systems (eg. here, here)
I can see several potential benefits to this, but most of the considerations are unknown to me, which just means that there could still be massive value that I haven’t seen yet.
This could be used to overcome Vingean deference limits and allow for hiring competent people more reliably than academic credentials do (I realise I’m not explaining this; I’m just pointing to the existence of ideas Twitter enables)
This could also be a way to “vote” for political candidates or decision-makers in general too, or be used as a trust metric to find out whether you want to vote for particular candidates in the first place.
Platform to arrange vote swapping and similar, allow for better compromises and reduce hostile zero-sum voting tendencies.
Platform for highly visible public assurance contracts (eg. here), which could potentially be great for cooperation between powerfwl actors or large groups of people.
This also enables more visibility for views that are held back by pluralistic ignorance. This could be both good and bad, depending on the view (eg. both “it’s ok to be gay” and “it’s not ok to be gay” can be held back by pluralistic ignorance).
Could also be used to coordinate actions in a crisis
eg. the next pandemic is about to hit, and it’s a thousand times more dangerous than covid, and no one realises because it’s still early on the exponential curve. Now you utilise your power to influence people to take it seriously. You stop caring about whether this will be called “propaganda” because what matters isn’t how nice you’ll look to the newspapers, what matters is saving people’s lives.
Something-something nudging idk.
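As a sketch of the transitive-trust / liquid-democracy idea above (a toy of my own, not any existing platform’s mechanism): each user either votes directly or delegates to someone they trust, and ballots flow along delegation chains until they reach a direct voter, with cyclic chains discarded.

```python
def tally(direct_votes, delegations):
    """Count votes under transitive delegation.

    direct_votes: dict voter -> option (voters who vote themselves)
    delegations:  dict voter -> voter they delegate to

    A direct vote overrides any delegation by the same voter. A chain
    that loops without reaching a direct voter casts no vote.
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        seen = set()
        v = voter
        while v in delegations and v not in direct_votes:
            if v in seen:  # cycle detected: this ballot is lost
                v = None
                break
            seen.add(v)
            v = delegations[v]
        if v is not None and v in direct_votes:
            option = direct_votes[v]
            counts[option] = counts.get(option, 0) + 1
    return counts
```

For example, if `c` delegates to `a` and `d` delegates to `c`, then `a`’s direct vote carries three ballots, while two voters who delegate to each other in a cycle cast none.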
Mostly, even if I thought Sam was in the wrong for considering a deal with Elon, I find it strange to cast a negative light on Will for putting them in touch. That seems awfwly transitive. I think judgments based on transitive associations are dangerous, especially given incomplete information. Sam/Will probably thought much longer about this than I have, so I don’t think I can justifiably fault their judgment even if I had no ideas on how to use Twitter myself.
- ^
This idea was originally from a post by Paul Christiano some years ago where he urged FB to adopt an algorithm like this, but I can’t seem to find it rn.
Forum suggestion: Option to publish your post as “anonymous” or blank, that then reverts to reveal your real forum name in a week.
This would be an opt-in feature that lets new and old authors gain less biased feedback on their posts, and lets readers read the posts with less of a bias from how they feel about the author.

At the moment, information cascades amplify the number of votes established authors get based on their reputation. This has both good (readers are more likely to read good posts) and bad (readers are less likely to read unusual perspectives, and good newbie authors have a harder time getting rewarded for their work) consequences. The anonymous posting feature would redistribute the benefits of cascades more evenly.
I don’t think the net benefit is obvious in this case, but it could be worth exploring and testing.
This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical model feedback to notice how it’s similar and dissimilar to your model of how the real world community behaves. Here are some of my independent impressions on the topic:
Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
Under certain conditions, there’s a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the “Zollman effect”).
Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can’t achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
To increase the breadth of exploration over ideaspace, we can encourage “community bubbliness” among researchers (aka “small-world network”), where communication inside bubbles is high, and communication between them is limited. There’s a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
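A minimal version of the simulations discussed above (modelled loosely on the Bala–Goyal bandit framework behind the Zollman effect; the parameters are illustrative, not canonical): agents choose between a known option and an uncertain but slightly better one, share their pull results with network neighbours, and we ask whether the whole community ends up on the genuinely better option.

```python
import random

def run_trial(neighbors, p_good=0.51, pulls=10, rounds=300):
    """One run of a toy Bala-Goyal learning model. Agents choose between
    a known arm (success rate 0.5) and an uncertain arm with true success
    rate p_good. Each agent keeps a Beta(1,1) prior over the uncertain
    arm and shares pull results with its neighbours.
    Returns True if every agent ends up favouring the better arm."""
    n = len(neighbors)
    alpha = [1.0] * n
    beta = [1.0] * n
    for _ in range(rounds):
        results = []
        for i in range(n):
            if alpha[i] / (alpha[i] + beta[i]) > 0.5:  # expects B is better
                succ = sum(random.random() < p_good for _ in range(pulls))
                results.append((i, succ))
        for i, succ in results:
            for j in set(neighbors[i]) | {i}:  # share with neighbours and self
                alpha[j] += succ
                beta[j] += pulls - succ
    return all(alpha[i] / (alpha[i] + beta[i]) > 0.5 for i in range(n))

def complete_graph(n):
    """Everyone communicates with everyone: maximal bandwidth."""
    return [[j for j in range(n) if j != i] for i in range(n)]

def cycle_graph(n):
    """Each agent talks only to two neighbours: a 'bubbly' sparse network."""
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]
```

Comparing `complete_graph(10)` against `cycle_graph(10)` over many trials is the Zollman-effect experiment: the prediction is that the sparser network locks in on the wrong answer less often, at the cost of converging more slowly.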
How much you should update on someone’s testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and “raising sanity waterlines”, imo.
If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I’m ok with locking myself out of some areas in ideaspace because I’m sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
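To make the naive trust-updating failure mode concrete, here is a toy dynamic (my own construction, not the Laputa model itself): listeners move toward testimony in proportion to their trust in the speaker, and raise trust in speakers who already agree with them. Agents who start far apart then discount each other more and more, which is the mechanism behind premature convergence and lock-out from regions of ideaspace.

```python
import random

def simulate(n=20, rounds=2000, lr=0.1, seed=0):
    """Agents hold beliefs in [0,1]. Each round one agent testifies;
    listeners move toward the testimony in proportion to their trust
    in the speaker, then (naively) adjust trust upward if the testimony
    was confirmatory and downward if it wasn't. Returns final beliefs."""
    rng = random.Random(seed)
    belief = [rng.random() for _ in range(n)]
    trust = [[0.5] * n for _ in range(n)]  # trust[i][s]: i's trust in s
    for _ in range(rounds):
        s = rng.randrange(n)
        t = belief[s]  # testimony is the speaker's current belief
        for i in range(n):
            if i == s:
                continue
            # Update belief toward the testimony, weighted by trust.
            belief[i] += lr * trust[i][s] * (t - belief[i])
            # Naive trust update: reward agreement, punish disagreement.
            agree = 1.0 - abs(t - belief[i])
            trust[i][s] = min(1.0, max(0.0, trust[i][s] + lr * (agree - 0.5)))
    return belief
```

Because agreement breeds trust and trust breeds further agreement, the dynamic tends to harden whatever clusters the initial beliefs happen to fall into, rather than tracking any external evidence.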
Deferring to academic research can have the bandwidth problem[1] you’re talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden’s History is a good example: he shouldn’t defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you’re very reluctant to update on testimonial evidence even from other experts.
Explorer: If most of your contributions come in the form of novel ideas, perhaps consider taking risks by exploring neglected areas in ideaspace, at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
Honestly, my take on the EA community is that it’s surprisingly healthy. It wouldn’t be terrible if EA kept doing whatever it’s doing right now. I think it ranks unreasonably high in the possible ways of arranging epistemic communities. :p
I like this term for it! It’s better than calling it the “Daddy-is-a-doctor problem”.