“In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.”
Emrik
I’m honestly confused and surprised you got rejected, based on reading your linked application. I would probably have found it valuable to talk to you at a conference like this, for insights into how you do what you do, because you clearly do some of it well.
I just really hope it isn’t anti-animal-welfare bias, because I do so hope that EAs with different priorities keep intermingling.
When people already well-respected in the community criticise something in EA, it can often be a source of prestige and a display of their own ability to think independently. But if a relative newcomer suggested the very same criticisms, they would often be interpreted very differently: other aspiring EAs might intuitively classify the newcomer as “normie” rather than “EA above the pack”.
So depending on where in the local status hierarchy you find yourself, you might have very different perceptions of how risky it is for community members in general to voice contrarian opinions.
This is amazing! Any recommendations for which are the most important parts of the book for people who are decently familiar with EA and LW, according to you? Especially looking for moral and practical arguments I might have overlooked, and I don’t need to be persuaded to care about animal/insect/machine suffering in the first place.
Welcome to the forum! I agree that EAs often have a really troubling relationship with their own feelings, and scruples to a fault. If you have strong reason to believe that Sam acted unethically, I have no objections against directing your feelings of anger at him. But I would urge people to carry their anger with dignity, both for the sake of community norms and their own sense of self-worth.
While I agree that humour is a great de-stressor, I have faith in our ability to find alternative ways to entertain ourselves that don’t involve kicking someone while they’re down.
Strong upvoted because I think it’s important to preserve whatever embers of weirdness and anti-professionalism we have left in EA, and safeguard it as if it were our last bastion of hope against the forces of bureaucratic stagnation. (Though I’d be happy to discuss this.)
I’d be curious to know why people downvoted this. I don’t think we can claim to be good at inclusive diversity unless we support the kind of diversity that doesn’t immediately feel like our ingroup. If you can tolerate anything other than your outgroup, you aren’t actually tolerating anything.[1]
[1] Although if the group itself is pernicious in some important way, then I’d change my mind about upvoting. Right now, however, all I know is that they have a weird niche and a corner for EAs to keep in touch.
It’d be cool if the forum had a commenting feature similar to Google Docs, where comments and subcomments are attached directly to sentences in the post. Readers would then be able to opt in to see the discussion for each point on the side while reading the main post. Users could also choose to hide the feature to reduce distractions.
For comments that directly respond to particular points in the post, this feature would be more efficient (for both reading and writing) than the current standard, since commenters wouldn’t have to spend words specifying exactly what they’re responding to.
Here are choice parts of my model of deference:
Whether you should defer or not depends not only on your estimate of relative expertise but also on what kind of role you want to fill in the community in order to increase its altruistic impact. I call it role-based social epistemology, and I really should write it up at length at some point.
You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reason that you should aim for a healthy balance between exploring and exploiting due to potential diminishing returns to either one. But if you instead take the perspective of “how can I coordinate with my community in order to maximize the impact we produce?” you start to see why specializing could be optimal.
If you are a Decision-Maker, you’re optimizing for allocating resources efficiently (e.g. money, work, power, etc.), and the impact of your allocation depends on how accurate your related beliefs are. And because accurate beliefs are so important to your decisions, you should opportunistically defer to people whenever you think they might have better information than you (Aumann-agreement style), as long as you think you’re decently calibrated and you’re deferring to advice with sufficient bandwidth. You should be Exploiting existing knowledge and expertise by deferring to it. But because you frequently defer to others, you may not be safe to defer to in turn due to potential negative externalities associated with information cascades that can be hard to correct.
If you are an Explorer, your job is to optimize for the chance of discovering important insights that can help the community make progress on important open problems. This is fundamentally a different project compared to just trying to acquire accurate beliefs. Now, you want to actively avoid ending up with the same belief states as other people to some extent. Notice that the problems are still open, which means that existing tools and angles-of-attack may be insufficient for the task. Evaluate paradigms/approaches for how neglected they are. Remember, it doesn’t matter whether you’re right about what other people are right about as long as you are extremely right about what other people are wrong about. So if you want to maximize the chance that the community ends up solving the problem, you want to coordinate with other explorers in order to search separate parts of the idea-tree. What matters is that the right fruits are picked, not that you end up picking them. We’re in a parallel tree search paradigm, and this has implications for how we individually should balance the explore-exploit trade-off.
If you are an Expert/Forecaster, your job is to acquire accurate beliefs that are safe to defer to. If there’s a difficult and important question (crucial consideration) for which better forecasts could marginally improve the careers/donations of a lot of people, this could be an important way to produce impact. Your impact here depends on the accuracy of your beliefs, so unlike the Explorer, you don’t have strong reasons to avoid common belief states. Your impact also depends on how safe you are to defer to, because you can potentially do a lot of harm by reinforcing false information cascades. And these considerations are newcomblike, so you should act by that rule which, when followed by the proportion of other experts you predict will follow it due to the same reasoning as you, maximizes community impact. Sometimes that means you want to report your independent impressions, and sometimes that means you want to share and elicit likelihood ratios instead of posterior beliefs (see the sketch below). A common failure mode here is to over-optimize for making your beliefs legible, which in extreme cases turns into a race to the bottom, and in median cases turns into myopic empiricism where you predictably go astray because you refuse to update on a large class of illegible (but Bayesian) evidence.
The limiting case of a Decision-Maker always reporting their independent impressions is (roughly) an Expert. But only insofar as it’s psychologically feasible to maintain a long-term separation between independent and all-things-considered impressions, and I have my doubts.
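To make the likelihood-ratio point concrete, here’s a minimal sketch with made-up numbers (purely illustrative, not from any real case): pooling reported posteriors double-counts the shared prior, while pooling likelihood ratios doesn’t.

```python
# Toy illustration: two experts share a prior and see independent evidence.
# Pooling their likelihood ratios recovers the right answer; naively
# pooling their reported posteriors double-counts the shared prior.

def odds(p): return p / (1 - p)
def prob(o): return o / (1 + o)

prior = 0.2              # shared prior P(H)
lr_a, lr_b = 3.0, 4.0    # likelihood ratios from each expert's independent evidence

post_a = prob(odds(prior) * lr_a)   # expert A reports ~0.43
post_b = prob(odds(prior) * lr_b)   # expert B reports 0.50

# Correct pooling: start from the shared prior, multiply in both likelihood ratios.
correct = prob(odds(prior) * lr_a * lr_b)    # 0.75

# Naive pooling: multiply the reported posterior odds together,
# which smuggles the shared prior in twice.
naive = prob(odds(post_a) * odds(post_b))    # ~0.43

print(post_a, post_b, correct, naive)
```

This is one concrete reason an Expert might prefer sharing likelihood ratios over all-things-considered posteriors: a reader can combine them without double-counting whatever prior (or testimony) the experts already share.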
What kind of knowledge-work you want to do depends not only on your comparative advantages but also on your model of how the community produces altruistic impact. If, on your model, community impact is marginally bottlenecked by insights, you should probably consider aiming for ambitious insight-production. If, on the other hand, you think you can have more impact by contributing to marginally better forecasts about which problems are most important to work on, maybe consider aiming to produce deference-safe predictions. And if you just happen to have a bunch of money lying around, you don’t have the luxury of recklessly diverging from expert consensus, and you should use everything in your toolbox to make sure you’re allocating it efficiently.
No one is purely any of these. The roles are separated by what optimization criteria they use, and you optimize for different things in different areas of your life, and over your lifetime. But I think it’s usefwl to carve out the roles, so you can notice when you need to put which hat on, and what that implies for how you should play.
This is excellent. Personally, (3) does everything for me. I don’t need to think I’m especially clever if I think I’m ok being dumb. I’m not causing harm if I express my thoughts, as long as I give people the opportunity to ignore or reject me if they think I don’t actually have any value to offer them. Here are some assorted personal notes on how being dumb is ok, so you don’t need to be smart in order not to worry about it.
Exhibit A: Be conspicuously dumb as an act of altruism!
It must be ok to be dumber than average in a community, otherwise it will iteratively evaporate half its members until only one person remains. If a community is hostile to the left half of the curve, the whole community suffers. And the people who are safely in the top 10% are only “safe” because the dumber people stick around.
So if you’re worried about being too dumb for the community… consider that maybe you’re actually just contributing to lowering the debilitating pressure felt by the community as a whole. Perhaps even think of yourself as a hero, shouldering the burden of being dumber-than-average so that people smarter than you don’t have to. Be conspicuously safe in your own stupidity, and you’re helping others realise that they can be safe too. ^^
Exhibit B: Naive kindness perpetuates shame
Self-fulfilling norm tragedies: when the naive mechanism by which good people try to make something better makes it worse instead.
1. No one wants intelligence to be the sole measure of a human’s worth. Everyone affirms that “all humans are created equal.”
2. Everyone worries that other people think dumb people are worth less because they’re dumb.
3. So everyone also worries that other people will think they think that dumb people are worth less. They don’t want to be seen as offensive, nor do they want to accidentally cause offense. They want to be good and be seen as good.
4. That’s why they’re overly cautious about even speaking about dumbness, to the point of pretending it doesn’t even exist. (Remember, this follows from their kind motivations.)
5. But by being overly cautious about speaking about dumbness, and by pretending it doesn’t exist, they’re also unwittingly reinforcing the impression that dumbness is shamefwl. Heck, it’s so shamefwl that people won’t even talk about it!
You can find similar self-reinforcing patterns for other kinds of discrimination/prejudices. All of it seems to share a common solution: break down barriers to talking openly about so-called “shamefwl” things. I didn’t say it was easy.
Exhibit C: Why I use the word “dumb”
I’m in favour of using the word “dumb” as a non-derogatory antonym of “smart”.
The way society is right now you’d think the sole measure of human worth is how smart you are. My goal here is to make it feel alright to be dumb. And a large part of the problem is that no one is willing to point at the thing (dumbness) and treat it as a completely normal, mundane, and innocuous part of everyday life.
Every time you use an obvious euphemism for it like “less smart” or “specialises in other things”, you are making it clear to everyone that being dumb is something so shamefwl that we need to pretend it doesn’t exist. And sure, when you use the word “dumb” instead, someone might misunderstand and conclude that you think dumb people are bad in some way. But euphemisms *guarantee* that people learn the negative association.
Compare it to how children learn social norms. The way to teach your child that being dumb is ok is to actually behave as if that’s true, and euphemisms do the exact opposite. We don’t use “not-blue” to refer to brown eyes, but if we did, you can be sure your children would try to pretend their eyes are blue.
Exhibit D: You need a space where you can be dumb
Where’s the space in which you can speak freely, ask dumb questions, reveal your ignorance, display your true stupidity? You definitely need a space like that. And where’s the space in which you must speak with care, try to seem smarter and more knowledgeable than you are, and impress professionals? Unfortunately, this too becomes necessary at times.
Wherever those spaces are, keep them separate. And may the gods have mercy on your soul if you only have the latter.
I think it’d be easy to come up with highly impactfwl things to do with free rein over Twitter? Like, even before I’ve thought about it, there should be a high prior on usefwl patterns. Brainstorming:
Experiment with giving users control over recommender algorithms, and/or designing them to be in the long-term interests of the users themselves (because you’re ok with forgoing some profit in order not to aggressively hijack people’s attention)
Optimising the algorithms for showing users what they reflectively prefer (eg. what do I want to want to see on my Twitter feed?)[1]
Optimising algorithms for making people kinder (eg. downweighting views that come from bandwagony effects and toxoplasma), but still allowing users to opt-out or opt-in, and clearly guiding them on how to do so.
Trust networks
Liquid democracy-like transitive trust systems (eg. here, here)
I can see several potential benefits to this, but most of the considerations are unknown to me, which just means that there could still be massive value that I haven’t seen yet.
This could be used to overcome Vingean deference limits and allow for hiring more competent people more reliably than academic credentials (I realise I’m not explaining this, I’m just pointing to the existence of ideas enabled with Twitter)
This could also be a way to “vote” for political candidates or decision-makers in general, or be used as a trust metric to find out whether you want to vote for particular candidates in the first place.
Platform to arrange vote swapping and the like, allowing for better compromises and reducing hostile zero-sum voting tendencies.
Platform for highly visible public assurance contracts (eg. here), which could potentially be great for cooperation between powerfwl actors or large groups of people.
This also enables more visibility for views that are held back by pluralistic ignorance. This could be both good and bad, depending on the view (eg. both “it’s ok to be gay” and “it’s not ok to be gay” can be held back by pluralistic ignorance).
Could also be used to coordinate actions in a crisis
eg. the next pandemic is about to hit, and it’s a thousand times more dangerous than covid, and no one realises because it’s still early on the exponential curve. Now you utilise your power to influence people to take it seriously. You stop caring about whether this will be called “propaganda” because what matters isn’t how nice you’ll look to the newspapers, what matters is saving people’s lives.
Something-something nudging idk.
Mostly, even if I thought Sam was in the wrong for considering a deal with Elon, I find it strange to cast a negative light on Will for putting them in touch. That seems awfwly transitive. I think judgments based on transitive associations are dangerous, especially given incomplete information. Sam/Will probably thought much longer about this than I have, so I don’t think I can justifiably fault their judgment even if I had no ideas of my own for how to use Twitter.
[1] This idea was originally from a post by Paul Christiano some years ago where he urged FB to adopt an algorithm like this, but I can’t seem to find it right now.
Forum suggestion: Option to publish your post as “anonymous” or blank, which then reverts to your real forum name after a week.
This would be an opt-in feature that lets new and old authors gain less biased feedback on their posts, and lets readers read the posts with less of a bias from how they feel about the author.

At the moment, information cascades amplify the number of votes established authors get based on their reputation. This has both good (readers are more likely to read good posts) and bad (readers are less likely to read unusual perspectives, and good newbie authors have a harder time getting rewarded for their work) consequences. The anonymous posting feature would redistribute the benefits of cascades more evenly.
I don’t think the net benefit is obvious in this case, but it could be worth exploring and testing.
Some (controversial) reasons I’m surprisingly optimistic about the community:
1) It’s already geographically and social-network bubbly and explores various paradigms.
2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where it’s likely to improve opinions), and the top level tries to avoid conforming, there’s a status push towards exploration and confidence in independent impressions.
3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn’t that hard to attain for conscientious smart people, I think, so smart people aren’t stuck at the bottom where their opinions are under-utilised? Idk.
Probably more should go here, but I forget. The community could definitely be better, and it’s worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I’m not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3
Being friends with someone is also a great way of learning about their capabilities, motivations, and reliability, so I think it could be rational for rich funders to give grants to their friends more so than to strangers.
tl;dr: I intended to be supportive. I knew my comment could be misinterpreted, but I didn’t think the misinterpretations would do anyone harm. Although I did not expect it to be misinterpreted by Luisa. And Charles He said he read it closely and didn’t decipher my intention, so I’m kinda irrational and will try to update. On rereading it myself, I agree it was very opaque.
My comment was entirely not intended as pushback on anything. I find Luisa’s ability to put so much conscious effort into this admirable, and I appreciate it as inspiration to do the same. She did not seem like she had above-average guilt-feelings for prioritising dealing with her problems when there are always others who suffer more. But because she mentioned luck, and I’m aware that this is something many people struggle with, including me, it seemed plausible just on priors that she had an inkling of it. If that’s true, then there’s an off-chance that my encouragement could help, and if it’s not, then my encouragement would fall flat and do no harm.
My tone was meant to be supportive: I was pointing out the laughable absurdity of not feeling ok taking one’s problems seriously unless they were worse than they are. I think pointing this out is high priority, because the dynamic makes for incredibly unfortunate incentives. When people speak to me about my own problems, I often find a humorous tone easier to deal with (and less painfwl) than when people conform to an expectation that we all need to be Awfwly Severe and tiptoe around what’s being said. Although I’m aware that my intended tone would only come across if you interpreted with a lot of charity and a justifiably high prior on “Emrik will not try to be rude to someone vulnerably talking about their own depression”.[1]
[1] Why would I keep making comments that can’t be understood without charity? Because I believe the community and the world would be better if we collectively learned to interpret with more charity. And I go by the rule “act as if we are already closer to optimal social norms than we in fact are,” because when norms are stuck in inadequate equilibria, we can’t make progress on them unless more people act by this rule.
Big support!
By making agreement a separate axis, people will feel safer upvoting something for quality/novelty/appreciation with less of a risk that it’s confounded with agreement. Unpopular opinions that people still found enlightening should get marginally more karma. And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with.[1]
We now have an opinion poll included for every comment/post. This just seems like a vast store of usefwl-but-imperfect information. Karma doesn’t already provide it, since it has more confounders.
But, observing how it empirically plays out is just going to matter way more than any theoretical arguments I can come up with.
[1] Toy model here, but: The health of an epistemic community depends on, among other things, an optimal ratio between the transmission coefficients of technical (gears-level) evidence vs testimonial (deference) evidence. If the ratio is high, people are more likely to be exposed to arguments they haven’t heard yet, increasing their understanding and ability to contribute to the conversation. If the ratio is low, people are mainly interested in deferring to what other people think, and understanding is of secondary importance.
An automatic jargon-explainer for commonly used jargon. This gets the best of both worlds, for readers and writers. People can use jargon more often,[1] and not have to worry about it not landing with readers. And readers unaware of the jargon can hover over the word to see what it means, while readers who already do know can keep reading. Makes it easier to read for people within a wider range of inferential distance.
[1] Efficient communication without having to link to each jargony word, since that might get distracting and take attention away from links they do want to emphasise.
I think there are several things wrong with the Equal Weight View, but I think this is the easiest way to see it:
Let’s say I have a posterior $P(H) = p_1$ which I updated from a prior of $p_0$. Now I meet someone who A) I trust to be rational as much as myself, and B) I know started with the same prior $p_0$ as me, and C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of the evidence I have seen.
They say $P(H) = p_2$, where $p_0 < p_2 < p_1$.
Then I can infer that they updated from $p_0$ to $p_2$ by multiplying their prior odds with a likelihood ratio of $\frac{p_2}{1-p_2} \cdot \frac{1-p_0}{p_0} > 1$. And because of C and D, I can update on that likelihood ratio myself in order to end up with a posterior above $p_1$.
The equal weight view would have me adjust down (towards $p_2$), whereas Bayes tells me to adjust up.
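For concreteness, here’s a minimal sketch of that calculation with made-up example numbers (purely illustrative):

```python
# Worked example of the update above, with arbitrary illustrative numbers.

def odds(p): return p / (1 - p)
def prob(o): return o / (1 + o)

p0 = 0.5   # shared prior P(H)
p1 = 0.9   # my posterior after my own evidence
p2 = 0.8   # their reported posterior after their independent evidence

# Likelihood ratio implied by their update from p0 to p2:
their_lr = odds(p2) / odds(p0)                # = 4.0

# Because of C and D, their evidence is independent of mine, so I can
# multiply my posterior odds by their likelihood ratio:
my_new_posterior = prob(odds(p1) * their_lr)  # ≈ 0.973

print(their_lr, my_new_posterior)
# Bayes moves me up from 0.9 to ~0.97, even though they reported a number
# lower than mine; the equal weight view would average me down towards 0.8.
```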
FWIW, I think it’d be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost of being a high-risk funder. EA is not about minimising crime, it’s about maximising impact, and before we crack down on funding we should check our motivations. I don’t want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.
I like the culture-aligning suggestions for other reasons, though. I think the long-term future will benefit from the EA community remaining aligned with actually caring about people.
Actually appreciate this comment. I should’ve been more clear about when I was using universal vs existential quantifiers and anything in between. I do not advocate that everyone should withhold anger, because perhaps (as is likely) some people do in fact know much more than me, and they know enough that anger is justified.
This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical feedback on how the model is similar and dissimilar to your model of how the real-world community behaves. Here are some of my independent impressions on the topic:
Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
Under certain conditions, there’s a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the “Zollman effect”).
Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can’t achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
To increase the breadth of exploration over ideaspace, we can encourage “community bubbliness” among researchers (aka “small-world network”), where communication inside bubbles is high, and communication between them is limited. There’s a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
How much you should update on someone’s testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and “raising sanity waterlines”, imo.
If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I’m ok with locking myself out of some areas in ideaspace because I’m sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
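To illustrate the lock-out dynamic, here’s a toy sketch of my own (not the Laputa model itself; all the numbers and thresholds are arbitrary assumptions): an agent that updates trust naively can end up permanently ignoring a genuinely reliable informant.

```python
import random

# Toy sketch: an agent raises trust in sources that confirm its current lean
# and lowers it otherwise, and only updates on testimony from sources it
# trusts enough. A reliable informant can get locked out.

random.seed(0)

TRUE_STATE = True   # H is in fact true
belief = 0.2        # agent starts leaning towards not-H
trust = 0.5         # initial trust in the informant

for _ in range(50):
    # A reliable informant reports the truth 90% of the time.
    report = TRUE_STATE if random.random() < 0.9 else not TRUE_STATE

    # Naive trust update: reward confirmation of my current lean, punish disagreement.
    confirms_my_lean = report == (belief > 0.5)
    trust = min(max(trust + (0.1 if confirms_my_lean else -0.1), 0.0), 1.0)

    # Only update on the testimony if trust is above a threshold.
    if trust > 0.3:
        belief = min(max(belief + (0.05 if report else -0.05), 0.0), 1.0)

print(f"final belief in H: {belief:.2f}, final trust in informant: {trust:.2f}")
# Trust typically collapses after a few (true!) reports that disagree with the
# agent's starting lean, so its belief never crosses 0.5: it's locked out of
# that branch of the ideatree despite reliable testimony.
```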
Deferring to academic research can have the bandwidth problem[1] you’re talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden’s History is a good example: he shouldn’t defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you’re very reluctant to update on testimonial evidence even from other experts.
Explorer: If most of your contributions come from contributing with novel ideas, perhaps consider taking risks by exploring neglected areas in ideaspace at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
Honestly, my take on the EA community is that it’s surprisingly healthy. It wouldn’t be terrible if EA kept doing whatever it’s doing right now. I think it ranks unreasonably high in the possible ways of arranging epistemic communities. :p
I like this term for it! It’s better than calling it the “Daddy-is-a-doctor problem”.