“In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.”
Thanks! I like “impact chain” better because it’s not just a detailed visualisation. Using the term “chain” emphasises that you should segment it into independent links so that you feel where the probability seeps out.
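To make the “chain” framing concrete, here’s a minimal sketch of the arithmetic, with made-up links and probabilities: because independent links multiply, the cumulative probability drops fastest at the weakest link, which is where the probability “seeps out”.

```python
# Hypothetical "impact chain": independent links whose probabilities multiply.
# All link names and numbers below are made up purely for illustration.
links = [
    ("project gets funded", 0.8),
    ("team executes well", 0.6),
    ("result changes policy", 0.3),
    ("policy change improves outcomes", 0.7),
]

p_total = 1.0
for step, p in links:
    p_total *= p
    print(f"{step}: {p:.0%} -> cumulative {p_total:.1%}")

# The cumulative column drops fastest at the weakest link
# ("result changes policy"), which is where most of the probability seeps out.
```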
Love the beautiful and poetic style of this. Makes me feel happy inside!
I’m a bit confused about your view here. Why can’t EA Funds, with enough money, fund specific research projects and new charitable organizations? Why can’t they “work with mega-donors (...) to pursue much larger projects”?
Also, it seems like you have more faith than me in the collective wisdom of many non-experts, compared to a team of experts whose job is to work on the questions full-time. Do you think a donation would be better allocated by votes from 1,000 average EAs who each spend 2 hours on research, or by a team of 10 highly experienced EAs who each spend 200 hours?
Strongly agree with trying to think about creative ways of effectively pooling wisdom together to make better donations collectively. Prediction markets, as you point out, are an example of a really high-impact related idea, so there might be more in the vicinity. I should’ve made that clear from the start. I just don’t think this particular suggestion (Iterative Public Donation) beats what we’ve currently got. But, as I’ve said, I like the way you think. :)
This is really cool! Love seeing tangible efforts for near-term and large-scale reduction in animal suffering. The lack of easy and cost-competitive alternatives to anticoagulant rodenticides (ARs) worries me a bit, though.
I suspect the “ARs cause secondary poisoning in pets” angle would more effectively convince governments to mandate using alternatives. I don’t know if they have much power to enforce it, though.
General strategies include rodent-proofing housing and waste disposal systems; limiting access to harbourage, food, and water; and introducing predators (both wild and domesticated).
Do you guys think introducing predators (presumably cats) is a welfare improvement over ARs?
Profile pictures for EA Forum?
One of the reasons I like posting on Facebook is because it gives me plenty of opportunities to display my personality (e.g. profile picture, banner, friends, etc.). You might argue that it distracts from the content of the post, but… I don’t think that’s much of an effect? I’d be more incentivised to interact with the forum if I could show more of myself here.
I’m strongly in favour. I don’t think keeping the forum more impersonal will improve the rationality of users (or very weakly if so). But I do think it’ll make me (N=1) more likely to want to interact with it. I would use a non-photo image to display my personality.
The largest consideration against, I think, is that existing users who enjoy the forum’s impersonal feel may come to find it an ickier place.
“You don’t see profile pictures on journal articles, or court documents, or computer code.”
Makes me wonder what you think journal articles and court documents are optimised for… :P I don’t think we should take hints from systems that are optimised for something other than what we’re optimising for.
Correct me if I’m wrong, but I think in Christianity, there’s a lot of respect and positive affect for the “ordinary believer”. Christians who identify as “ordinary Christians” feel good about themselves for that fact. You don’t have to be among the brightest stars of the community in order to feel like you belong.
I think in EA, we’re extremely kind, but we somehow have less of this. Like, unless you have two PhDs by the age of 25 and you’re able to hold your own in a conversation about AI-alignment theory with the top researchers in the world… you sadly have to “settle” for menial labour with impact hardly worth talking about. I’m overstating it, of course, but am I wrong?
I’m not saying ambition is bad. I think shooting for the stars is a great way to learn your limits. But I also notice a lot of people suffering under intellectual pressure, and I think we could collectively be more effective (and just feel better) if we had more… room for “ordinary folk dignity”?
Nono, I’m not trying to point to a problem of EAs trying to make others feel unwelcome or dumb. I think EA is extremely kind, and almost universally tries hard to make people feel welcome. I’m just pointing to the existence of an unusually strong intellectual pressure, perhaps combined with lots of focus on world-saving heroes and talk about “what should talented people do?”
I think ambition is good, but I think we can find ways of encouraging ambition while also mitigating at least some of the debilitating intelligence-dysphoria many in our community suffer from.
I’m writing this in reaction to talking to three of my friends who suffer under the intellectual pressure they feel. (Note that the following are all about the intellectual pressure they get from EA, and not just in general due to academic life.)
Friend1: “EA makes me feel real dumb XD i think i feel out of place by being less intelligent”
Friend2: “I’m not worried that I’m not smart, but I am worried that I am not smart enough to meet a certain threshold that is required for me to do the things I want to do. … I think I have very low odds of achieving things I deeply want to achieve. I think that is at least partially responsible for me being as extremely uncomfortable about my intelligence as I am, and not being able to snap out of it.”
Me: “Do you ever refrain from trying to contribute intellectually because you worry about taking up more attention than it’s worth?”
Friend3: “hmm, not really for that reason. because I’m afraid my contribution will be wrong or make me look stupid. wrong in a way that reflects negatively on me—stupid errors, revealing intellectual or character weakness.”
Some of this is a natural and unavoidable result of the large focus EA places on intellectual labour, but I think it’s worse than it needs to be. I think some effort to instil some “ordinary EA dignity” into our culture wouldn’t hurt. I might have a skewed sample, however.
And to respond to your question about what I meant by “menial labour”. I was being poetic. I just mean that I feel like EA places a lot of focus on the very most high-status jobs, and I’ve heard friends despairing for having to “settle” for anything less. I sense that this type of writing might not be the norm for EA shortform, but I wasn’t sure.
I like the words inside beliefs and outside beliefs, almost-but-not-quite analogous to inside- and outside-view reasoning. The actual distinction we want to capture is “which beliefs should we report in light of social-epistemological considerations” and “which beliefs should we use to make decisions to change the world”.
On the social-epistemological point: Yes, it varies by context.
One thing I’d add is that I think it’s hard to keep inside/outside (or independent and all-things-considered) beliefs separate for a long time. And your independent beliefs are almost certainly going to be influenced by peer evidence, and vice versa.
I think this means that if you are the kind of person whose main value to the community is sharing your opinions (rather than, say, being a fund manager), you should try to cultivate a habit of mostly attending to gears-level evidence and to some extent ignoring testimonial evidence. This will make your own beliefs less personally useful for making decisions, but will make the opinions you share more valuable to the community.
Forum suggestion: An option to publish your post as “anonymous” or blank, which then reverts to reveal your real forum name after a week.
This would be an opt-in feature that lets new and old authors gain less biased feedback on their posts, and lets readers read the posts with less of a bias from how they feel about the author. At the moment, information cascades amplify the number of votes established authors get based on their reputation. This has both good (readers are more likely to read good posts) and bad (readers are less likely to read unusual perspectives, and good newbie authors have a harder time getting rewarded for their work) consequences. The anonymous posting feature would redistribute the benefits of cascades more evenly.
I don’t think the net benefit is obvious in this case, but it could be worth exploring and testing.
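For concreteness, here’s a minimal sketch of how the delayed reveal could work; the class, field, and function names are hypothetical illustrations, not a proposal for the actual forum codebase.

```python
# Minimal sketch of the opt-in delayed-reveal idea. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVEAL_DELAY = timedelta(days=7)  # reveal the real author after one week


@dataclass
class Post:
    author: str                    # real forum name, stored from the start
    body: str
    published_at: datetime
    delayed_reveal: bool = False   # opt-in flag chosen by the author

    def displayed_author(self, now: datetime) -> str:
        """Show 'Anonymous' until the reveal delay has passed."""
        if self.delayed_reveal and now - self.published_at < REVEAL_DELAY:
            return "Anonymous"
        return self.author


# Usage: an opted-in post displays as "Anonymous" during its first week.
post = Post("new_author", "My first post", datetime.now(timezone.utc), delayed_reveal=True)
print(post.displayed_author(datetime.now(timezone.utc)))  # -> Anonymous
```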
There are a bunch of illegible factors involved in hiring the right person, though. If the reason for rejection is something like “we think you’d be a bad culture fit,” then it seems legally risky to be honest.
What’s the case for thinking that grantmaking skills are a bottleneck?
Love this! Brilliant.
Also, “renown” → “renowned”.
I didn’t read this short story as supporting cancel culture at all. To me, the good guys in this story are the people who advocate for recognising that people can have both good and bad sides. And the main point of celebration is that they’re talking about factory farming as a troubling past history, just like they talk about slavery today. Did you read it differently?
Great post! The main reason academics suffer from “myopic empiricism” is that they’re optimising for legibility (an information source is “legible” if it can be easily trusted by others), both in their information intake and output. Or, more realistically, they’re optimising for publishing estimable papers, and since they can’t reference non-legible sources of evidence, they’ll be less interested in attending to them. One way to think about it is that “myopic academics” are trapped in an information bubble that repels non-legible information.
And I think this is really important. We need a source of highly legible data, and academic journals provide exactly that (uh, in theory). It only starts being a big problem once those papers start offering conclusions about the real world while refusing to leave their legibility bubble. And that sums up all the failures you’ve listed in the article.
The moral of the story is this: scientists really should optimise for legibility in their data production, and this is a good thing, but if they’re going to offer real-world advice, they’d better be able to step out of their legibility bubble.