I don’t know why people overindex on loud grumpy twitter people. I haven’t seen evidence that most FAccT attendees are hostile and unsophisticated.
I would like the XPT to be assembled with the sequence functionality, please: https://forum.effectivealtruism.org/users/forecasting-research-institute I like the sequence functionality for keeping track of my progress and the order in which I read things.
This also brings time to mind: it seems like projects and roles are uncorrelated enough right now that it’s fine to date, but two years of unforeseen career developments between the two of you could create something like a formal power asymmetry. Are you obligated to red-team dates with respect to where your respective careers might end up?
Yes! To be clear, reading, or many forms of recommending, is not the red flag; the curiosity, or a DADA-like view of the value prop of books like that, makes sense to me. But the specific way it comes across in the passage on the Adorian Deck saga makes hiding behind “defensive cynicism” look very weak and sound almost dishonest. The broader view is more charitable toward Emerson in this particular way (see this subthread).
I wrote that comment while I was still mid-read of the OP. Earlier in the essay there’s an account of the Adorian Deck situation, then the excerpts from the book, which is as far as I had gotten before commenting. Only later in the OP does the case that Emerson is interested in literature like this for DADA reasons become clearer and defensible.
I apologize for commenting before I got to the end of the post.
But like. It seems that the tide is turning toward “oh, flooding the EA forum with anonymous sniping from the sidelines is the Cool And Correct Thing To Do Now” and that seems like two or three distinct kinds of bad.
Yes, this tends to bug me a lot. I think Ben is doing something different here, because he’s:
- Not anonymous
- More transparent about what the on-the-ground facts actually are, as best he can tell, before coming up with interpretations or judgments (compared to the usual “sniping from the sidelines” post)
48 Laws of Power sounds like quite the red flag of a book! It’s usually quite hard to know whether someone begrudgingly takes on zero-sum worldviews for tactical reasons or is predisposed to, or looking for an excuse for, being cunning, but an announcement like this (in the form of simply being excited about this book) seems like a clear forfeiture of any claim on others’ obligation to act cooperatively toward you.
I just think this is a “law of opposite advice” situation! You’re right, but the point is that EAs are already trying so so hard to correct in this direction that it’s a little silly sometimes (certainly on the forum). The hugboxing frame makes a lot of sense to me.
This is extremely old; this version https://2016.webcampzg.org/talks/view/superintelligence-the-idea-that-eats-smart-people/ is from 2016.
Very interesting, I hadn’t heard about this!!
I’ve only really gotten to know scrupulous agreeable vegans who are aware of the hostile vegan perception and so correct against it really hard. Because of this social context, I definitely roll my eyes at nonvegans complaining about aggro vegans, cuz it sounds made up! I also think “does anyone mind if I order a steak” and maybe even napkin math is not only a reasonable equilibrium, I go further and say that it’s like a minimum viable equilibrium for any defensible understanding of moral uncertainty or epistemic cooperation. And I’ve definitely met plenty of nonvegans who would interpret the suggestion of this norm as smug virtue signaling going too far and whatever, and I don’t even know how to become sympathetic to this reaction haha.
More to the point: yes, in 2015 part of my exhaustion and disillusionment with veganism was basically that if 1/10 restaurants are viable for me, then I’m just deleting 90% of interactions from my lightcone in a way that’s annoying to go out of my way to correct for. This definitely matters, and it was salient to me! I’ve always been into finding common ground with my species through bantering in local slang on the sidewalk or getting takeout; it’s restorative and helps me fight various sources of jaded cope with the brokenness of the world.
It’s true that the correlation between longtermism and the framings of the problem that socially overlap with it could be made spurious! There are a lot of bells and whistles on longtermism that don’t need to be there, especially for the 99% of what needs to be done in which fingerprints never come up.
It does seem like a misjudgment, cuz the point of “my friends are sucked into a charismatic cult leader” doesn’t necessarily have a lot to do with object-level conclusions? It’s about framing, the way attention is directed. An example of what I mean: “believing true things is hard and evolution’s spaghetti code is unusually bad at it” is a frame (a characterization of an open problem), and you don’t just throw it away when you say “this particular study was very credulously believed because no one had tried replicating it by the time Thinking, Fast and Slow was published, but you should’ve smelled/predicted something was wrong back then”. If you’re worried about overconfidence or overdeference among your friend group, it’s pretty unrealistic for them to just take the wrong outputs at face value; people correcting someone’s mistakes is just the peer review process working as intended! If you really want to be concerned about this, you should show us that “if you’re starting from correcting his object-level mistake, then you’re not being maximally efficient or clear in your own pursuit of answers”. I think that would work!
Apparently some old-school news anchor, 1950s-era or thereabouts, said “we don’t tell people what to think; we tell them what to think about”. That seems obviously, to me, to be the true source of fraught cult leader stuff, if there is any!
this is clearly a law of opposite advice situation.
take me in, but don’t let anyone enter after me.
Well, hm, you’re probably underrating the degree to which people dislike the possibility of being held to a lower standard, feel that it’s condescending, etc., when there are stated adjustments for demographic representation. (Perhaps the Polgár sisters are the example closest to my fingertips recently, but if you go to enough professional conferences or talk to enough people, it doesn’t take long to run into some minority rolling their eyes at the inclusion effort.)
who fear competition and refuse diversity for selfish reasons
I mean, people have been telling me that my immutable characteristics put me on thin ice cuz everyone’s bored by cringefail whitemales for as long as I can remember, and it pretty much always makes me go “fine, I’ll just be more clever or work harder”, which is probably a habit that’s been good for me if you think about it, cuz it leads to me cultivating a higher standard for myself, lmao!
This consideration is already “priced in” to GiveDirectly’s worldview; the whole “reforming paternalistic versions of charity by transferring cash / they know what they need better than we do” stance is well established and remains held in high regard to this day.
Yeah, I thought of it from the perspective of “not being told what to think but being told what to think about”. You could say “the most profitable (in karma of a website) strategy is to disagree with a ‘founder’-like figure of that very website”, of course, but indeed, if you’ve accepted his frame of the debate, didn’t he “win” in a sense? This seems technically true often (not always!) but I find it uncompelling.
I’ve spotted several issues with the sequences that the rationalists seemingly haven’t.
Where did you write these down?
Sometimes I try to channel my anger at birth lotteries into motivation, along the lines of “someone who’d do a way better job than you in this situation is muscled out cuz they didn’t have the foresight to be born correctly, so you really owe it to them not to eff it up”, and the results are just as mixed as any “obligation” framing. Yet it seems more productive than convincing myself that that anger is actually the object level cause area, cuz if I did that I would just read the news all day about why outgroup is keeping us from making immigration policy less wrong and get zero done.
I read To the Lighthouse not far away in time from when I read Methods, and I was annoyed or confused about why I was reading it. And there was a while, ten years ago, when Satantango and Gravity’s Rainbow were occupying a massive subset of my brain at all times, so I’m not “too dumb for litfic” or whatever.
I think OP omitted many details of why it might be plausible, and I wouldn’t expect the disagree voters to have any idea about what’s going on there:
To me, the literary value of EA stories is thinking through the psychological context of trying to think more clearly, trying to be good, whatever. Building empathy for questions like “how would you derive EA-shaped things, and then build on them, from within a social reward surface that isn’t explicitly selecting for that?” and “what emotional frictions or mistake classes would I expect?” seems plausibly quite valuable for a ton of people.
Basically every other thing you could say about the value prop of reading Methods is downstream of this! “upskilling” was an intense word choice, but only 95% wrong.
With that in mind, I do think a ton of topics salient to EAs show up within Methods and many of them get thoroughly explored.
Strong agree—there are so many ways to go off the rails even if you’re prioritizing being super humble and weak[1]
[1] “weak” as in the usage “strong views weakly held”