Project lead of LessWrong 2.0, often helping the EA Forum with various issues. If something is broken on the site, there's a good chance it's my fault (Sorry!).
Habryka
That’s an interesting idea, I hadn’t considered that!
Yeah, I’ve considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class).
I think a whole multi-selection UI would be hard, but a user setting on your profile that lets you set your upvote strength to any number between 1 and your current vote strength seems less convenient but much easier UI-wise. It would still require some fairly involved changes to the way votes are stored (we currently have an invariant that guarantees you can recalculate any user's karma from nothing but the vote table, and this would introduce a new dependency into that calculation, with some reasonably big performance implications).
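To make the invariant concrete, here is a minimal illustrative sketch (hypothetical Python, not LessWrong's actual schema; the table layout and names are assumptions):

```python
# Sketch of the invariant: a user's karma can be recomputed from
# nothing but the vote table. All names here are hypothetical.

votes = [
    # (voter_id, target_author_id, vote_power)
    ("alice", "bob", 2),
    ("carol", "bob", 10),
    ("alice", "carol", 1),
]

def recalculate_karma(user_id, vote_table):
    """Karma is just the sum of vote powers a user has received."""
    return sum(power for _voter, author, power in vote_table
               if author == user_id)

print(recalculate_karma("bob", votes))  # 12
```

Under this sketch, a user-configurable vote strength means a vote's power can no longer be derived from the voter's current state alone: recomputing karma from scratch would also need the history of each voter's setting at the time each vote was cast, which is the kind of new dependency (and performance cost) described above.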
(I care quite a bit about votes being anonymous, so will generally glomarize in basically all situations where someone asks me about my voting behavior or the voting behavior of others, sorry about that)
My guess is LW both bans and rate-limits more.
Academia before the mid-20th century was a for-profit enterprise. It did not receive substantial government grants and was indeed often very tightly intertwined with the development of industry (much more so than today).
Indeed, the degree to which modern academia is operating on a grant basis and has adopted more of the trappings of the nonprofit space is one of the primary factors in my model of its modern dysfunctions.
Separately, I think the contribution of militaries to industrial and scientific development is overrated, though that also would require a whole essay to go into.
I take a very longtermist and technology-development focused view on things, so the GHD achievements weigh a lot less in my calculus.
The vast majority of world-changing technology was developed or distributed through for-profit companies. My sense is nonprofits are also more likely to cause harm than for-profits (for reasons that would require its own essay to go into, but are related to their lack of feedback loops).
This is an extremely rich guy who isn’t donating any of his money.
FWIW, I totally don’t consider “donating” a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies.
I don’t have a particularly strong take on Bryan Johnson, but using “donations” as a proxy seems pretty bad to me.
Less than a year ago Deepmind and Google Brain were two separate companies (both making cutting-edge contributions to AI development). My guess is if you broke off Deepmind from Google you would now just pretty quickly get competition between Deepmind and Google Brain (and more broadly just make the situation around slowing things down a more multilateral situation).
But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since that action itself might invite further anti-trust action.
Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).
Yeah, that's a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think do also make sense to treat separately, though I definitely intend to also try to debias my statements about individuals on this dimension, after my experiences with SBF in particular.
Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine, I think).
Like, Sean’s comment basically said “I think it was directly Bostrom’s fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain”. My comment is more specific, but I don’t really see it as harsher. I also have a prior to not go into critiques of individual people, but that’s what Sean did in this context (of course Bostrom’s judgement is relevant, but I think in that case so is Sean’s).
Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action, using non-shared assumptions, that there is pushback.
The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don’t think it’s common for people to push back when someone is expressing some personal belief of theirs that is only affecting their own actions.
In this case, I think it's somewhat ambiguous whether I was arguing for a collective path of action or just explaining my private beliefs. By making a public comment I at least asserted some claim of relevance to others, but I also didn't explicitly say that I was trying to get anyone else to change their behavior.
And in either case, invoking social censure on the basis of someone expressing a belief of theirs without also giving a comprehensive argument for that belief seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don’t think EA has historically been such a place, nor wants to be such a place).
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.
I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start
Yep, that’s the one I was thinking about. I’ve changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.
When people make claims, we expect there to be some justification proportional to the claims made.
To be clear, I also absolutely do not hold myself to this standard. I feel totally fine casually mentioning controversial and important beliefs of mine whenever it seems relevant, without an obligation to fully back them up, and I encourage others to do the same. Indeed, I am pretty confused about what norm you are referring to here, since I also can't think of this norm in almost any context I am in.
If someone mentions they believe in God, I don't expect that this means they are ready or want to have a conversation about theology with me right then and there. When someone says they vote libertarian in the US general election, I totally don't expect to have a conversation with them about macroeconomic principles right there. People express large, broad claims all the time without wanting to go into all the details.
This thread doesn’t feel great for this, though CSER is an organization for which I do really wish more people shared their assessments. Also happy to have a call if your curiosity extends that far, and you would be welcome to write up the things that I say in that call publicly (though of course that’s a lot of work and I don’t think you have any obligation to do so).
Thanks Sean. I think this is a good comment, and it makes me understand your perspective better.
I do think we obviously have large worldview differences here, that seem maybe worth exploring at some point, but this comment (as well as some private conversations sparked by these comments with others at FHI) made me feel more sympathetic to the perspective of “there is some history-rewriting happening that seems scary, where the university gets portrayed as this kind of boogeyman, and while it does seem the university did some unreasonable-seeming things, I think a lack of empathy and practicality in relation to that university had a lot of bad effects on both the university and FHI, and we should be very wary of remembering the story of FHI as a purely one-sided one”.
You made hostile claims that weren’t following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn’t have anything to back it up.
This seems relatively straightforwardly false. Inasmuch as Sean is making claims about the right strategy for FHI to have followed, and claiming that the errors at FHI were straightforwardly Bostrom's fault and attributable to 'garden variety incompetence', the degree of historical success of the strategies that Sean seems to be advocating for is of course relevant in assessing whether that's accurate. And CSER and Leverhulme seem like the obvious case studies available here.
We can quibble over the exact degree of relevance of the points I brought up, but the logical connection here seems straightforward.
didn’t have anything to back it up.
Separately, I see no way how you could know whether I have anything to back up my criticism. I have written about my thoughts on CSER in the past, and I did not intend to write up all the thoughts and evidence I have in this thread.
If you want we can have a call for an hour, or you can investigate this question yourself and come to your own conclusion, and then you can make a judgement of whether I have anything to back up my opinion, but as I have said upthread, I don’t consider myself to have an obligation to extensively document the evidence for all of my opinions and judgements before I feel comfortable expressing them.
Yeah, I agree this is a real dynamic. It doesn't sound unreasonable for me to have a standard link that I link to whenever I criticize people on here, making it salient that I am aspiring to be less asymmetric in the information I share (I do think the norms are already pretty different over on LW, where if anything criticism is a bit less scrutinized than praise, so it's not like this is a totally alien set of norms).
Oh, I quite like the idea of having the AI score the writing on different rubrics. I’ve been thinking about how to better use LLMs on LW and the AI Alignment Forum, and I hadn’t considered rubric scoring so far, and might give it a shot as a feature to maybe integrate.