Neel Nanda
I lead the DeepMind mechanistic interpretability team
Community seems the right categorisation to me. The main reason to care about this is understanding the existing funding landscape in AI safety, and how much to defer to those funders / trust their decisions. And I would consider basically all the large funders in AI Safety to also be in the EA space, even if they wouldn’t technically identify as EA.
More abstractly, a post about conflicts of interest and other personal factors in a specific community of interest seems to fit this category.
Being categorised as community doesn’t mean the post is bad, of course!
Personally, I find the idea somewhat odd/uncomfortable, but I also vaguely buy the impact case, so I’ve only added it on LinkedIn, as that’s the social network where I feel like the norm is shameless signalling and where I tie it least to my identity. I may as well virtue signal rather than just brag!
This seems like a question of what the policy is, not of judgement about how to apply it, in my opinion.
The three examples you gave are obviously in the category of “controversial community drama that will draw a lot of attention and strong feelings”, and I trust the mods’ ability to notice this. The question is whether the default policy is to make such things personal blog posts. I personally think this would be a good policy, and that anything in this category is difficult to discuss rationally. I do also consider the community section a weaker form of low visibility, so there’s something here already, but I would advocate for a stronger policy.
Another category is “anything about partisan US politics”, which I don’t think is that hard to identify, is clearly hard to discuss rationally, and which in my opinion it’s reasonable to make less visible by policy.
I don’t trust karma as a mechanism, because if a post is something people have strong feelings about, and many of those feelings are positive (or at least righteous-anger-style feelings), it often gets high karma. Eg the Nonlinear posts got a ton of attention and very high karma, were in my opinion quite unproductive and distracting, and I think it would have been good if they had been less visible.
I agree that this is inconsistent (it looks like Ben’s Nonlinear post is front page). But my conclusion is that community drama should also be made less visible except to those who opt in, not vice versa. The separate section for community posts was a decent start.
I personally set them to the same visibility as normal posts, so this doesn’t matter to me. But I don’t know the stats on how many forum users do so. If basically all forum users have them hidden then I would consider this stronger censorship.
It sounds like you agree it’s difficult, you just think EA Forum participants will successfully rise to the challenge?
Which, idk, maybe, maybe not; it seems high variance, and I’m less optimistic than you. And making things personal blog posts makes them less visible to new forum users (hidden by default, I think?) but not to more familiar users who opt in to seeing personal blog posts, which seems great for higher quality conversations. So yeah, idk, ultimately the level of filtering here is very mild and I would guess net good.
I think “difficult to discuss rationally” and “unable to discuss rationally” are two completely different things that it’s important not to conflate. It just seems very obviously true that posts on US politics are more likely to lead to drama, fighting, etc. There are definitely EAs who are capable of having productive and civil conversations about politics; I’ve enjoyed several such conversations, and find EAs much better at this than most groups. But public online forums are a hard medium for such discussions. And I think the moderating team have correctly labelled any such posts as difficult to discuss rationally. Whether you agree with making them less visible is up to you; I personally think it’s fairly reasonable.
In my opinion that post was bizarrely low quality and off base, and not worth engaging with: EA beliefs do not necessarily imply that the market will drop (I personally think a lot of the risk comes from worlds where AI is wildly profitable and drives a scary competitive race, but where companies are making a LOT of money); lots of EAs have finance backgrounds or are successful hedge fund workers earning to give, so his claim that no EAs understand finance is spurious; this definitely IS something some EAs have spent a while thinking about; even if we did have a thesis, converting it into a profitable trade is hard and has many footguns; and some of us have better things to do with our time than optimising our portfolios.
For what it’s worth, my guess is that the best trade is going long volatility, since my beliefs about AI do imply that the world is going to get weird fairly quickly even if I don’t know exactly how.
Sure, I agree that under the (in my opinion ridiculous and unserious) accounting method of looking at the last actor, zero is a valid conclusion.
I disagree that small is accurate. Even if I’m being incredibly charitable and say that the donor accounts for only 1% of the overall ecosystem saving the life, we still get to 2,000 lives saved, which seems highly unreasonable to call small; to me, small is at most <100.
What does Reclaim give you? I’ve never heard of it, and the website is fairly uninformative.
“It looks like the total number of lives saved by all Singer- and EA-inspired donors over the past 50 years may be small, or even zero.”
This conclusion from the first half of the letter seems unjustified by the prior text?
You seem to be arguing that there’s a credit allocation problem: many actors actually contribute to a bednet saving a life, but GiveWell-style calculations ignore this and give all the credit to the donor, which leads to overcounting. I would describe this as GiveWell computing the marginal impact, which I think is somewhat reasonable (how is the world different if I donate vs don’t donate), but I agree this has issues and there are arguments for better credit allocation methods. I think this is a fair critique.
But I feel like at best this dilutes the impact by a factor of 10, maybe 100 at an absolute stretch. If we take rough estimates like 200,000 lives saved via GiveWell (a rough estimate justified in footnote 1 of this post), that’s still 20,000 or 2,000 lives saved. I don’t see how you could get to “small or even zero” from this argument.
I like Caleb’s answer. Some more thoughts:
Clearly writing out the evidence that you’re a good candidate for the grant. This can look like conventional credentials, past projects, references, etc.; just anything which increases my probability that it’s a good idea.
Scoping out the purpose of the grant. In particular, I generally expect that when someone without a track record of AI Safety or similar research does an independent research project, it’s fairly unlikely to actually result in impactful research (doing good research is really hard!), and most of the impact comes from helping the person skill up, test their fit, and either get a job or do better projects in future. You tend to skill up best when making a sincere effort to do good research, so this doesn’t mean don’t think about it at all, but I would also discuss the skilling-up benefits to you and why that matters.
Apply to several funders where possible.
I’m surprised by this one! I see how it’s in the applicant’s interests, but why does it matter to you?
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree to have their leadership scrutinized.
My argument is that barring them doesn’t stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences
With all the scandals we’ve seen in the last few years, I think it should be very evident how important transparency is
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen’s, or SBF/Alameda’s) should raise them more transparently (though that has many downsides), but that’s a very different kind of transparency.
I’m not expressing an opinion on that. The post makes a clear claim that their legal status re tax deductibility will change if more EU citizens sign up. This surprises me and I want to understand it better. I agree there are other benefits to having more members; I’m not disputing that.
I’m surprised that having more members lets you offer better tax deductions (and that they don’t even need to be Danish taxpayers!). What’s up with that?
Seems like she’ll have a useful perspective that adds value to the event, especially on brand. Why do you think it should be arm’s length?
This seems fine to me. I expect that attending this is not a large fraction of most attendees’ impact on EA, and that some who didn’t want to be named would not have come if they needed to be on a public list, so barring such people seems silly (I expect there are some people who would tolerate being named as the cost of coming, of course). I would be happy to find some way to incentivise people to be named.
And really, I don’t think it’s that important that a list of attendees be published. What do you see as the value here?
Seems reasonable (tbh with that context I’m somewhat OK with the original ban), thanks for clarifying!
In fairness, I wrote my post because I saw lots of people making arguments for a far stronger claim than necessary, and was annoyed by this