I’m a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.
Do you have a rough estimate of the current size?
Do you think it would be better to not suggest any action, or to filter these suggestions without any input from other people?
I’m not sure what you mean. I’m saying the post should have been more carefully argued.
discouraging intimate relationships between senior and junior employees of EA orgs
I assume that by this you mean employees of the same org (I think that’s the natural reading). But the post rather says:
Senior community members should avoid having intimate relationships with junior members.
That’s a very different suggestion.
Could you elaborate more on (some of) these considerations and why you think the cultishness risk is being overestimated relative to them?
My intuition is that it’s being generally underestimated
I didn’t say the cultishness risk is generally overestimated. I said that this particular post overestimates that risk relative to other considerations, which are given little attention. I don’t think it’s right to suggest a long list of changes based on one consideration alone, while mostly neglecting other considerations. That is especially so since the cult notion is anyway kind of vague.
There are many considerations of relevance for these choices besides the risk of becoming or appearing like a cult. My sense is that this post may overestimate the importance of that risk relative to those other considerations.
I also think that in some cases, you could well argue that the sign is the opposite of what’s suggested here. E.g. frugality could instead be seen as evidence of cultishness.
Yeah, I get that. I guess it’s not exactly inconsistent with the “shot through” formulation, but it’s probably a matter of taste how to frame it so that the emphasis comes out right.
I guess you want to say that most community building needs to be comprehensively informed by knowledge of direct work, not that each person who works in (what can reasonably be called) community building needs to have that knowledge.
Maybe something like “Most community building should be shot through with direct work”, or something more loosely along those lines.
Though maybe you feel that still presents direct work and community-building as more separate than ideal. I might not fully buy the one camp model.
Yes. I see some parallels between this discussion and the discussion about the importance of researchers being teachers and vice versa in academia. I see the logic of that a bit but also think that in academia, it’s often applied dogmatically and in a way that underrates the benefits of specialisation. Thus while I agree that it can be good to combine community-building and object-level work, I think that that heuristic needs to be applied with some care and on a case-by-case basis.
I liked this post by Katja Grace on these themes.
Here is one way the world could be. By far the best opportunities for making the world better can be supported by philanthropic money. They last for years. They can be invested in a vast number of times. They can be justified using widely available information and widely verifiable judgments.
Here is another way the world could be. By far the best opportunities are one-off actions that must be done by small numbers of people in the right fleeting time and place. The information that would be needed to justify them is half a lifetime’s worth of observations, many of which would be impolite to publish. The judgments needed must be honed by the same.
These worlds illustrate opposite ends of a spectrum. The spectrum is something like, ‘how much doing good in the world is amenable to being a big, slow, public, official, respectable venture, versus a small, agile, private, informal, arespectable one’.
In either world you can do either. And maybe in the second world, you can’t actually get into those good spots, so the relevant intervention is something like trying to. (If the best intervention becomes something like slowly improving institutions so that better people end up in those places, then you are back in the first world).
An interesting question is what factor of effectiveness you lose by pursuing strategies appropriate to world 1 versus those appropriate to world 2, in the real world. That is, how much better or worse is it to pursue the usual Effective Altruism strategies (GiveWell, AMF, Giving What We Can) relative to looking at the world relatively independently, trying to get into a good position, and making altruistic decisions.
I don’t have a good idea of where our world is in this spectrum. I am curious about whether people can offer evidence.
Thanks, I think this is an interesting take, in particular since much of the commentary goes in the opposite direction: that EAs should be less inclined to try to get into an effective altruist organisation.
I think one partial (and maybe charitable) explanation of why independent grants play such a big role in effective altruism is that they scale quite easily: you can just pay out more money, and don’t need a managerial structure. By contrast, scaling an organisation takes time and is difficult.
I could also see room for organisational structures that are somewhere in-between full-blown organisations and full independence.
Overall I think this is a topic that merits more attention.
Interesting point.
I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.
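As a rough back-of-the-envelope illustration of that difference (the function and the normal approximation below are just my own sketch, not a proposal for how the forum should display anything):

```python
import math

# Rough illustration with the hypothetical vote counts from the comment above.
# Uses the normal approximation, which is crude at four votes, but it makes the point.
def agreement_standard_error(p: float, n: int) -> float:
    """Approximate standard error of an observed agreement share p based on n votes."""
    return math.sqrt(p * (1 - p) / n)

for n in (4, 40):
    print(f"75% agreement from {n:>2} votes: standard error ~ {agreement_standard_error(0.75, n):.2f}")

# Prints roughly 0.22 for 4 votes and 0.07 for 40 votes.
```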
I would prefer a more robust anti-spam system; e.g. preventing new accounts from writing Wiki entries, or enabling people to remove such spam. Right now there is a lot of spam on the page, which reduces readability.
Thanks!
Extraordinary growth. How does it look on other metrics; e.g. numbers of posts and comments? Also, can you tell us what the growth rate has been per year? It’s a bit hard to eyeball the graph. Thanks.
This kind of thing could be made more sophisticated by making fines proportional to the harm done
I was thinking of this. Small funders could then potentially buy insurance from large funders, allowing them to fund projects they deem net positive even though there’s a small risk of a fine that would be too costly for them to bear on their own.
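As a purely hypothetical numerical sketch of how such an insurance arrangement could work (all the figures below are made up for illustration):

```python
# All numbers are made up purely for illustration.
project_benefit = 200_000   # expected benefit of the project, in dollars
fine_probability = 0.02     # small chance the project triggers a fine
fine_size = 5_000_000       # a fine this size would be unaffordable for the small funder

expected_fine = fine_probability * fine_size  # 100,000
premium = 1.2 * expected_fine                 # large funder charges a 20% loading: 120,000

# After buying insurance, the small funder keeps a positive expected value
# without bearing a ruinous tail risk.
print(project_benefit - premium)  # 80000.0
```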
They refer to Drescher’s post. He writes:
But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.
Standard distribution mismatch. Standard investment vehicles work the way that if you invest into a project and it fails, you lose 1 x your investment; but if you invest into a project and it’s a great success, you may make back 1,000 x your investment. So investors want to invest into many (say, 100) moonshot projects hoping that one will succeed.
When it comes to for-profits, governments are to some extent trying to limit or tax externalities, and one could also argue that if one company didn’t cause them, then another would’ve done so only briefly later. That’s cold comfort to most people, but it’s the status quo, so we would like to at least not make it worse.
Charities are more (even more) of a minefield because there is less competition, so it’s harder to argue that anything anyone does would’ve been done anyway. But at least they don’t have as much capital at their disposal. They have other motives than profit, so the externalities are not quite the same ones, but they too increase incarceration rates (Scared Straight), increase poverty (preventing contraception), reduce access to safe water (some Playpumps), maybe even exacerbate s-risks from multipolar AGI takeoffs (some AI labs), etc. These externalities will only get worse if we make them more profitable for venture capitalists to invest in.
We’re most worried about charities that have extreme upsides and extreme downsides (say, intergalactic utopia vs. suffering catastrophe). Those are the ones that will be very interesting for profit-oriented investors because of their upsides and because they don’t pay for the at least equally extreme downsides.
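To make the mismatch concrete, here is a minimal sketch with entirely made-up numbers (the probabilities and payoffs are mine, purely for illustration, and not taken from Drescher’s post): the investor’s downside is capped at the stake, while the downside in impact terms is not.

```python
# Entirely made-up numbers, purely for illustration.
p_success = 0.01    # chance a moonshot project succeeds spectacularly
p_disaster = 0.01   # chance it causes a large negative externality
stake = 1.0         # investor's stake, normalised to 1

# Investor payoff per project: upside of 1,000x, downside capped at losing the stake.
investor_ev = p_success * 1000 * stake - (1 - p_success) * stake   # ~ +9.0

# Impact per project: the same upside, but the downside is not capped at the stake.
impact_ev = p_success * 1000 - p_disaster * 5000                   # -40.0

print(f"Investor expected payoff: {investor_ev:+.1f}x the stake")
print(f"Expected impact:          {impact_ev:+.1f} (arbitrary units)")
```

With numbers like these the investor comes out well ahead in expectation while the expected impact is sharply negative, which is the worry about making such charities more attractive to profit-oriented investors.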
If anything, I think that prohibiting posts like this from being published would have a more detrimental effect on community culture.
Of course, people are welcome to criticise Ben’s post—which some in fact do. That’s a very different category from prohibition.
I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form.
That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn’t be posted.
This question is related to the question of how much effort effective altruism as a whole should put into movement growth relative to direct work. That question has been discussed more; see e.g. the Wiki entry and posts by Peter Hurford, Ben Todd, Owen Cotton-Barratt, and Nuño Sempere/Phil Trammell.
Yeah, I think it would be good to introduce premisses about when AI and bio capabilities that could cause an x-catastrophe (“crazy AI” and “crazy bio”) will be developed. The following elaborates on a (protected) tweet of Daniel’s.
Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about them, and that in your view they’re uncorrelated.
Suppose also that we modify 2 into “a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both crazy AI and crazy bio, and conditional on there being no other x-catastrophe”. (I think that captures the spirit of Ryan’s version of 2.)
Suppose also that you think that in the world where crazy AI gets developed first, there is a 90% chance of an accidental AI x-catastrophe, and that in 50% of the worlds where there isn’t an accidental x-catastrophe, there is a non-accidental AI x-catastrophe, meaning the overall risk is 95% (in line with 3). In the world where crazy bio is instead developed first, there is a 50% chance of an accidental x-catastrophe (by the modified version of 2), plus some chance of a non-accidental x-catastrophe, meaning the overall risk is a bit more than 50%.
Regarding the timelines of the technologies, one way of thinking would be to say that there is a 50/50 chance that we get crazy AI or crazy bio first, meaning there is a 47.5% chance of an AI x-catastrophe and a >25% chance of a bio x-catastrophe (plus additional small probabilities of the slower crazy technology killing us in the worlds where we survive the first one; but let’s ignore that for now). That would mean that the ratio of AI x-risk to bio x-risk is more like 2:1. However, one might also think that there is a significant number of worlds where both technologies are developed at the same time, in the relevant sense, and that your original argument could potentially be applied as it stands to those worlds. If so, that would increase the ratio between AI and bio x-risk.
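Making the arithmetic explicit (same made-up numbers as above):

$$P(\text{AI x-cat}\mid\text{AI first}) = 0.9 + 0.5\times 0.1 = 0.95, \qquad P(\text{bio x-cat}\mid\text{bio first}) > 0.5$$

$$P(\text{AI x-cat}) \approx 0.5\times 0.95 = 47.5\%, \qquad P(\text{bio x-cat}) > 0.5\times 0.5 = 25\%$$

which gives a ratio of roughly 2:1, ignoring the worlds where the slower technology later catches up.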
In any event, this is just to spell out that the time factor is important. These numbers are made up solely for the purpose of showing that, not because I find them plausible. (My example could no doubt be improved.)
Great, thanks—I appreciate it. I’d love a systematic study akin to the one Seb Farquhar did years back.
https://forum.effectivealtruism.org/posts/Q83ayse5S8CksbT7K/changes-in-funding-in-the-ai-safety-field