Can someone elaborate on what assessing candidates’ value-alignment/morality and making decisions about that looks like in practice? I work in the ‘traditional’ charitable sector (for lack of a better word), and the number one piece of advice that I’ve always heard given to hiring managers is “ask people about their skills, never about their morality or commitment to the cause” (and, as an addendum, dock people who spend too much time in the interview talking about those things). Obviously there are special cases where someone’s value-alignment is considered—e.g. cases where people avoid hiring candidates with associations with organizations that seem a little ‘questionable’ vis-a-vis the cause—but overall I’ve not really heard a lot about assessing or taking into account people’s value-alignment/morality during hiring decisions. So, with that in mind—do some EA-aligned organizations screen candidates based on morality/commitment/value-alignment? If so, how do they go about doing that—what sort of interview questions can get that information out of people accurately?
Rick
Encouraging altruistic workers to self-subsidize altruistic work is a dangerous path to go down, in my opinion. On a large scale, it can (and does!) put downward pressure on sector-wide wages, which in turn can push qualified people away from the sector (thus hurting the talent pool), may disproportionately exclude people of humble backgrounds from getting jobs in altruistic work (which in turn helps ensure that people of privilege are overrepresented in those jobs, which is not good—e.g. this post), and may also, according to some sociological theories, lead to a societal devaluation of such work (e.g. what is seen with care work, which is also bad). I’d much rather let such people seek normal wages, and then donate the excess—you get the same benefits, but avoid all of the associated problems.
The rationale is mostly borne of experience, from what I can tell (e.g. managers experiencing consistent success with this setup), but formally it is that 1) you should hire based on who will do the most good in the position, and 2) asking about experience and skills is the best way of figuring out whether they’ll do the most good. Outside of corruption, which is a whole other discussion, the difference between very moral person A and mediocrely moral person B is that person A may dedicate more time to thinking about and working on the cause, which in turn becomes results. If person A is not as smart as person B, but works harder and gets better results as a result, you should hire person A. Conversely, if person B really doesn’t care that much, slacks off a lot, but is a genius who consistently gets better results than person A, you should hire person B. In both cases, asking about their morality isn’t going to tell you who will be most effective—it’s an easy thing to lie about, and when it does play a large role it will show up in your skills and experience anyway (past success is an indicator of future success).
Possibly! Outside of a few annoying high-profile groups (who shall not be named), you don’t really hear people working for charitable causes say “I’m in it for the money”. I’m pretty sure this situation is mostly driven by a lack of money mixed with the availability of people who are willing to take a pay cut to work in aid, rather than it being a conscious attempt at screening workers for morality. It may be worth researching the ‘screening for morality’ aspect further—I haven’t really seen much on the implications of it (hence my curiosity about how it would work in practice—it’s a very interesting thought!). Either way, there’s a sweet spot somewhere, it’s just a question of where—how much below market rate do you need to pay charitable workers to best balance the benefits of screening for morality and saving money against possible side effects like those I mentioned downpost?
In humanitarian work, for example, I think we’ve gone too far (as one writer put it, “it’s unrealistic to expect us to live like monks”). On a related note, it may be worth looking into the large debate on the professionalization of the humanitarian aid sector. Basically, for a very long time the humanitarian aid sector under-invested in the professional development, mental health, safety, and general wellbeing of its workers, because the kind of people who work in frontline aid work tend to be willing to do it anyway even if they are getting paid next to nothing, are in serious danger all the time, and are under-invested in by their organization. Unsurprisingly, burn-out and untreated PTSD are common. As an aside, professionalization also seems to be slowly increasing the effectiveness of humanitarian aid, which is great.
I agree with a lot of the other folk here that risk aversion should not be seen as a selfish drive (even though, as Gleb mentioned, it can serve that drive in some cases), but rather is an important part of rational thinking. In terms of directly answering your question, though, regarding ‘discounting future life’, I’ve been wondering about this a bit too. So, I think it’s fair to say that there are some risks involved with pursuing X-risks: there’s a decent chance you’ll be wrong, you may divert resources from other causes, your donation now may be insignificant compared to future donations when the risk is more well-known and better understood, and you’ll never really know whether or not you’re making any progress. Many of these risks are accurately represented in EA’s cost/benefit models about X-risks (I’m sure yours involved some version of these, even if just the uncertainty one).
My recent worry is the possibility that, when a given X-risk becomes associated with the EA community, these risks become magnified, which in turn needs to be considered in our analyses. I think that this can happen for three reasons:
First, the EA community could create an echo chamber for incorrect X-risks, which increases bias in support of those X-risks. In this case, rational people who would have otherwise dismissed the risk as conspiratorial now would be more likely to agree with it. We’d like to think that large support of various X-risks in the EA communities is because EAs have more accurate information about these risks, but that’s not necessarily the case. Being in the EA community changes who you see as ‘experts’ on a topic – there isn’t a vocal majority of experts working on AI globally who see the threat as legitimate, which to a normal rational person may make the risk seem a little overblown. However, the vast majority of experts working on AI who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.
Second, if an ‘incorrect’ X-risk is grasped by the community, there’s a lot of resource diversion at stake – EA has the power to move a lot of resources in a positive way, and if certain X-risks are way off base then their popularity in EA has an outsized opportunity cost.
Lastly, many X-risks turn a lot of reasonable people away from EA, even when those risks are correct—if we believe that EA is a great boon to humanity, then the reputational risk has very real implications for the analysis.
Those are my rough initial thoughts, which I’ve elaborated on a bit here. It’s a tricky question though, so I’d love to hear people’s critiques of this line of thinking—is this magnified risk something we should take into account? How would we account for it in models?
I just want to push back against your statement that “economists believe that risk aversion is irrational”. In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.
To explain this, I just want to quickly point out that, from an economic standpoint, there’s no correct formal way of measuring risk aversion among utils. Utility is an ordinal, not cardinal, measure. Risk aversion is something that is applied to real measures, like crop yields, in order to better estimate people’s revealed preferences—in essence, risk aversion is a way of taking utility into account when measuring non-utility values.
So, to put this in context, let’s say you are a subsistence farmer with an expected yield of X from growing sorghum or a tuber, and you know that you’ll always roughly get a yield of X (since sorghum and many tubers are crazily resilient), but now someone offers you an ‘improved maize’ growth package that will get you an expected yield of 2X, but with a 10% chance that your crops will fail completely. A rational person at the poverty line should always choose the sorghum/tuber. This is because that 10% chance of a failed crop is much, much worse than could be revealed by expected yield—you could starve, have to sell productive assets, etc. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility in a cardinal way, we would just do that, and get the correct answer without using risk aversion—but because we can’t measure it cardinally, we have to use risk aversion to account for things like this.
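To make the arithmetic concrete, here’s a toy sketch of this choice. Everything numeric here is my own assumption for illustration, not something from the example above: a CRRA utility function with risk-aversion parameter rho = 3, a certain sorghum yield normalized to 1, and a ‘failed’ maize crop that still yields 0.05 rather than literally zero (so utility stays finite).

```python
# Toy model of the sorghum-vs-maize choice. All parameters are
# illustrative assumptions: CRRA utility with rho = 3, sorghum yield
# normalized to 1.0, and a failed maize crop yielding 0.05.

def crra_utility(consumption, rho=3.0):
    """Constant relative risk aversion utility: u(c) = c^(1-rho) / (1-rho)."""
    return consumption ** (1.0 - rho) / (1.0 - rho)

def expected_utility(outcomes):
    """outcomes: list of (probability, yield) pairs."""
    return sum(p * crra_utility(y) for p, y in outcomes)

sorghum = [(1.0, 1.0)]                 # certain yield X = 1
maize   = [(0.9, 2.0), (0.1, 0.05)]   # higher yield, 10% chance of near-total failure

eu_sorghum = expected_utility(sorghum)
eu_maize = expected_utility(maize)

# Maize has the higher expected *yield* (0.9*2 + 0.1*0.05 = 1.805 > 1),
# but its expected *utility* is far lower, because the concave utility
# function makes the near-starvation outcome catastrophic.
```

This is exactly the pattern described above: ranking options by expected yield alone recommends maize, while a risk-averse (concave) utility function recommends the certain sorghum crop.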
As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we’re able to ‘prove’ that diminishing marginal utility exists, even if we can’t measure it directly.
I think that this discussion really comes from the larger discussion about the degree to which we should consider rational choice theory (RCT) to be a normative, as opposed to a positive, theory (for a good overview of the history of this debate, I would highly suggest this article by Wade Hands, especially the example on page 9). As someone with an economics background, I very heavily skew toward seeing it as a positive theory (which is why I pushed back against your statement about economists’ view of risk aversion). In my original reply I wasn’t very specific about what I was saying, so hopefully this will help clarify where I’m coming from!
I just want to say that I agree that rational choice theory (RCT) is dominated by expected utility (EU) theory. However, I disagree with your portrayal of risk aversion. In particular, I agree that risk aversion over expected utility is irrational—but my reasoning for saying this is very different. From an economic standpoint, risk aversion over utils is, by its very definition, irrational. When you define ‘rational’ to mean ‘that which maximizes expected utility’ (as it is defined in EU and RCT models), then of course being risk averse over utils is irrational—under this framework, risk neutrality over utils is a necessary pre-requisite for the model to work at all. This is why, in cases where risk aversion is important (such as the yield example), expected utility calculations take risk aversion into account when calculating the utils associated with each situation—thus making risk aversion over the utils themselves redundant.
Put in a slightly different way, we need to remember that utils do not exist—they are an artifact of our modeling efforts. Risk neutrality over utils is a necessary assumption of RCT in order to develop models that accurately describe decision-making (since RCT was developed as a positive theory). Because of this, the phrase ‘risk aversion over utility’ has no real-world interpretation.
With that in mind, people don’t fail the Allais paradox because of risk aversion over utils, since there is no such thing as being risk averse over utils. Instead, the Allais paradox is a case showing that older RCT models are insufficient for describing the actions of humans—since the empirical results appear to show, in a way, something akin to risk aversion over utils, which in turn breaks the model. This is an important point—put differently, risk neutrality over utils is a necessary assumption of the model, and empirical results that disprove this assumption do not mean that humans are wrong (even though that may be true), it means that the model fails to capture reality. It was because the model broke (in this case, and in others), that economics developed newer positive theories of choice, such as behavioral economics and bounded rationality models, that better describe decision-making.
At most, you can say that the Allais paradox is a case showing that people’s heuristics associated with risk aversion are systematically biased toward decisions that they would not choose if they thought the problem through a bit more. This is definitely a case showing that people are irrational sometimes, and that maybe they should think through these decisions a little more thoroughly, but it does not have anything to do with risk aversion over utility.
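For readers unfamiliar with the paradox, here’s a quick sketch (using the standard Allais payoffs and probabilities, which are my addition, not from the comment) showing why the commonly observed choice pattern is incompatible with expected utility theory for any increasing utility function:

```python
# The Allais paradox as a brute-force check. Under expected utility
# theory, preferring the safe option in gamble 1 AND the risky option
# in gamble 2 reduces to two contradictory inequalities over the same
# terms, so no utility function can produce both preferences. Payoffs
# are the standard Allais numbers, in millions of dollars.

import random

random.seed(0)

def eu(lottery, u):
    """Expected utility of a lottery given a utility table u."""
    return sum(p * u[x] for p, x in lottery)

g1a = [(1.00, 1)]                            # $1M for sure
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# Try many random increasing utility assignments; count how many can
# reproduce the commonly observed pattern (1A over 1B and 2B over 2A).
violations = 0
for _ in range(10_000):
    u0, u1, u5 = sorted(random.random() for _ in range(3))
    u = {0: u0, 1: u1, 5: u5}
    if eu(g1a, u) > eu(g1b, u) and eu(g2b, u) > eu(g2a, u):
        violations += 1
# violations stays at 0: the observed human pattern breaks the model.
```

Algebraically, preferring 1A means 0.11·u(1M) > 0.10·u(5M) + 0.01·u(0), while preferring 2B means the exact reverse, so the search above can never find a counterexample; this is the sense in which the empirical results break the model rather than merely showing a miscalculation.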
Anyways, to bring this back to the main discussion—from this perspective, risk aversion is a completely fine thing to put into models, and it would not be irrational for Alex to factor in risk aversion. This would especially be fine if Alex is worried about the validity of the model itself (which Alex, being an expert in neither modeling nor AI risk, should consider to be a real concern). As a last point, I do personally think that we should be more averse to the risks associated with supporting work on far-future stuff and X-risks (which I’ve discussed partially here), but that’s a whole other issue entirely.
Hope that helps clarify my position!
My suggestion is that we worry less about solving moral philosophy and worry more about solving the actual core issues at stake.
Moreover, many of our commonly held moral theories—having been developed in long-past social and historical contexts—don’t actually provide clear guidance on how we should resolve some of these new futuristic debates.
Yes—thank you for posting this! I think it’s really worth exploring the question of whether moral convergence is even necessarily a good thing. Even beyond moral convergence, I think we need to question whether its antecedent, ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles), is even a good thing either.
I don’t have a philosophy background, so please let me know if this take is way off course, but like kbog mentions many of the commonly cited moral schema don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up. I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality, and may harm our effectiveness in the long run.
In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here. While it’s easy to applaud folk who rigorously keep to a strict moral code, I really wonder whether it’s the best way forward. For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you to understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?
Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them ‘work’ make them poor frameworks to rely on day-to-day. Expecting human beings to apply these frameworks ‘correctly’ is probably giving the forecasting and estimation abilities of humans a little too much credit. For a reductive example, ‘do the most good possible’ technically is a ‘correct’ moral framework, but it really doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’).
Imagine a 10 year old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I would have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that will objectively be against their framework, simply because the information they are acting on is incomplete. 10 year olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint much more often than 10 year olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and forecasted effects in order to work.
My worry is that all humans (not just 10 year olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first order effects of their actions (which is honestly a stretch already). A first order effect would be something like “each marginal hour I work creates more utility for the people I donate to than is lost among me and my family”. Under a utilitarian framework, you would go with whatever you estimate to be correct, which in turn (due to your inability to forecast) would be based on only a first order approximation. Other frameworks that aren’t as based on forecasting (e.g. some version of deontology) can see this first order approximation and still suggest another action (which may, in turn, create more ‘good’ in the long run). Going back to the overtime example, if you look past first order effects in a utilitarian framework you can still build a reason against the whole ‘work overtime’ thing. A second order effect would be something like “but, if I do this too long, I’ll burn out, thus decreasing my long-term ability to donate”, and a third order effect would be something like “if I portray sacrificing my wellbeing as a virtue by continuing to do this throughout my life, it could change the views of those who see me as a role model in not-necessarily positive ways”, and so on. Luckily, as a movement, people have finally started to normalize an acceptance of some of the problematic second order effects of the ‘work overtime’ thing, but it took a worryingly long time—and it certainly won’t be the only time that our first order estimations will be overturned by more diligent thinking!
So, yes, if you work really hard to figure out second, third, etc. order effects, then versions of utilitarianism can be great – but relying too heavily on it for day-to-day decisions (at the expense of sub-frameworks that rely less on forecasting ability) may not work out as well as we’d hope, since figuring out those effects is terribly complicated – in many decisions, relying on a sub-framework that relies less on forecasting ability (e.g. some version of deontology) may be the best way forward. Many EAs realize some version of this, but I think it’s something that we should be more explicit about.
To draw it back to the “is the moral parliament basically the same as Expected Moral Value” question, I would say that it’s not. They are similar, but a key difference is the forecasting ability required for each: moral parliament can easily be used as a mental heuristic in cases where forecasting is impossible or misleading, by focusing on which framework applies best for a given situation, whereas EMV requires quite a bit of forecasting ability and calculation, and most importantly is incredibly biased against moral frameworks that are unable to quantify the expected good to come out of decisions (yes, the discussion of how to deal with ordinal systems does something to mitigate this, but even then there is a need to forecast effects implicit in the decision process). Hopefully that helps clarify my position—I should’ve probably been a bit more formal in my reasoning in my original post, but better late than never, I guess!
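To illustrate the quantification burden I mean, here’s a minimal Expected Moral Value sketch. The theory names, credences, and choice-worthiness scores are all hypothetical numbers I’ve made up; the point is simply that EMV can’t run at all until every framework has produced a cardinal score for every option, which is exactly the forecasting demand described above.

```python
# Minimal Expected Moral Value sketch. All names and numbers are
# hypothetical: credences over moral theories, and the cardinal
# "choice-worthiness" score each theory assigns to each option.

credences = {"utilitarian": 0.6, "deontological": 0.4}

scores = {
    "work_overtime": {"utilitarian": 10.0, "deontological": -5.0},
    "go_home":       {"utilitarian":  4.0, "deontological":  6.0},
}

def emv(option):
    """Credence-weighted sum of each theory's score for the option."""
    return sum(credences[t] * scores[option][t] for t in credences)

best = max(scores, key=emv)
# emv("work_overtime") = 0.6*10 + 0.4*(-5) = 4.0
# emv("go_home")       = 0.6*4  + 0.4*6    = 4.8
```

Note that the whole calculation collapses if the deontological theory can only rank options ordinally rather than score them, which is the bias against non-quantifying frameworks mentioned above.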
It probably wouldn’t hurt if AI-inclined EAs focused more on getting experts on board. It’s a very bad situation to be in if the vast majority of experts on a given topic think that an issue you are interested in is overblown, because 1) tractability goes down the tubes, since most experts actively contradict you, 2) your ability to collaborate with other experts is greatly hampered, since most experts won’t work with you, and 3) it becomes really easy for people to assume that you’re a crackpot. I’m also not sure if it’s even ‘rational’ for non-experts to get involved in this until a majority of experts in the field is on board. I mean, if person A has no experience with a topic, and the majority of experts say one thing, but person A gets convinced that the opposite is true by an expert in the minority, am I wrong in thinking that that’s not a great precedent to set?
In the US, and elsewhere, parties use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See how party whips operate for what this looks like in practice. Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.
Good question—not really sure, I just meant to directly answer that one question. That being said, social movements have, with varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I’m sure that it’s something that we could (and should) learn from leaders of other movements. Off the top of my head, the example that is most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.
I would strongly advise against using the Freeman article, which is very out of date and doesn’t represent the almost 50 years of progress in feminist thought that have come after it. In particular, intersectional feminism, which has now become one of the leading types of feminism, directly challenges the thoughts that Freeman put down, noting that the structures in feminism actually consistently were used to silence voices within the movement that did not fit within the mainstream. Crenshaw wrote one of the seminal articles on this, but there are many other modern authors who also share this view.
Silencing of voices within a movement is a really important issue: many women in the civil rights movement were silenced in the name of ‘black unity’ (Audre Lorde is a good source on this); even today bisexual people have trouble finding a place in the LGBTQ movement (example); genderqueer women, transsexual women, non-hetero women, and women of color are consistently sidelined in women’s rights movements (Lorde again, she’s amazing); and national identities can be used to silence dissenters of any type. At best, these episodes consisted of silencing the concerns of multiple members of the movement; at worst, the ensuing dehumanization (“you are a danger to the cause!”) led to violence.
I’ve heard people who don’t fit the ‘traditional EA mold’ say that they feel this is happening in EA as well (quick example here). Even if it’s not a direct sanction, feeling that you are “not EA enough” can still create a problem.
Long story short, structure is bad, long live structurelessness.
A nice balance is probably best overall, good point. Although, I do think it may be worth looking into replicating the intellectual diversity that feminism developed over time (while avoiding the pitfalls, inshallah) - it might be something that could benefit the movement going forward.
I am sorry to hear that your encounters with feminism have primarily been divisive. My experience has been a bit different, and it may help for me to go into some quick details (OK, actually this post became quite long, which I apologize for—it’s probably approaching blog length) and draw parallels with EA.
It took me a year to actually start engaging with EA. I love cost effectiveness, marginal thinking, and rigorously thinking about how to do the most good. My friends and colleagues do as well, but they do not engage with EA. To me, EA appeared, from the outside, to be a group that lays claim to something that is not unique to them, and then looks down on others—a very insular community with members that actively trash and condescend to people who ‘are not EA enough’. Other critics have expressed this view as well, and my initial forays into EA did not help this perception—some of my views are not standard EA views, and I had multiple people without economics backgrounds jump on me to insist that I was wrong while condescendingly explaining basic economics to me. This would be fine, if they were actually correct to do so—but most of the time the loudest critiques were the most rudimentary and off the mark (for reference, I got my master’s in economics and work directly in integrating economic thinking into aid programs, so I have a decent idea of what bad economic thinking looks like). Needless to say, these experiences and others left a sour taste in my mouth, and so I stopped engaging for a while.
This is similar to some people’s experiences with feminism—when initially trying to break in, it can seem like a very insular community driven entirely by yelling at people who are not ‘feminist enough’. I liked feminist ideals in undergrad, similar to how I enjoyed EA ideals, but avoided it because my perception was that I would not get anything from engaging in feminism because I would be expunged for ‘not being feminist enough’ (similar to why I avoided EA). I also didn’t see a clear reason for engaging, since many of my friends already had feminist ideals without being a direct part of the feminist movement (similar to my friends and colleagues who hold EA ideas without engaging with EA).
The moment that really changed everything was in the first year of my masters, where I was hitting an economic problem that the tools I was using just could not solve—I went to my adviser, complaining that no one seemed to have thought about this problem before, to which he retorted “you know that the feminist economists have been working on this for decades, right? Talk to Professor XYZ and they’ll help you”. And I did, and next thing I knew I was getting a specialty in gender analysis of economics—because as I started to get more involved, I realized that behind that initial barrier was a rich world of diverse thinking on a variety of topics. I truly believe now that the most advanced and innovative thinking in economics today comes from feminist economists.
And it wasn’t just academic feminists—once I got past that initial barrier, I started looking more into the very groups I originally avoided, and I soon realized that a lot of feminist activists were actively fighting to break down the barrier that I encountered, by advocating for ‘calling in’ rather than ‘calling out’ (among other things). Once you’re inside, it is a very supportive and tolerant community, and it has helped me (and many others) grow as a person and as a thinker more than anything else in my life has.
Going back to EA, as I mentioned before there is a very similar barrier, in which to an outside person a lot of the people ‘representing EA’ online can be quite nasty to outsiders and divergent views. Once I got past this initial barrier, I realized that the majority of people identifying with EA are actually quite nice, and that there are many in the EA movement who are actively trying to make people’s first experience of EA more amicable and to make the movement as a whole more tolerant and respectful of divergent views. It’s essentially the EA movement’s equivalent of the ‘calling-in’ problem, and the fact that these discussions are happening makes me very hopeful for the future.
None of this really helps answer the ‘what about a formal mechanism’ question directly; I just want to try and express my belief that better engagement with social movements like feminism (all of which have dealt with problems similar to the EA movement’s!) is important. Offhandedly saying that ‘feminism failed on this point, so we can’t learn from them’ without really engaging with members of the feminist movement is not a strong way forward.
In terms of examples off the top of my head of how feminist actors have tried to mitigate the ‘bad actor’ problem, my first thought is the issue of problematic ‘allies’. The response has been to write guidance (less formal version here) on how to be a good ally, and to generally set forth ‘community norms’ that show up in various places (blogs, posters, listservs, whatever). When someone does not adhere to these norms, in the best of cases you can help them understand why going against the norm is bad and help them be a better ally, and in the worst of cases the movement as a whole at least has some plausible deniability (“don’t tell us that person is representative of us; they’re clearly breaking all of the norms that we’ve detailed in all of these places!”).
I agree that systematic change should be given more thought in EA, but there’s a very specific problem that I think we need to tackle before we can do this seriously: a lot of the tools and mindsets in EA are inadequate for dealing with systematic change.
To explain what I mean, I want to quickly make reference to a chart that Caroline Fiennes uses in her book. Essentially, you can think of work on social issues as a sort of ‘pyramid’. At the top of the pyramid you have very direct work (deworming, bed nets, cash transfers, etc.). This work is comparably very certain to work, and you can fairly easily attribute changes in outcomes to these programs. However, the returns are small—you only help those who you directly work with. As you go down the pyramid, you start to consider programs that focus on communities… then those that focus on changing larger policy and practice… then changing attitudes and norms (or some types of systematic change)… and eventually you get to things like existential risks. As you go down the pyramid, you get greater returns to scope (you can impact a lot more people), but it becomes a lot more uncertain that you will have an impact, and it also becomes very hard to attribute change in any outcome to a program.
My worry is that the tools that the EA movement relies on were created with the top of the pyramid in mind—the main forms of causal research, cost effectiveness analysis, and so on that we rely on were not built with the bottom or even middle of the pyramid. Yes, members of EA have gotten very good at trying to apply these tools to the bottom and middle, but it can get a bit screwy very quickly (as someone with an econ background, I shudder whenever someone uses econ tools to try and forecast the cost effectiveness of X-risk reduction activities—it’s like trying to peel a potato while blindfolded using a pencil: it’s not what the pencil was made for, and even though it is technically possible I’ll be damned if the blindfolded person actually has a clue if it’s working or not).
We should definitely keep our commitment to these tools, but if we want to be rigorous about exploring systematic risks, we should probably start by figuring out how to expand our toolbox in order to address these issues as rigorously as possible (and, importantly, to figure out when exactly our current tools are insufficient! We already have these for a lot of our tools—basically assumptions that, when broken, break the tool—but I haven’t seen people rigorously consulting them!). I’m sure that a lot of us have in mind some very clear ideas of how we can/should rigorously prioritize and evaluate various systematic risks—but I’m pretty sure we have as many opinions as we have people. We need to get on the same page first, which is why I’d suggest that we work on figuring out some basic standards and tools for moving forward, then going from there. Expanding our toolkit is key, though—perhaps someone should look into other disciplines that could help out? I’d do it, but I’m lazy and tired and probably would make a hash of it anyway.
Ah, I see the issue now—you are assuming that I’m saying that feminism has a model that we should directly emulate, whereas I am just saying that they are dealing with similar issues, and we have things to learn from them. In short, there are leaders in feminism who have been working on this issue, with some limited success and, yes, a lot of failures. However, even if they were completely 100% failing, there would still be a very important thing that we can learn from them: what have they tried that didn’t work? It is just as important to figure out pitfalls and failed projects as it is to find successful case studies.
The key is getting that conversation started, and comparing notes. Your perception of feminism and the problems therein may change in the process, but most importantly we all may learn some important lessons that can be applied in EA (even if they do consist primarily of “hey this one solution really doesn’t work, if you do anything, do something else”).
If you are truly 100% not convinced that we can learn this from feminism, then that’s OK: you can talk to leaders of any other social movement instead, since many of them have dealt with and thought about similar problems. Your local union reps may be a good place to start!
Yeah, I can see how that could be an issue, and honestly I do lean towards the “the external optics problem is the patriarchy’s fault, not ours—telling us that we are ‘not nice enough’ is just a form of silencing, and you wouldn’t listen to us anyway if we were ‘nicer’” viewpoint, but I can see how that can make this discussion difficult. I’m just mostly hoping that the discussions on ‘calling-in’ within feminism move forward—even a quick Google search shows that it’s popping up on a lot of the feminist sites targeted at younger audiences—it may be an oncoming change, and hopefully it’ll pick up steam.
Congratulations on your engagement by the way!
Ah, but have you done an RCT with a cost effectiveness analysis on this? I am dubious that hacked ATM cards are a more cost-efficient intervention than, say, funding free bed net distribution!
I think that Karnofsky’s post, as well as the above discussions, miss another important set of considerations for effectiveness (which only really apply to people giving over $100,000, but still):
1) Lowering fundraising and transaction costs for the charity: When a large donor agrees to stay with an organization for a long time, that organization can focus more on its programs and less on fundraising—when donors constantly shift from organization to organization, those organizations are forced to spend valuable resources replacing them. In addition to fundraising costs, it’s also important to remember various transaction costs, such as reporting requirements (which you can work to streamline over a long-term engagement with a trusted organization).
2) Uncertainty undermines capacity building: Keep in mind that helping build an organization’s capacity can have long-term impacts beyond just ‘# of bednets distributed’. When it comes to long-term donor relationships, it’s not just about revenue planning (which was mentioned above), but also about strategic planning. Let’s say that your current largest donor is really interested in, say, monitoring and evaluation. If you know that that donor will stay around for a while, you can invest in your M&E capacity knowing that it will continue to be funded; but if that donor is going to switch to another organization in a year, how can you be sure that the next donor will fund you in a similar way? You might need to prepare to cut your M&E the instant you find a new donor.
3) Giving to learn isn’t just about building a network and learning about a cause; it’s also important to have a deep relationship with an implementer. Shallow relationships only go so far—building a strong, ongoing relationship with an organization you trust can help you prioritize growth areas within that organization, and set both you and the organization up for successful experimentation and problem solving.
Caroline Fiennes of Giving Evidence (https://giving-evidence.com/) has a few other good reasons for focusing on building a long-term relationship with a small number of charities, but unfortunately I cannot find a consolidated blog post or article on this specific topic. Still, if you dig into her work, she has a lot of interesting material on how the way in which people give (e.g. long vs. short term) has implications for the long-term effectiveness of charities. I would definitely recommend looking into her work.