Yeah, my quick guess is that (as for many complex skills) g is very helpful, but that it’s very possible to be high g without being very good at the thing I’m pointing at (partially because feedback loops are poor, so people haven’t necessarily had a good training signal for improving).
Owen_Cotton-Barratt
I guess I significantly agree with all of the above, and I do think it would have been reasonable for me to mention these considerations. But since I think the considerations tend to blunt rather than solve the issues, and since I think the audience for my post will mostly be well aware of these considerations, it still feels fine to me to have omitted mention of them? (I mean, I’m glad that they’ve come up in the comments.)
I guess I’m unsure whether there’s an interesting disagreement here.
Yeah, I totally agree that if you’re much more sophisticated than your (potential) donors you want to do this kind of analysis. I don’t think that applies in the case of what I was gesturing at with “~community projects”, which is where I was making the case for implicit impact markets.
Assuming that the buyers in the market are sophisticated:
in the straws case, they might say “we’ll pay $6 for this output” and the straw org might think “$6 is nowhere close to covering our operating costs of $82,000” and close down
I think too much work is being done by your assumption that the cost-effectiveness can’t be increased. In an ideal world, the market could create competition which drives both orgs to look for efficiency improvements.
This kind of externality should be accounted for by the market (although it might be that the modelling effectively happens in a distributed way rather than anyone explicitly thinking it all through).
So you might get VCs who become expert in judging when early-stage projects are a good bet. Then people thinking of starting projects can somewhat outsource the question to the VCs by asking “could we get funding for this?”
Moral trade is definitely relevant here. Moral trade basically deals with cases with fundamental-differences-in-values (as opposed to coordination issues from differences in available information etc.).
I haven’t thought about this super carefully, but it seems like a nice property of impact markets that they can simultaneously handle the moral trade issues and the coordination issues. Like in the example of donors wishing to play donor-of-last-resort, it’s ambiguous whether this desire is driven by irreconcilably different values or different empirical judgements about what’s good.
I agree that these considerations would blunt the coordination issues some.
So I think that a proposal for “Implicit impact markets without infrastructure” should probably include as one element a reminder for people to take these considerations into account.
I guess I think that it should include that kind of reminder if it’s particularly important to account for these things under an implicit impact markets set-up. But I don’t think that; I think they’re important to pay attention to all of the time, and I’m not in the business (in writing this post) of providing reminders about everything that’s important.
In fact I think it’s probably slightly less important to take them into account if you have (implicit or explicit) impact markets, since the markets would relieve some of the edge that it’s otherwise so helpful to blunt via these considerations.
Yeah, Shapley values are a particular instantiation of a way that you might think the implicit credit split would shake out. There are some theoretical arguments in favour of Shapley values, but I don’t think the case is clear-cut. However in practice they’re not going to be something we can calculate on-the-nose, so they’re probably more helpful as a concept to gesture with.
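To make the gesture a bit more concrete, here is a minimal sketch of the standard Shapley-value calculation, with entirely made-up numbers: two hypothetical contributors (a funder and an org) who each achieve nothing alone but produce 100 units of impact together. The player names and the value function are illustrative assumptions, not anything from the discussion above.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    shares = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p in this join order
            shares[p] += value(frozenset(coalition)) - before
    return {p: s / len(perms) for p, s in shares.items()}

# Hypothetical value function: neither party achieves anything alone,
# but the full coalition produces 100 units of impact.
def v(coalition):
    return 100 if coalition == frozenset({"funder", "org"}) else 0

print(shapley_values(["funder", "org"], v))  # each gets 50.0
```

In this toy case the split is 50/50 by symmetry; with asymmetric value functions the same procedure produces unequal shares, which is the sense in which Shapley values are one candidate for how an implicit credit split "should" shake out.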
Of course “non-EA funding” will vary a lot in its counterfactual value. But roughly speaking I think that if you are pulling in money from places where it wouldn’t have been so good, then on the implicit impact markets story you should get a fraction of the credit for that fundraising. Whether or not that’s worth pursuing will vary case-to-case.
Basically I agree with Michael that it’s worth considering but not always worth doing. Another way of looking at what’s happening is that starting a project which might appeal to other donors creates a non-transferrable fundraising opportunity. Such opportunities should be evaluated, and sometimes pursued.
I agree that in principle you could model all of this out explicitly, but it’s the type of situation where I think explicit modelling can easily get you into a mess (because there are enough complicated effects that you can easily miss something which changes the answer), and also puts the cognitive work in the wrong part of the system (the job of funders is to work out what would be the best use of their resources; the job of the charities is to provide them with all relevant information to help them make the best decision).
I think impact markets (implicit or otherwise) actually handle this reasonably well. When you’re starting a charity, you’re considering investing resources in pursuit of a large payoff (which may not materialise). Because you’re accepting money to do that, you have to give up a fraction of the prospective payoff to the funders. This could change the calculus of when it’s worth launching something.
I like the jumping in! I think using vignettes as a starting point for discussion of norms has some promise.
In these cases, I imagine it being potentially fruitful to have more-discussion-per-vignette about both whether the idea captured is a good one (I think it’s at least unclear in some of your examples), as well as how good it would be if the norm were universalised … we don’t want to spend too much attention on promoting norms that, while positive, just aren’t a very big deal.
Default expectations of credit
Maybe we should try to set default expectations of how much credit for a project goes to different contributors? With the idea that not commenting is a tacit endorsement that the true credit split is probably in that ballpark (or at least that others can reasonably read it that way).
One simple suggestion might be to split credit into four equal parts: to founding/establishing the org and setting it in a good direction (including early funders and staff); to current org leadership; to current org staff; to current funders. I do expect substantial deviation from that in particular cases, but it’s not obvious to me that any of the buckets is systematically too big or too small, so maybe it’s reasonable as a starting point?
Inefficiencies from inconsistent estimates of value
Broadening from just considering donations, there’s a worry that the community as a whole might be able to coordinate to get better outcomes than we’re currently managing. For instance, opinions about the value of earning to give vary quite a bit; here’s a sketch to show how that can go wrong:
Alice and Beth could each go into direct work or into earning-to-give. We represent their options by plotting a point showing how much they would achieve on the relevant dimension for each option. The red and green points show some possibilities for what Alice and Beth might together achieve by each picking one of their options. There are two more points in that choice set, one on each axis, where both people go into direct work or both go into earning to give. It’s unclear in this example what the optimal outcome is, but it is clear that the default point is not optimal, since it’s dominated by the one marked “accessible”.

This doesn’t quite fit in the hierarchy of approaches to donor coordination, but it is one of the issues that fully explicit impact markets should be able to help resolve. How much would implicit impact markets help? Maybe if they were totally implicit and “strengths of ask” were always made qualitatively rather than quantitatively it wouldn’t help so much (since everyone would understand “strength” relative to what they think of as normal for the importance of money or direct work, and Alice and Beth have different estimates of that ‘normal’). But if a fraction of projects moved to providing quantitative estimates (while still not including any formal explicit market mechanisms), that might be enough to relieve the inefficiencies.
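The dominance argument in the sketch can be checked mechanically. Here is a small illustration with entirely hypothetical numbers (the outputs assigned to Alice and Beth are invented for the example, not taken from anything above): the "default" assignment of careers is Pareto-dominated by the swapped assignment.

```python
from itertools import product

# Hypothetical outputs for each choice: (direct-work value, donations).
# These numbers are made up purely to illustrate the dominance point.
alice = {"direct": (3, 0), "etg": (0, 10)}
beth = {"direct": (8, 0), "etg": (0, 4)}

def outcome(a_choice, b_choice):
    ad, am = alice[a_choice]
    bd, bm = beth[b_choice]
    return (ad + bd, am + bm)

def dominates(x, y):
    """x Pareto-dominates y: at least as good on both axes, not identical."""
    return x[0] >= y[0] and x[1] >= y[1] and x != y

# Suppose the default (driven by their differing estimates of 'normal')
# is Alice in direct work and Beth earning to give.
default = outcome("direct", "etg")
for a, b in product(alice, beth):
    o = outcome(a, b)
    if dominates(o, default):
        print(f"Alice {a}, Beth {b} -> {o} dominates default {default}")
```

With these numbers the swapped assignment (Alice earning to give, Beth in direct work) yields more on both axes than the default, which is the kind of accessible improvement that better-shared quantitative estimates could surface.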
Forget replaceability? (for ~community projects)
Definitely didn’t mean to shut down conversation! I had a strong feeling that it was not an option on the table (because of something like coherence reasons—cf. my reply to Jonas—not because it seemed like a bad or too-difficult idea). But I hadn’t unpacked my feeling. I also wasn’t sure whether I needed to, or whether when I posted everyone would say something like “oh, yeah, sure” and it would turn out to be a boring point. This was why I led with “I don’t know how much of an outlier I am”; I was trying to invite people to let me know if this was a boring triviality after it was pointed out, or if it was worth trying to unpack.
P.S. I appreciate having what seemed bad about the phrasing pointed out.
Hmm, no, I didn’t mean something that feels like pessimism about coordination ability, but that (roughly speaking) the thing you get if you try to execute a “change the name of the movement” operation is not the same movement with a different name, but a different (albeit heavily overlapping) movement with the new name. And so it’s better understood as a coordinated heavy switch to emphasising the new brand than it is just a renaming (although I think the truth is actually somewhere in the middle).
I don’t think that’s true if the name change is minor so that the connotations are pretty similar. I think that switching from “effective altruism” to “efficient do-gooding” is a switch which could more or less happen (you’d have a steady trickle of people coming in from having read old books or talked to people who were familiar with the old name, but “effective altruism, now usually called efficient do-gooding” would mostly work). But the identity of the movement is (at least somewhat) characterised by its name and how people understand it and relate to it. If you shifted to a name like “global priorities” with quite different connotations, I think that it would change people’s relationship with the ideas, and you would probably find a significant group of people who said “well I identify with the old brand, but not with the new brand”, and then what do you say to them? “Sorry, that brand is deprecated” doesn’t feel like a good answer.
(I sort of imagine you agree with all of this, and by “change the name of the movement” you mean something obviously doable like getting a lot of web content and orgs and events and local groups to switch over to a new name. My claim is that that’s probably better conceived of in terms of its constituent actions than in terms of changing the name of the movement.)
I don’t know how much of an outlier I am, but I feel like “change the name of the movement” is mostly not an option on the table. Rather there’s a question about how much (or when) to emphasise different labels, with the understanding that the different labels will necessarily refer to somewhat different things. (This is a different situation than an organisation considering a rebrand; in the movement case people who preferred the connotations of the older label are liable to just keep using it.)
Anyhow, I like your defence of “effective altruism”, and I don’t think it should be abandoned (while still thinking that there are some contexts where it gets used but something else might be better).
I agree that this is potentially an issue. I think it’s (partially) mitigated the more it’s used to refer to ideas rather than people, and the more it’s seen to be a big (and high prestige) thing.
Maybe the obvious suggestion then is “new enlightenment”? I googled, and the term has some use already (e.g. in a talk by Pinker), but it feels pretty compatible with what you’re gesturing at. I guess it would suggest a slightly broader conception (more likely to include people or groups not connected to the communities you named), but maybe that’s good?
Thanks, makes sense. This makes me want to pull out the common characteristics of these different groups and use those as definitional (and perhaps realise we should include other groups we’re not even paying attention to!), rather than treat it as a purely sociological clustering. Does that seem good?
Like maybe there’s a theme about trying to take the world and our position in it seriously?
I didn’t downvote (because as you say it’s providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I’m reminded of “missing moods”; it seems like there’s a legitimate position of “it would be great to have time to hash this out but unfortunately we find it super time consuming so we’re not going to”, but it would naturally come with a mood of sadness that there wasn’t time to get into things, whereas the mood here feels more like “why do we have to put up with you morons posting inaccurate critiques?”. And perhaps that’s a reasonable position, but it at least leaves a kind of bad taste.