it is more important to build a robust movement of people collaboratively and honestly maximizing impact than it is to move additional money to even very good charities in the short term.
I generally agree with your sentiment here. However, I think the analysis changes when it comes to introducing people to EA.
Giving Multiplier doesn’t target existing EAs with this offer. They’re targeting people outside the movement who very likely are not yet familiar with EA ideas. Because donation matching is an industry norm, reaching that audience sometimes requires doing something like this kind of match. Then, once someone has given to an EA cause, they are far easier to convince to give to EA charities later on, and (presumably) they’ll be good candidates for becoming EAs themselves.
While I agree that this type of influence matching is ultimately misleading in the ways you describe, I don’t think it’s fair to call it illusory matching. It is illusory in the colloquial sense, but “illusory matching” as a term originated in GiveWell’s 2016 post, where Holden specifically claimed that this kind of influence matching is a type of non-illusory matching. He even suggests the very approach that Giving Multiplier is taking:
you should fight back by structuring your own influence matching – making a conditional commitment to the highest-impact charity you can find, in order to pull other dollars in toward it.
With all that said, I do think it makes sense for people who read this forum not to give through Giving Multiplier, nor to be much influenced by donation matching campaigns generally. But as a technique for reaching people new to EA, I don’t think that legitimate non-illusory matching in the form of coordination matching or influence matching is necessarily bad.
It is misleading in the sense that a careful read will make you realize they are doing something different from what their homepage naively suggests — but when the audience is unsophisticated and already accustomed to the norm of illusory matching, I’m not sure it’s fair to call someone like Giving Multiplier especially misleading just because their homepage doesn’t go into all the details of how this type of matching works. If someone asks me where the nearest gas station is, I shouldn’t be labeled as misleading just because I don’t give detailed directions for the best route there. I think it’s okay to just point and say “five minutes that way.” I feel the same about how Giving Multiplier has structured their homepage.
Holden specifically put forward the claim that this kind of influence matching is a type of non-illusory matching. He even suggests the very concept that Giving Multiplier is doing
Holden wrote “the matcher makes a legitimate commitment to give only if others do, in an attempt to influence their giving”. That’s not what’s happening here: the matchers are donating regardless of whether others do. Additionally, I’m quite pessimistic about people being able to make legitimate commitments in this regard, since predicting what you would otherwise do with the funds is typically very difficult.
(I also think glossing Holden’s “perhaps … you should fight back” as “you should fight back” gives the wrong impression.)