Yeah, I basically agree with everything you said! More specifically:
- We tested with small nonprofits using the same hacked-together software we used for DBT (literally a static page with an iframe to our webapp).
- We were in fact making financial transaction errors (e.g. double-charging donors) because our database was not designed for scale, so we had to rebuild our backend. We had no in-house engineers, so we paused development until we hired engineers, who have since rearchitected the backend.
- We’re now trying to launch with mid-sized nonprofits using the improved technology we built for the small nonprofits.
- Our sales calls have revealed pretty similar feature requests from every mid-sized nonprofit (e.g. “we need a CRM integration”). Many of these features are on our roadmap, but I agree with you that it would be even better to build them for a specific charity that is willing to sign, rather than based on “generic” tech.
- We’re already focused on getting 3-5 “case studies” from mid-sized nonprofits, but I’ve been reflecting recently on going to greater lengths to build custom features for those organizations, and after reflecting on your comments I’m updating further in that direction.
- We did a lot of this custom work for DBT: we stopped developing generic features and almost worked like a dev shop to meet their needs (although we’ve slowed development of custom features for that charity, as we are no longer learning as much from them).
Yeah, great question: I agree that this is more important than many of the questions I lay out in the FAQ (but unfortunately it’s not one we’re frequently asked). I agree the mobile app did not have product-market fit (which is why we shut it down). The vast majority of the 40k donors are from DBT (it’s the largest campaign we’ve launched by far), so the interesting question is whether we can replicate that to demonstrate PMF.
We spent most of this last year rebuilding our technology and running campaigns with small nonprofits to see which features are needed for PMF. We can now easily acquire small nonprofit customers, but we’ve learned all we can from them. While these are easy minimum viable sales tests, these customers don’t have substantial donation volume: saying that we got a 40% increase in donation volume isn’t meaningful if we moved an org from $200/mo to $280/mo. Now that we’ve hired several engineers and upgraded our tech, we’re moving on to testing with larger nonprofits that have significant donation volume (our target market is orgs moving over $1M/year). Of course, even with the right product you still need to find the right growth channel, but that should also be revealed through the tests we’re running on the new market segment. We believe these tests will answer the three most important questions right now: whether we truly have PMF, which growth channels we should prioritize, and precisely how large the increase in donation volume is for a nonprofit using our software.
Makes sense! I originally just spoke about the aspirational goals, but then early reviewers asked for more explicit estimates of middle-of-the-road outcomes as well, so I added a range of possibilities. Perhaps that made it more confusing than if I’d just kept one set of estimates.
Thanks for the thoughtful questions; I appreciate the close reading it must have taken to pick up on that level of nuance. To address each of them:
1) $500M/year: “You change to [$500M/yr] in the main body”
Interesting; I’m wondering why you felt that was the main claim of the body. While at one point I do give a 5% Fermi estimate of $500M/yr, it’s immediately followed by a 1% estimate of $5B/yr, and I never refer to $500M at any other point. I also give estimates of $11B/yr twice in the body (slightly less than $1B/mo) and estimates of $4-5B/yr twice as well. These latter numbers roughly correspond to the original goals mentioned: to move $1B/mo and to grow funding by over an order of magnitude (>$4B/yr).
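To spell out the arithmetic behind that correspondence (note that the ~$0.4B/yr baseline below is implied by the “order of magnitude” framing rather than a figure I state anywhere):

$$\$1\text{B/mo} \times 12 = \$12\text{B/yr}, \qquad 10 \times \$0.4\text{B/yr} = \$4\text{B/yr}$$

So the $11B/yr estimates sit just under the $1B/mo goal, and >$4B/yr matches an order-of-magnitude increase.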
More importantly, I want to distinguish our goals from the range of possible outcomes (or even the most likely outcome). It seems valuable to set extremely ambitious goals while still being honest about the chance of success. Being honest about those probabilities shouldn’t be seen as undermining that ambitious goal.
2) 87% & $10M:
Great question: it is not 87% of the $10M, it’s 87% of the money moved on the mobile app. The $10M is total money moved, including donation pages. As discussed in the post, we haven’t rolled out the donor portal to most of the new donation pages yet, so it is very hard to get a good estimate. Since we haven’t started nudging the new donors towards recommended charities yet, the only money moved to recommended charities so far is from mobile donors.

If I recall correctly, the mobile app moved around $100K before we switched to focusing on donation pages. You might expect that to mean $87K to recommended charities (87% × $100K), but in fact it’s closer to $230K to date. Recurring donors are retained at a very high rate, and many mobile donors are still donating to our recommended charities (they can manage donations through the donor portal).

Naturally, $230K still only represents a fraction of the $10M (~2%), since the new donation pages reach far more donors and we haven’t yet begun recommending charities to these donors. We are very eager to roll out the donor portal to the donors using the donation pages and see what that percentage becomes. As discussed in the post, we expect it to be lower than 87%, but we don’t know how much lower.
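To make that dynamic concrete, here’s a minimal sketch of how a retained recurring-donor base pushes the cumulative total past the naive snapshot. Every parameter in it (monthly volume, retention rate, time horizon) is invented purely for illustration; these are not our actual retention numbers.

```python
# Hypothetical illustration only: how retained recurring donors can push
# cumulative giving past a one-time 87% x $100K = $87K snapshot.
# None of these parameters are real Momentum data.
MONTHLY_VOLUME = 10_000  # assumed $/mo in recurring gifts to recommended charities
RETENTION = 0.97         # assumed month-over-month donor retention rate
MONTHS = 36              # assumed months of accumulation

total = 0.0
volume = MONTHLY_VOLUME
for _ in range(MONTHS):
    total += volume      # donations received this month
    volume *= RETENTION  # a small share of recurring donors lapses

print(f"Cumulative donations: ${total:,.0f}")  # ~$222K with these made-up inputs
```

The point is just that a well-retained recurring base keeps accumulating, which is why a cumulative figure like $230K can comfortably exceed the one-time 87% × $100K calculation.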
3) Counterfactual:
I’ve been unable to think of a great way to estimate this. We do have evidence that donations tend not to be made from a fixed budget (e.g. people are very open to giving more or less depending on how much we ask). As a result, we might expect a large portion of donations to be counterfactual. More importantly, even if someone supporting our recommended charities would have donated that money regardless, it presumably would have gone to a different charity. These donors are new to effective giving, and most would otherwise support far less effective charities. That said, we don’t have a good way to come up with a numerical estimate for any of this, so if you have any ideas I would love to hear them, as this question is extremely important to us.
Hopefully this helps to clarify things!
Ah sorry, it keeps adding my punctuation to the hyperlink! Fixed
We’ve explored a number of different exit strategies, although we don’t really need to flesh out all of the details until later rounds. Some ideas we’ve thrown around:
- There’s a lot of for-profit interest in fintech from groups like Intuit, Blackbaud, etc. We’d obviously need to be careful from a mission alignment point of view, but the ultimate goal (as with any social enterprise) is to bake our theory of change into the way we make money so that it is genuinely valuable from a for-profit point of view to push our mission forward. This is harder with effective altruism than simply with broader social impact, as it adds additional constraints, but the process is similar (and we believe it’s achievable). This is something I’ve spent a lot of time thinking about from a product point of view, and it’s actually one of the main reasons we think it’s so important to hire an EA product manager (more than for many other roles).
- There are more creative setups we’ve seen work, like separating profits from decision making, so investors earn a higher percentage of profits while holding a lower percentage of voting rights. There’s also a concept called Steward Ownership that we’re pretty excited about: https://medium.com/@purpose_network/whats-steward-ownership-14efc6caf9e7. You could also imagine exiting to a foundation, although this would carry other risks (e.g. reconsolidating decision making under the very mega-donors you were trying to diversify away from).

I actually don’t think we have this fully answered yet, and I expect to spend significantly more time down the line exploring exit options that maintain profitability while advancing our mission.
Yeah, we’re actually very excited about this possibility! I had to resist adding another 2 pages on this as a separate route towards impact because the post was already too long, although I gestured at it briefly when I mentioned a possible “on-ramp towards deeper involvement in EA.”
As you suggest, if funding continues to grow at its current pace (and the other concerns we discuss about relying on a few funders prove incorrect), then growing the community may become a higher priority than merely increasing donation volume. While many donors won’t be interested in deeper involvement with EA, if we actually scale to have 10M donors on the platform, introducing a small portion of these donors to the EA community would be extremely valuable.
We intend to provide escalating layers of information that become available as a user expresses interest. In the mobile app we had default portfolios, but we also provided charity profiles with additional descriptions, impact estimates, and links out to the charity evaluator that recommended the organization. We hope to eventually go one step further and offer other routes to engage with the community, such as career coaching, mentors, EA fellowships, and more. This would allow you to easily make a donation without needing to process too much information, but if you are interested in engaging more deeply afterwards, that should also be encouraged.
Other FinTech platforms take a similar approach to financial education: they start with automation/robo-advisors, escalate to tips and suggestions once you’ve demonstrated engagement, and sometimes move all the way to interacting with a financial advisor (in our case, perhaps a career advisor or an EA fellowship). We think something in this vein is likely a very promising alternative route to impact.
Great question. I actually don’t think that data will prove very useful yet, although I’d be happy to share it with you directly. We found that donations in the mobile app primarily followed our defaults, and we intentionally didn’t spend much time experimenting with those defaults. We were struggling with donor acquisition, and the evidence was already strong that we could influence donation choices, so we turned our attention to the donation pages. We haven’t yet rolled out the new donor portal, where defaults will once again become critical, but once we do, experimenting with defaults will become very important (and then that data will be more revealing).
As for the stat I mentioned: we ran a series of campaigns focused on donating money saved during COVID (e.g. donating the price of your canceled commute or a refunded concert ticket). We rapidly built a portfolio of relevant charities when COVID hit, and as always, the donation volume closely tracked our defaults. Some defaults were doing direct relief and were included to increase familiarity (e.g. Feeding America). Others were recommended by 80k (e.g. Gates Foundation) or Founders Pledge (e.g. Johns Hopkins CHS), or had received Open Philanthropy grants (e.g. CDC Foundation). We defaulted JHCHS to 10%, which it received. The CDC Foundation likewise received 25%. However, I doubt their share was allocated to preventing future pandemics (in retrospect, it is clear that the OP grants were carefully earmarked for specific programs). But that seems like a question of default selection, so my conclusion was that defaults almost entirely determine the donation volume to each charity. I wasn’t sure whether to count the 25%, so I listed it as 10-35%.
We’ve actually seen far more than 35% go to longtermism in some cases. One campaign that defaulted to the Good Technology fund had 93% of donors giving to CHAI, CISAC, FHI, JHCHS, MIRI, and NTI (donate every time a news article comes out on nuclear disarmament). What’s more, one donation rule that had nothing to do with longtermism achieved 85% simply because Good Tech was the default (donate every time This American Life airs). But these had a smaller sample size, so I wouldn’t conclude much beyond the exceptional influence of defaults.
Defaults seem to predict a massive portion of the variance, which suggests it shouldn’t be too challenging to get donors to support longtermism. If we can get enough donors into the portal, the hard work will be carefully choosing which charities to set as defaults. If we reach the point where donation volume is high enough, I am hopeful we can rely on help from other EA organizations that are better suited to make these decisions than we are. We may even establish an independent nonprofit (one that doesn’t answer to the for-profit) that owns decisions around charity recommendations.
100% agreed with everything you said here. We’ve thought through some of the scenarios you brought up, but didn’t want to get too bogged down in more complicated estimates in the main post. Our more in-depth estimates might place the break-even point closer to 1,000, perhaps a few thousand to be very safe. Happy to discuss in more depth, but as you say, it becomes less relevant once the numbers get larger.
Interesting idea! We haven’t built that yet, but I think we could add a feature that adds up your donations throughout the year and tracks your projected impact, then waits until Giving Tuesday to actually disburse the funds (in a way that would enable the match).
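For what it’s worth, here’s a rough sketch of how such a feature might work; everything in it (the class name, the match logic, the dates) is hypothetical rather than something we’ve designed:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeferredDonationLedger:
    """Hypothetical ledger: log donations now, disburse on Giving Tuesday."""
    disbursement_date: date  # e.g. Giving Tuesday
    pledges: list[float] = field(default_factory=list)

    def record(self, amount: float) -> None:
        # Log the donation immediately so projected impact can be shown.
        self.pledges.append(amount)

    @property
    def projected_total(self) -> float:
        return sum(self.pledges)

    def disburse(self, today: date, match_rate: float = 1.0) -> float:
        # Release the accumulated funds (plus any match) once the date arrives.
        if today < self.disbursement_date:
            raise ValueError("funds are held until the disbursement date")
        return self.projected_total * (1 + match_rate)

# Example: donations accrue during the year, then double via a 1:1 match.
ledger = DeferredDonationLedger(disbursement_date=date(2021, 11, 30))
ledger.record(25.0)
ledger.record(40.0)
print(ledger.projected_total)               # 65.0 projected so far
print(ledger.disburse(date(2021, 11, 30)))  # 130.0 disbursed with the match
```

The donor would see their projected impact grow all year, while the actual transfer waits for the match window.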
As Nick said, it would be wonderful to see follow-up studies here that try to flesh out these different aspects. We don’t think we’re covering everything in EA (although the description Nick posted below is from effectivealtruism.org, so it seemed like a decent first attempt). But that certainly seems correct: you could have very different answers to “who likes extreme altruism”, “who likes AI safety”, etc.
The community question is a particularly interesting one because it might be more of a historical artifact than a necessary trait of the movement. There could be people who would be a perfect fit for the ideas of EA (however defined: x-risk, donating 50%, etc.) but who still might not like the current community. How to actually deal with that finding would be a different question, but it seems like it would be worth knowing.
Thanks, Siebe. While I certainly agree that we don’t take the most extreme form of effective altruism, I don’t think it’s actually as focused on narrow Effective Giving as you suggest. We used that language in the original write-up because we wanted it to be accessible to a non-EA audience. But if you look at the language of the actual description (Nick posted it above), we took that from effectivealtruism.org, and it actually focuses pretty broadly on trying to do good, not just on donating.
But as we mention, I think this is just the tip of the iceberg; I don’t think this research is at all the end of the story. We’ve been working on a follow-up study that includes cause neutrality, but it would be great to see people study similar questions on more extreme forms of effective altruism, and maybe even include an element of the community.
I really like the approach behind this post—too often EAs are hesitant to think about ways we can make use of our own psychology for pursuing altruism. It appears to some EAs that tricks like donating to a cause area (to avoid identifying too strongly in opposition to it) should not be part of a rationalist’s toolkit. But accepting that we are all biased, and doing what we can to overcome those biases in favor of what we would rationally, reflectively endorse as the unbiased viewpoint, can only help us increase our effectiveness in pursuing our altruistic goals.
Very nice! I’ve had people ask me before how to make a charity more effective, and it’s always been somewhat uncomfortable to have to say that EA focuses more attention on evaluating the existing effectiveness of charities than on helping charities become more effective. But this is one step better than just helping existing charities become more effective: this is creating effective charities from the ground up. Bravo.
This is terrific—thanks for taking steps to make this a reality! Excited to see what wonderful things come out of the people who are staying there.
I’d agree with being hesitant to distinguish definitions of EA for “academic” and “outreach” purposes. It seems like that’s asking for someone to use the wrong definition in the wrong context.
It could also be useful to specify a few other things about the question, such as whether charities saving future lives are legitimate to include in the calculation, and whether the language about helping the world’s poorest people was specifically intended to restrict the set to global poverty charities.
Great question: a negligible portion of our donors would identify as EA. I’d estimate at most a few hundred of our users would identify as EA; almost all of the other donors came in through acquisition channels completely unrelated to EA. So I’d estimate over 99% are non-EA donors. Indeed, there are only about 10k EAs in the entire movement, so even if we’d somehow captured every existing EA, they would still make up only about 1/4 of our 40k donors.