Our goal is to direct $1B per month to EA causes by 2030
vs
5% chance the platform directs over $500M/year to EA
These are fairly different, and it feels kind of misleading to start with the first one in the heading and change to the second one in the main body.
I’d still want to bet against “5% chance the platform directs over $500M/year to EA [by 2030]”, but not as eagerly after a second re-read.
Momentum has moved over $10M with our software from 40,000 donors. In our mobile app, 87% of donations went to our recommended charities
Wait, is this 87% of the $10M, or does this include the donation pages? What’s the % for total donations as a whole? Do you have a rough sense of how many of those donations are counterfactual?
In any case, cheers and best of luck, this feels like an ambitious undertaking that could have a large impact.
Thanks for the thoughtful questions, and I appreciate the close reading it must have taken to pick up on that level of nuance. To address each of them:
1) $500M/year:
“You change to [$500M/yr] in the main body”
Interesting, I’m wondering why you felt that was the main claim of the body. While at one point I do give a 5% Fermi estimate of $500M/yr, it’s immediately followed by a 1% estimate of $5B/yr, and I never refer to $500M at any other point. I also give estimates of $11B/yr twice in the body (slightly less than $1B/mo) and estimates of $4-5B/yr twice as well. These latter numbers roughly correspond to the original goals mentioned: to move $1B/mo and to grow funding by over an order of magnitude (>$4B/yr).
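To make the correspondence explicit, here’s a quick back-of-the-envelope sketch relating the headline goal to these estimates (it only restates the figures already given above, nothing new):

```python
# Back-of-the-envelope check relating the headline goal to the estimates in the body.
# All dollar figures are the ones already quoted in this thread; nothing here is new data.

monthly_goal = 1e9                 # headline goal: direct $1B per month
annual_goal = monthly_goal * 12    # = $12B per year

body_estimate_high = 11e9          # the $11B/yr estimate given twice in the body
body_estimate_mid = (4e9, 5e9)     # the $4-5B/yr estimates given twice in the body
fermi_5pct_outcome = 0.5e9         # the 5% Fermi estimate: over $500M/yr
fermi_1pct_outcome = 5e9           # the 1% Fermi estimate: over $5B/yr

# $11B/yr sits just under the $12B/yr implied by the $1B/mo goal,
# which is why the two sets of numbers roughly correspond.
print(f"$1B/mo = ${annual_goal / 1e9:.0f}B/yr; $11B/yr is slightly below that")
```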
More importantly, I want to distinguish our goals from the range of possible outcomes (or even the most likely outcome). It seems valuable to set extremely ambitious goals while still being honest about the chance of success. Being honest about those probabilities shouldn’t be seen as undermining that ambitious goal.
2) 87% & $10M:
Great question—it is not 87% of the $10M; it’s 87% of the money moved on the mobile app. The $10M is total money moved, including the donation pages. As discussed in the post, we haven’t rolled out the donor portal to most of the new donation pages yet, so it is very hard to give a good estimate of the percentage for total donations. Since we haven’t started nudging the new donors towards recommended charities yet, the only money moved to recommended charities so far is from mobile donors. If I recall correctly, the mobile app moved around $100K before we switched to focusing on donation pages. You might expect that to mean $87K to recommended charities (87% x $100K), but in fact it’s closer to $230K to date: recurring donors are retained at a very high rate, and many mobile donors are still donating to our recommended charities (they can manage donations through the donor portal).
Naturally, $230K still only represents a small fraction of the $10M (~2%), since the new donation pages reach far more donors and we haven’t yet begun recommending charities to those donors. We are very eager to roll out the donor portal to the donors using the donation pages and see what that percentage becomes. As discussed in the post, we expect it to be lower than 87%, but we don’t know how much lower.
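To spell out the arithmetic, here’s a quick sketch using only the rounded figures from this reply (nothing new):

```python
# Rough arithmetic behind the mobile-app figures above (rounded values from this reply).

mobile_app_moved = 100_000      # ~$100K moved on the mobile app before the pivot to donation pages
recommended_share = 0.87        # 87% of mobile-app donations went to recommended charities

naive_total = recommended_share * mobile_app_moved   # ~$87K if giving had stopped at the pivot
actual_to_date = 230_000        # actual total to recommended charities, thanks to retained recurring donors

total_moved = 10_000_000        # ~$10M moved across all of our software
share_of_total = actual_to_date / total_moved         # ~2.3%, i.e. the "~2%" mentioned above

print(f"naive estimate: ${naive_total:,.0f}")
print(f"actual to date: ${actual_to_date:,.0f} ({share_of_total:.1%} of the $10M)")
```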
3) Counterfactual:
I’ve been unable to think of a great way to estimate this. We do have evidence that donations tend not to be made from a fixed budget (e.g. people are very open to giving more or less depending on how much we ask), so we might expect a large portion of donations to be counterfactual. More importantly, even if someone supporting our recommended charities would have donated the money regardless, it presumably would have gone to a different charity: these donors are new to effective giving, and otherwise most would support far less effective charities. That said, we don’t have a good way to come up with a numerical estimate for any of this, so if you have any ideas I would love to hear them, as this question is extremely important to us.
Hopefully this helps to clarify things!
Hey, the main uncertainty I see with Momentum is “does it have product-market fit beyond the Trump-specific campaign”, since I can imagine the Trump campaign being somewhat of an anomaly: interesting, but also maybe very hard to reproduce. I don’t know.
As far as I understand, what we currently know about product-market fit is:
The Trump campaign raised $9,300,000 (source below)
The mobile app raised $230K, but was closed down (which means it didn’t have good product market fit, I assume)
“Donation pages” (that are not Trump-related?) - this sounds very interesting! Could you say more? For example:
How much money did you move?
Is it more than the charities would have probably raised otherwise? (however you’d like to estimate this, I don’t mind)
Did you spend money on adwords or something like that, or was it raised organically?
Anything else you can say about the product market fit here?
And I understand this sums up to
Momentum has moved over $10M with our software from 40,000 donors
I am very interested to hear how the 40k donors are spread across these projects, as well as the amounts they donated, if you’re ok with sharing it.
Source for Defeat by Tweet donations:
I hope you’ll have amazing product-market fit, and I think you’ll change the world if you do.
@arikagan
Yeah, great question—I agree that this is more important than many of the questions I lay out in the FAQ (but unfortunately it’s not one we’re frequently asked). I agree the mobile app did not have PMF (which is why we shut it down). The vast majority of the 40k donors are from DBT (Defeat by Tweet, the Trump campaign, and by far the largest campaign we’ve launched), so the interesting question is whether we can replicate that to demonstrate PMF.
We spent most of this last year rebuilding our technology and running campaigns with small nonprofits to see which features are needed for PMF. We can now easily acquire small nonprofit customers but have learned all we can from them. While these are easy minimum viable sales tests, these customers don’t have substantial donation volume: saying that we got a 40% increase in donation volume isn’t meaningful if we moved an org from $200/mo to $280/mo. Now that we’ve hired several engineers and upgraded our tech, we’re moving on to testing with larger nonprofits with significant donation volume (our target market is orgs moving over $1M/year). Of course, even with the right product you still need to find the right growth channel, but that should also be revealed through the tests we’re running on the new market segment. We believe these tests will answer the three most important questions right now: whether we truly have PMF, which growth channels we should prioritize, and how large the increase in donation volume is for a nonprofit using our software.
Yeah, great question—I agree that this is more important than many of the questions I lay out in the FAQ (but unfortunately it’s not one we’re frequently asked)
<3
TL;DR: My suggestion would be to check product-market fit without developing any significant technology, especially since you said things like “We spent most of this last year rebuilding our technology”.
For example:
Use your existing tech from the Trump campaign
Use your existing tech from the $280/mo orgs
I imagine both have messy code (as with all startups) and require actual developer time to customize for each customer, but I think that’s all ok (as long as you don’t get financial transactions wrong).
I’d try selling to a $1M/year nonprofit, tell them “you’ll be our design partners and we’ll do custom development for you”, and try to make nothing generic (are you trying to make tech that will be relevant for multiple customers? Maybe that’s our double crux).
I don’t know anything about your internals, so these suggestions might be totally silly for reasons I don’t understand, sorry for that. I thought it’s better to offer than not to, and that it would be better to put all my disclaimers here instead of in each sentence above :P
I’d be happy for lots of pushback of course <3
Or maybe I misunderstood and this is already exactly what you’re doing:
we’re moving on to testing with larger nonprofits with significant donation volume
Yeah, I basically agree with everything you said! More specifically:
We tested with the small nonprofits using the same hacked-together software we used for DBT (literally a static page with an iframe to our webapp)
We were in fact making financial transaction errors (e.g. double-charging donors) since our database was not designed for scale, so we had to rebuild our backend. We had no in-house engineers, so we paused development until we hired engineers, who have since rearchitected the backend.
We’re trying to launch with mid-sized nonprofits using the improved technology we built for the small nonprofits.
Our sales calls have revealed pretty similar feature requests from every mid-sized nonprofit (e.g. a CRM integration). Many of these features are on our roadmap, but I agree with you that it would be even better to build them for a specific charity that is willing to sign rather than as “generic” tech.
We’re already focused on getting 3-5 “case studies” from mid-sized nonprofits, but I’ve recently been thinking about going to greater lengths to build custom features for those organizations, and after reflecting on your comments I’m updating further in that direction.
We did a lot of this custom work for DBT—we stopped developing generic features and almost worked like a dev shop to meet their needs (although we’ve since slowed development of custom features for that charity, as we are no longer learning as much from them).
Yeah, this is probably a cultural difference.
Makes sense! I originally just spoke about the aspirational goals, but then early reviewers asked for more explicit estimates of middle-of-the-road outcomes as well, so I added a range of possibilities. Perhaps that made it more confusing than if I’d just kept one set of estimates.