Agree that this seems useful! Hope these seem like they’re getting to interesting possible disagreements.
Henri Thunberg
You probably want to donate any Manifold currency this week
On the other hand, by filling the gap for 2024 we think there is over a 90% chance that we will be able to reach a sustainability tipping point i.e. have a viable income stream for at least 1 FTE and therefore avoid similar threats in the future.
Is your claim that, for 2025 funding, you will have ≥1 FTE of funding (120k złoty / 29k USD) secured by the end of 2024 – excluding Open Philanthropy, EA Infrastructure Fund, and Meta Charity Funders? Or does the statement permit grants from those sources?
5. [...] Wouldn’t it be better to give them the money and let them choose the best charity that’s gone unfunded from their applicant pool?
FWIW, I don’t think this is Meta Charity Funders’ model. I think they let funders join rather than donate to a pool. As far as I understand, after joining you get access to communal resources to help decide on grants – but the decision of where to donate remains with you as the original funder.
This looks great, thank you for doing work to increase excitement about effective giving further!
For those interested, I created a Manifold market for the Donation Election, where people can add their own charities and bet on how the funds will be distributed.
It seems obvious to me that numerous stakeholders—including organization leaders, donors of all sizes, group leaders, and entrepreneurs—would all benefit from having an accurate understanding of EA’s growth trajectory. And it seems just as obvious that it would be tremendously inefficient for each of those parties to conduct their own analysis. It would be in everyone’s interest to receive a regular (every 3-6 months?) update from a reliable analyst. This wouldn’t be expensive (it wouldn’t require anything close to a full-time job initially, though the frequency and/or depth of the analyses could be scaled up if people saw value in doing so). As public goods go, this should be an easy one to provide.
Quick vibe-check from Rethink Priorities: Could people react with Agree or Disagree if you think EA Infrastructure Fund (as an example) paying for the Surveys & Data Analysis team of Rethink Priorities to do this (say, for a trial period of one year), would be money well spent?
I understand it depends on scope and price, but as a general direction this information would be useful. We agree this seems important, have done similar work before for other organizations, and would be happy to take on such a project.
(I think the links in the summary are pointing to a collaborative-edit version of this doc, rather than the places you want them to.)
Love this! Seems like a great push (importantly, in a constructive and thoughtful way) for many that might otherwise feel frustrated with their current situation. Will definitely share this from time to time :))
Thanks for sharing this Linch, I found it a useful complement to the marginal grant thresholds post, which I recommend for those who enjoyed this post.
Thanks Joel for your thoughtful comment, which I’d like to build on.
I was thinking about how we can get funders to make calculated bets on applicants that have been discarded elsewhere, and get rewarded when they are proved right. Isn’t AI Safety Impact Markets trying to solve some of the issues with adverse selection through that kind of mechanism? Sorry for the lack of depth, but I think others can weigh in better.
What are your thoughts, for you personally, around...
I) Time spent
II) Joy of use
III) Value of information gained
of Manifold vs Metaculus?
Great listen, I enjoyed this a lot!
Kudos to Luisa, who does a really good job of acting as a “Watson”, asking the follow-up questions that listeners might have. Several times in this podcast I was happy with her summaries or clarifying questions, even if I suspect she often already knew the answers.
I would be surprised if the effect of the lack of a pledge drive ran on into February and March 2023, though. The YoY comparison here is against 12 months earlier, i.e. Jan 2023 vs Jan 2022, etc.
Emm sorry, what? Out of 8,000 GWWC pledgers, who have at least pledged to give 10%, very few earn $1M?
This is a great post!
I assume that you are, but better safe than sorry: Are you discussing this with Chris Lloyd at Good Impressions, who’s currently “investigating whether paid ads can be an effective fundraising tool” for EA organizations?
Thank you Eda for posting this. This must be a horrible situation to be in and I am so sorry for the losses and suffering.
Could you please give more pointers on why these organizations were chosen? While you can’t vouch for their effectiveness, I guess you are very comfortable with them doing relevant work and having a solid track record of similar activity? (To be extra clear, this is not criticism – I’m just trying to understand the extent of your efforts.)
At Ge Effektivt (Swedish effective donations platform) we wrote a blog post about it partly because we get questions from donors about how to approach the current crisis, but also for SEO purposes and to have more people discover EA/effective charities. We did mention some organizations that we were comfortable with naming, but as I’ve also seen Ahbap recommended elsewhere I’d be happy to extend/replace the charities we’re currently naming.
Best of luck in the fundraising efforts!
Listened to it while doing other stuff, so this might not be 100% accurate.
To my understanding, Tegmark appears for 10 minutes, giving a standard AI-risk spiel. I think the angle relevant to the podcast is the risk of power concentrating in the hands of a few – hence some accusations of big tech capturing AI conferences, etc.
There’s a small segue about covid, where Tegmark says he felt it was such a charged discussion that he couldn’t talk about it openly in some work environments for fear of repercussions.
As a Swede who is somewhat familiar with the publication Expo, I would put the risk of that document being a forgery at <5%. They are specifically known for their investigative journalism, and I would be very surprised if they screwed up something basic like that.
Also, wouldn’t it be extremely strange behavior from FLI if that document actually were a forgery? Claiming forgery would be the go-to defense, rather than what they are doing now.
I agree with this, there’s both a communication and a memory-hogging issue for each new Slack workspace you bring in.
So many conversations you’re in include a “Yeah, I think I’m in that Slack workspace, not sure”, since a few of them look alike.
That aside, I applaud the creation and hope to contribute.
Thank you for being transparent and insightful about the lessons learned. I found this post useful!
Would you be comfortable sharing some more statistics? I’m thinking things like...
- Rate of enrollment at companies
- Average donation amount when you were up and running; I suspect it was lower than described in “It currently has £15,000 amount going through each month from 150 users.”
- Dropoff rates from users’ payroll giving
- ...
You’ve nudged me one step closer to writing a similar thing about learnings from a Swedish charity startup I worked with in 2017-2020.
Welcome to the forum simoj!
I think this might not be a great idea, due to uncertainties in what would happen with the allocated charity budget if it doesn’t get donated right now. It’s quite possible that the counterfactual value is low, so we should probably not invest more into the ecosystem at this point – particularly when the future of the platform is so up in the air.