Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.
> > Would such a game “positively influence the long-term trajectory of civilization,” as described by the Long-Term Future Fund? For context, Rob Miles’s videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
>
> It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?
This struck me as strawmanning.
The original post asked whether the game would positively influence the long-term trajectory of civilization. It didn’t spell this out, but presumably we want that to be a material positive influence, not a trivial rounding error; that is, we care about how much positive influence there is.
The extent of that positive influence is reduced when clear and popular explanations already exist. Hence I do believe the existence of the videos is relevant context.
Your interpretation (“It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?”) is a much stronger and more attackable claim than my reading of the original.
> > It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children’s lives or provide cataract surgery to around 4000 people?
>
> These are totally different modes of impact. I assume you could make this argument for any speculative work.
I’m more sympathetic to this, but I still didn’t find your comment helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept that “funds have an opportunity cost” (arguably in unnecessarily hyperbolic terms), so your comment wasn’t a useful update for me.
On the other hand, I appreciated this comment, which I thought was valuable:
> I also like grant evaluation, but I would flag that it’s expensive, and often, funders don’t seem very interested in spending much money on it.
>
> A donor-pays philanthropy-advice-first model solves several of these problems.
>
> - If your model focuses primarily on providing advice to donors, your scope is “anything which is relevant to donating”, which is broad enough that you’re bound to have lots of high-impact research to do, which helps with constraint 1.
> - Strategising and prioritisation are much easier when you’re knee-deep in supporting donors with their donations—this highlights the pain points in making good giving decisions, which helps with constraint 2.
> - If donors perceive that the research is worth funding, and have potentially had input into the ideation of the research project, they are likely to be willing to fund it, which helps with constraint 6.
This explains why SoGive adopted this model.