I appreciate the investigation, but have mixed feelings about these points.
> Would such a game “positively influence the long-term trajectory of civilization,” as described by the Long-Term Future Fund? For context, Rob Miles’s videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue? As much as I’d like it to be the case that something just has to be explained in one way to one group at one time, and everyone else will figure it out, generally disseminating even simple ideas is a lot of work. The basics of EA are very simple, but we still need to repeat them in many ways to many groups to make even a limited impact.
> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children’s lives or provide cataract surgery to around 4000 people?

These are totally different modes of impact. I assume you could make this argument for any speculative work. There are many billions of dollars spent each year on research and projects that end up as failures.
I’m scared of this argument because it’s easy to use it to attack any speculative work. “A Givewell analyst spent 3 months investigating economic policies in India and that didn’t work out? They could have saved 5 lives!”
I also want to flag that $100k sounds like a lot to some individuals, but in practice, often buys frustratingly little when spent on western professionals. One good software developer can easily cost $200k-$300k per year, all things included, if employed.
> With seven full years of funding on record, I believe a thorough evaluation of previous grants is needed.
I also like grant evaluation, but I would flag that it’s expensive, and often, funders don’t seem very interested in spending much money on it. One question is how much of the total LTFF budget should go to grant evaluation. I’d expect probably 2-8% is reasonable, but funders might not love this.
> However, I found numerous other cases, many even worse, primarily involving digital creators with barely any content produced during their funding period, people needing big financial support to change careers, and independent researchers whose proposals had not, at the time of writing, resulted in any published papers.
I’d be curious to see more analysis here. If it is the case that a very large fraction of grants are useless, and very few produce huge wins, then I agree that that would definitely be concerning.
I would flag that I think many of us are fairly frustrated by opportunities in longtermism. There aren’t many very clear wins as we’d like, so a lot of the funding is highly speculative right now.
--

Lastly, I’d of course grant that this project looks very underwhelming now. I have no idea what the story was behind funding it—I assume that there was some surprising evidence of promise, but it didn’t work out. I’m assuming that the team spent a few months on it, but it didn’t seem promising enough to continue. This situation is common in these sorts of projects, which can be very hit-or-miss.
Also, kudos for focusing on an organization instead of an individual—this seems like a safe choice to me.
I worry the most about this:
> I believe many donors, who think they are contributing effectively, may not be fully aware of how their money is being utilized.
You and I understand the current SotA for longtermist opportunities. The best a visitor to the EA Funds page gets is:
> (includes speculative opportunities)
(In low-contrast text, no less—the rest is presented as being equivalent to the other funds). I don’t have evidence for this claim, but I’m concerned that longtermist funds are drawing false equivalences; that most funders would assume their risk profiles are merely 1-10x worse when they may be orders of magnitude worse than that.
But, on the other hand, I don’t know how bad the problem is. It feels subjectively easy to cherry-pick joke projects, but as you note, these are small change compared to the huge amounts of money these funds have to give out. I don’t know if these projects make up the bulk of those getting this funding.
Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.
> Would such a game “positively influence the long-term trajectory of civilization,” as described by the Long-Term Future Fund? For context, Rob Miles’s videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
> It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?
This struck me as strawmanning.
The original post asked whether the game would positively influence the long-term trajectory of civilisation. It didn’t spell it out, but presumably we want that to be a material positive influence, not a trivial rounding error—i.e. we care about how much positive influence.
The extent of that positive influence is lowered when we already have existing clear and popular explanations. Hence I do believe the existence of the videos is relevant context.
Your interpretation “It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?” is a much stronger and more attackable claim than my read of the original.
> These are totally different modes of impact. I assume you could make this argument for any speculative work.
I’m more sympathetic to this, but I still didn’t find your comment to be helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept “funds have an opportunity cost” (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn’t a helpful update for me.
On the other hand, I appreciated this comment, which I thought to be valuable:
> I also like grant evaluation, but I would flag that it’s expensive, and often, funders don’t seem very interested in spending much money on it.
Thanks for the comment Sanjay!

I think your points are quite fair.

1. I agree my sentence “It sounds like you’re arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?” was quite overstated. I apologize for that.
That said, I’m really not sure if the presence of the Rob Miles videos decreased the value of future work much. Maybe by something like 20%? I could also see situations where the response was positive, revealing that more work here would be more valuable, not less.
All that said, my guess is that this point isn’t particularly relevant, outside of what it shows of our arguing preferences and viewpoints. I think the original post would have a similar effect without it.
> but I read the OP as simply expressing the concept “funds have an opportunity cost” (arguably in unnecessarily hyperbolic terms).
That’s relevant to know, thanks! This wasn’t my takeaway when reading it (I tend to assume that it’s clear that funds have opportunity costs, so focused more on the rest of the point), but I could have been wrong.
> I’d be curious to see more analysis here. If it is the case that a very large fraction of grants are useless, and very few produce huge wins, then I agree that that would definitely be concerning.
In particular, I’d like to see analysis of a fair[1] sample.
I don’t think we would necessarily need to see a “very large fraction” be “useless” for us to have some serious concerns here. I take Nicolae to raise two discrete concerns about the video-game grant: that it resulted in no deliverable product at all, and that it wouldn’t have been a good use of funds even if it had. I think the quoted analysis addresses the second concern better than the first.
If there are “numerous other cases, many even worse, . . . involving digital creators with barely any content produced during their funding period,” then that points to a potential vetting problem. I can better see the hits-based philanthropy argument for career change, or for research that ultimately didn’t produce any output,[2] but producing ~no digital output that the grantee was paid to create should be a rare occurrence. It’s hard to predict whether any digital content will go viral / have impact, but the content coming into existence at all shouldn’t be a big roll of the dice.
[1] I used “fair” rather than “random” to remain agnostic on weighting by grant size, etc. The idea is representative and not cherry-picked (in either direction).

[2] These are the other two grant types in Nicolae’s sentence that I partially quoted in my sentence before this one.
That true statement seemingly misses the forest for the trees, because money going further overseas is an effective altruism tenet:

https://www.givewell.org/giving101/Your-dollar-goes-further-overseas

The experience of software outsourcing is that replacing expensive Western software devs with cheaper foreign devs is often much more expensive than people expect. You can make a decent business from doing so, but it’s no free lunch (unlike for GiveDirectly, where $->utils is straightforwardly better in the third world), and I wouldn’t fault a startup for exclusively hiring expensive American devs.
+1 to this.

All the best tech companies have strong incentives to try to save money, but most still end up spending heavily in the US.
Add to that the fact that EAs who apply are heavily selected from western countries.
All that said, I do support trying to outsource some software work and other things. I’ve had some success outsourcing technical and operations work, and will continue to try to do so in the future. I think different organizations have different advantages here, depending on their unique circumstances. (If your organization needs to be around other EAs, it might need to be based in the Bay / DC / London. If the management is already in a cheap place and prefers remote work, it’s easier to be more remote-friendly.)
@Larks @Ozzie Gooen @huw I worked a decade in tech, and tradeoffs justifiably prevent outsourcing everything. The truism that frustratingly little commonly gets delivered for $100k felt like the original comment simply reiterating the realities behind the complaint. Questioning rather than defending status quo spending is still an effective altruism tenet. To clarify, I’d rather not fund anyone anywhere working on unpublished AI video games.
Equally, the best talent from non-Western countries usually migrates to Western countries where wages are orders of magnitude higher. So this ends up being self-reinforcing.