Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this otherwise well-written post didn't say so.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and that seems to be the case here. However, where a funder is as critical to an ecosystem as GV is here, I think fairly high transparency about its unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand whether the 800-pound gorilla funder will be closed to them.
[1] I place this in quotes because the term is ambiguous.
Good Ventures did indicate to us some time ago that they don’t think they’re the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment’s claim about an aversion to opportunities that are “even slightly right of center in any policy work,” (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.
Also, I don't actually think this should affect people's actions much, because my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is "available opportunities that look high-impact from an AI GCR perspective," not "available funding." If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can't guarantee we'll think it's promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we've simply found only a few opportunities that seem above our bar for expected impact on AI GCR, despite years of searching.
Can you say what the “some kinds” are?