As AI heats up, I’m excited and frankly somewhat relieved to have Holden making this change. While I agree with 𝕮𝖎𝖓𝖊𝖗𝖆′s comment below that Holden had a lot of leverage on AI safety in his recent role, I also believe he has a vast amount of domain knowledge that can be applied more directly to problem solving. We’re in shockingly short supply of that kind of person, and the need is urgent.
Alexander has my full confidence in his new role as the sole CEO. I consider us incredibly fortunate to have someone like him already involved and prepared to succeed as the leader of Open Philanthropy.
Dustin Moskovitz
I’m grateful that Cari and I met Holden when we did (and grateful to Daniela for luring him to San Francisco for that first meeting). The last fourteen years of our giving would have looked very different without his work, and I don’t think we’d have had nearly the same level of impact — particularly in areas like farm animal welfare and AI that other advisors likely wouldn’t have mentioned.
I’m not sure what can be shared publicly for legal reasons, but would note that it’s pretty tough in board dynamics generally to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.
I believe the implicit premise of the question is something like “do those benefits outweigh the potential harms of the grant.” Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise. I’ve gone back and looked at some of the comms from around that time (2016) as well as debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.
A couple replies imply that my research on the topic was far too shallow and, sure, I agree.
But I do think that shallow research hits different from my POV, where the one person I have worked most closely with across nearly two decades happens to be personally well-researched on the topic. What a fortuitous coincidence! So the fact that he said “yeah, that’s a real problem” rather than “it’s probably something you can figure out with some work” was a meaningful update for me, given how many other times we’ve faced problems together.
I can absolutely believe that a different person, or further investigation generally, would yield a better answer, but I consider this a fairly strong prior rather than an arbitrary one. I also can’t point at any clear reference examples of non-geographic democracies that appear to function well and have strong positive impact. A priori, it seems like a great idea, so why is that?
The variations I’ve seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.
I believe that’s an oversimplification of what Alexander thinks but don’t want to put words in his mouth.
In any case, this is one of the few decisions the 4 of us (including Cari) have always made together, so we have done a lot of aligning already. My current view, which is mostly shared, is that we’re currently underfunding x-risk even without longtermism math, both because FTXF went away and because I’ve updated towards shorter AI timelines in the past ~5 years. And even aside from that, we weren’t at full theoretical budget last year anyway. So that all nets out to an expected increase, not a decrease.
I’d love to discover new large x-risk funders though and think recent history makes that more likely.
Given that your proposal is to start small, why do you need my blessing? If this is a good idea, then you should be able to fund it and pursue it with other EA donors and effectively end up with a competitor to the MIF. And if the grants look good, it would become a target for OP funds. I don’t think OP feels their own grants are the best possible, but rather the best possible within their local specialization. Hence the regranting program.
Speaking for myself, I think your list of criteria makes sense but is pretty far from a democracy. And the smaller you make the community of eligible deciders, the higher the chance they will be called for duty, which they may not actually want. How is this the same or different from donor lotteries, and what can be learned from that? (To round this out a little, I think your list is effectively skin in the game in the form of invested time rather than dollars.)
>> OP to be more open-by-default than other foundations
Which foundations do you think are more open than OP? Transparency is a spectrum and OP certainly seems to publish quite a bit. No others have forums dedicated to verbosely tearing apart their grants when they smell weakness.
That said, Open has an oft-forgotten second meaning: OP is open to any cause area in theory. i.e. it is cause neutral.
I think the primary content in ASB’s comment is actually “hits-based”, i.e. this is a grant with a low probability of a big win. To think it is obviously bad, as so many commenters here do, you must have e.g. 90% confidence that the grant won’t result in $5M worth of counterfactual x-risk funding. (I do not have inside information that this was the goal/haven’t talked to ASB about it; it just seems like the right kind of goal for an org focused on “elephant bumping”.) [Striking this bit out, as someone pointed out you might have more rules-based objections to the grant.]
Absolutely, I did not mean my comment to be the final word, and in fact was hoping for interesting suggestions to arise.
Good point on the more detailed plan, though I think this starts to look a lot more like what we do today if you squint the right way. e.g. OP program officers are subject matter experts (who also consult with external subject matter experts), and the forum regularly tears apart their decisions via posts and discussion, which then gets fed back into the process.
Unless it’s a hostile situation (as might happen with public cos/activist investors), I don’t think it’s actually that costly. At seed stage, it’s just kind of normal to give board seats to major “investors”, and you want to have a good relationship with both your major investors and your board.
The attitude Sam had at the time was less “please make this grant so that we don’t have to take a bad deal somewhere else, and we’re willing to ‘sell’ you a board seat to close the deal” and more “hey would you like to join in on this? we’d love to have you. no worries if not.”
This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it’s just convenient to give them a broad label like “democratizing”. (At Asana, we’re similarly “democratizing” project management!)
Others seem to believe democracy is intrinsically superior to other forms of governance; I’m quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more specific solutions along those lines, like an appeals board for COI or retaliation claims. The formal power might still lie with OP, but we would have strong soft reasons for wanting to defer.
In the meantime, I think the forum serves that role, and from my POV we seem reasonably responsive to it? Esp. the folks with high karma.
I probably should have been clearer in my first comment that my interest in democratizing the decisions more was quite selfish: I don’t like having the responsibility, even when I’m largely deferring it to you (which itself is a decision).
Sure, I think you can make an argument like that for almost any cause area (find neglected tactics within the cause to create radical leverage). However, I’ve become more skeptical of it over time, both bc I’ve angled for those opps myself, and because there are increasingly a lot of very smart, strategic climate funders deploying capital. On some level, we should expect the best opps to find funders.
>> E.g. your own Regranting Challenge allocated 10m to a climate org on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute.
The award page has this line: “We are particularly interested in funding work to decarbonize the power sector because of the large and neglected impacts of harmful ambient air pollution, to which coal is a meaningful contributor.” i.e. it’s part of our new air quality focus area. Without actually having read the write-up, I’m sure they considered climate impact too, but I doubt it would have gotten the award without that benefit.
That said, the Kigali grant from 2017 is more like your framing. (There was much less climate funding then.)
I have a long track record of being funny, thank you very much! https://twitter.com/moskov/status/1556349357879808000
I think we’re already along the path, rather than at one end, and thus am inclined to evaluate the merits of specific ideas for change rather than try to weigh the philosophical stance.
Yes that’s my position. My hope is we actually slowed acceleration by participating but I’m quite skeptical of the view that we added to it.
Your real account, not just this burner.
This is in such poor taste you should seriously delete your account. April fools day isn’t The Purge—you still have to have basic decency and respect for the community.
Apologies for the snarky language, but I did not mean to disparage the criticisms in the slightest. I think they are quite fine as they are, and do add value (80% confidence). I’m just pointing out that people frequently say there is no scrutiny of OP while engaging in an explicit act of scrutiny.
I was trying to highlight a bootstrapping problem, but by no means meant it to be the only problem.
It’s not crazy to me to create some sort of formal system to weigh the opinions of high-karma forum posters, though as you say that is only semi-democratic, and so reintroduces some of the issues Cremer et al were trying to solve in the first place. I am open-minded about whether it would be better than Open Phil, assuming they get the time to invest in making decisions well after being chosen (sortition S.O.P.).
I agree that some sort of periodic rules reveal could significantly mitigate corruption issues. Maybe each generation of the chosen council could pick new rules that determine the subsequent one.
Historically it has looked quite intractable, but I’d say that’s changing recently and may spur more grants.
The bigger problem though is it’s a first-world issue, so you automatically get a big haircut on cost effectiveness. Even so, this is one of only a handful of things that are prioritized for those geos.
If folks don’t mind, a brief word from our sponsors...
I saw Cremer’s post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the “EA community” is, as far as I can tell, a complete non-starter.
My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggested it’s effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population.
I have not myself come up with a non-geographic strategy that doesn’t seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become “dues-paying” members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit for reasons I support!).
There is a serious, live question of what defines an EA right now. Are they longtermists? Do they include animals in the circle of moral concern? Shrimp? I’m not sure how you could establish clear membership criteria without first answering these questions, and that feels backwards. I do think you could have separate pools of money based on separate worldviews, but you’d probably have to cut pretty narrowly, which defeats the point.
As an example, the top-rated fund at GWWC is the one for Climate Change: https://www.givingwhatwecan.org/charities/founders-pledge-climate-change-fund Working on climate change is certainly important, but I see that as fairly suggestive evidence that a more democratic approach would be dilutive to EA principles (i.e. neglectedness in this case) and result in more popular cause selection.