Your real account, not just this burner.
Dustin Moskovitz
This is in such poor taste you should seriously delete your account. April Fools' Day isn't The Purge; you still have to have basic decency and respect for the community.
I understood what you meant before, but still see it as a bad analogy.
For context I saw many rounds of funding as a board member at Vicarious which was a pure lab for most of its life (and then later attempted robotics but that small revenue actually devalued it in the eyes of investors). There, what it took was someone getting excited about the story and smaller performance milestones along the way.
Again, why does it have to be X=$1B and probability 1?
It seems like if the $30M mattered, then the counterfactual is that they needed to be able to raise $30M at the end of their runway, at any valuation, rather than $1B, in order to bridge to the more impressive model. There should be a sizeable gap in what constitutes a sufficiently impressive model between those scenarios. In theory they also had “up to $1B” in grants from their original funders, including Elon, that should have been possible to draw on if needed.
How did you come to the conclusion that funding ML research is “pretty messy and unpredictable”? I’ve seen many ML companies funded over the years as straightforwardly as other tech startups, esp. if they had great professional backgrounds as was clearly the case with OAI. Seems like an unnecessary assumption on top of other unnecessary assumptions.
Why do you believe that’s binary? (Vs just less funding/smaller valuation at the first round)
Yes that’s my position. My hope is we actually slowed acceleration by participating but I’m quite skeptical of the view that we added to it.
Unless it’s a hostile situation (as might happen with public cos/activist investors), I don’t think it’s actually that costly. At seed stage, it’s just kind of normal to give board seats to major “investors”, and you want to have a good relationship with both your major investors and your board.
The attitude Sam had at the time was less “please make this grant so that we don’t have to take a bad deal somewhere else, and we’re willing to ‘sell’ you a board seat to close the deal” and more “hey would you like to join in on this? we’d love to have you. no worries if not.”
I’m not sure what can be shared publicly for legal reasons, but would note that it’s pretty tough in board dynamics generally to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.
I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant." Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise. I've gone back and looked at some of the comms around the time (2016) as well as debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and the creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.
Sure but that was explicitly carved out as a different sub-poll here so these seemed relevant to this narrow phrasing.
My friends are less receptive to EA since the crisis, but it seems like it’s because of multiple scandals/storylines from recent months, rather than FTX itself.
My friends are less receptive to EA since the crisis primarily due to FTX
Will’s version https://twitter.com/willmacaskill/status/1591218014707671040
People debate if I'm core I guess, but I left some thoughts on taking it seriously here https://twitter.com/moskov/status/1591592375553781760
>> To be clear, the reason I think this is a more convincing argument in climate than in many other causes is (a) the vastness of societal and philanthropic climate attention and (b) it’s very predictable brokenness, with climate philanthropy and climate action more broadly generally “captured” by one particular vision of solving the problem (mainstream environmentalism, see qualification below).
Vast attention is the mechanism that causes popular causes to usually have lower ROI on the margin; i.e. some of all that attention is likely competent.
I’m not sure what other causes you have in mind here. I think the argument with your two conditions applies equally well to large philanthropic areas like education, poverty/homelessness, and art.
>> It seems to me that consistency would require that we should assume other climate grants can meet the OP bar as well
Absolutely, I agree they can. Do you publish your cost effectiveness estimates?
I believe that’s an oversimplification of what Alexander thinks but don’t want to put words in his mouth.
In any case, this is one of the few decisions the 4 of us (including Cari) have always made together, so we have done a lot of aligning already. My current view, which is mostly shared, is that we're currently underfunding x-risk even without longtermism math, both because FTXF went away and because I've updated towards shorter AI timelines in the past ~5 years. And even aside from that, we weren't at full theoretical budget last year anyway. So that all nets out to an expected increase, not a decrease.
I’d love to discover new large x-risk funders though and think recent history makes that more likely.
As AI heats up, I'm excited and frankly somewhat relieved to have Holden making this change. While I agree with 𝕮𝖎𝖓𝖊𝖗𝖆's comment below that Holden had a lot of leverage on AI safety in his recent role, I also believe he has a vast amount of domain knowledge that can be applied more directly to problem solving. We're in shockingly short supply of that kind of person, and the need is urgent.
Alexander has my full confidence in his new role as the sole CEO. I consider us incredibly fortunate to have someone like him already involved and prepared to succeed as the leader of Open Philanthropy.
Yes
Yeah, that's reasonable.
If ASB said “there are good reasons not to provide more details”, would you accept that, or ask for the reasons?
Apologies for the snarky language, but I did not mean to disparage the criticisms in the slightest. I think they are quite fine as they are, and they do add value (80% confidence). I'm just pointing out that people frequently say there is no scrutiny of OP while engaging in an explicit act of scrutiny.
I don't feel I have much to say about that tbh, though I did talk about auditing financials here: https://forum.effectivealtruism.org/posts/eRyC6FtN7QEkDEwMD/should-we-audit-dustin-moskovitz?commentId=qEzHRDMqfR5fJngoo
If we have another major donor with a more mysterious financial background than mine, we should totally pressure them to undergo an audit!
That said, I’m not convinced the next scandal will look anything like that, and the real problem to me was the lack of smoking guns. It’s very hard to remove someone from power without that, as we’ve recently observed with sama, and continuously observe with Elon.
So the upshot is my prediction is we will again fail to identify and correct possible scandals, and I’m not sure we should beat ourselves up about it as much as we do. My post was more meant to soften the ground on that likely outcome so that we don’t see it as a fatally damning tragedy when it happens, for EA or any other movement.