Building bespoke quantitative models to support decision-makers in AI and bio. Right now that means: forecasting capabilities gains due to post-training enhancements on top of frontier foundation models, and estimating the annual burden of airborne disease in the US.
Joel Becker
To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).
Thank you very much for sharing, Chloe.
Ben, Kat, Emerson, and readers of the original post have all noticed that the nature of Ben’s process leads to selection against positive observations about Nonlinear. I encourage readers to notice that the reverse might also be true. Examples of selection against negative information include:
Ben has reason to exclude stories that are less objective or have a less strong evidence base. The above comment is a concrete example of this.
There’s also a related point here about the supposed unreliability of Alice as a source: Ben needs to include this claim to give a complete picture, and because other people (in particular the Nonlinear co-founders) have made it. I strongly concur with Ben when he writes that he “found Alice very willing and ready to share primary sources [...] so I don’t believe her to be acting in bad faith.” Personally, my impression is that people are making an incorrect inference about Alice from characteristics of hers that are perhaps correlated with source-reliability in a large population, but aren’t logically related to it, and aren’t relevant in this case.
To the extent that you expect other people to have been silenced (e.g. via anticipated retaliation), you might expect not to hear relevant information from them.
To the extent that you expect Alice and Chloe to have had burnout-style experiences, you might expect not to read clarifications on or news about negative experiences.
Until this post came out, this was true of ~everything in the post.
There is a reason the post was published 1.5 years after the relevant events took place—people involved in the events really do not want to spend further mental effort on this.
Kat, I am really sorry about the severe emotional difficulty. It makes sense that having this post be public would be an extremely challenging thing to deal with, all the more so if you have decisive contrary evidence. I will be interested in engaging with whatever you present, once you have the opportunity.
I think it is important to say, as one of the people who Ben interviewed: my very strong impression has been that Ben is interested in the truth, and that he is acting in good faith. My guess is that if you have strong, contrary evidence regarding the most important claims, then Ben will engage with this evidence with an open mind and will signal boost if relevant.
Yes, I think that the post does not do enough to make it clear that the central allegations are not about Drew Spartz. Happy to expand.
That does not rescue the thought experiment.
Mortgage rates for older people are higher. And if a mortgage holder dies, the mortgage must still be paid by the executor of their estate, which is a disincentive for anyone with a bequest motive.
I’m sure that we can find some corner case where young cancer victims with no friends/family or no regard for their friends/family act otherwise. But this hardly seems important for the point that you—yes, you—can make money by implementing the trades suggested in this piece. Which is the claim that Yudkowsky is using the cancer victim analogy to argue against.
I’m sure it’s going to be a challenging time for community health. Thank you so much for all the amazing work you guys do. (Evergreen, but especially pertinent this week.)
Kat, thank you for this post. I appreciate the very helpful/understanding manner in which it is written. I’m really sorry that you needed to invest so much into this, although I think you made the right decision in doing so.
I’ll read more fully, probably sit with this for some time, and respond properly after that. (Keeping in mind my conditional pre-commitments to signal boost and seriously engage.)
Huh? Terminal cancer victims taking out 30-year mortgages is extremely different, in terms of the counter-party’s willingness to trade.
Isn’t the implication that the (EDIT: alleged) victim gave consent for Kat to share anonymously?
Thank you for sharing these reflections, Asya! And for your service as the LTFF chair!
I feel confused about the difficulty of fund manager hiring. One source of confusion comes from the importance of expertise (or of doing good direct work), as you touch on in the post:
Historically, we’ve had trouble hiring fund managers, especially in technical AI alignment, largely for the reasons mentioned above (people generally want to focus on their work). I think there’s an extent to which I’ve contributed to our difficulty in hiring, in that I’m not sold that people doing good direct work should be taking on additional responsibilities as fund managers (so haven’t been great at convincing people to join)
In addition to the high opportunity cost of time for expert fund managers, I would have guessed that small differences between the EVs of marginal grants push in the direction of expertise being less important. But then I don’t understand why hiring fund managers would be unusually challenging. Wouldn’t deemphasizing expertise increase the pool of eligible fund managers, thereby making hiring easier?
(Perhaps I’m confusing relative and absolute difficulty — expertise being less important would make hiring relatively easier, but it’s still absolutely tough?)
The second source of confusion comes from reconciling the difficulty of finding fund managers with the fact that FTXFF and Manifund seemed to find part-time grantmakers quite easily. I don’t know how many regrantors and grant-recommenders FTXFF ended up with, but the last rumour I heard was between 100 and 200. Manifund are currently on 16 and seem keen to expand. I would’ve thought that there is some intersection between regrantors with the top, say, 30% of grantmaking records by your lights, those satisfying other hiring criteria you might have, and those currently willing to work with LTFF.
Is the difference in the scale of grants LTFF fund managers make vs regrantors? Or expectations around regularity of response (regrantors are more flexible)? Or you’re not excited about the records of regrantors in general? Or something else?
Here’s another thing.
That’s a red line in my book, and I will not personally work with Nonlinear in the future because of it, and I recommend their exclusion from any professional communities that wish to keep up the standard of people not being silenced about extremely negative work experiences.
Let’s suppose that Nonlinear have crossed red lines, and that additional information from them won’t change this. (In reality I think that this is up in the air for the next week or so; I won’t allow my limited imagination to diminish the hope.)
Do you not believe in the possibility of rehabilitation in this case?
I haven’t read up on what norms here work well in other high-trust communities. But at least in criminal vs. society settings I would want to be a strong proponent of rehabilitation. It seems pretty plausible to me that, after thinking more about best norms in high-trust communities, I could come to think that “create horrendous work environment” and “create credible fear of severe retaliation” were things that could change (and be monitored) upon rehabilitation, and that it would be good for this to happen after X period of time.
Thanks for running this, Nuno! I had fun participating!
I agree with
My sense is that similar contests with similar marketing should expect a similar number of entries.
if we’re really strict about “similar marketing.” But, when considering future contests, there’s no need to hold that constant. The fact that e.g. Misha Yagudin had not heard of this prize seems shocking and informative to me. I think you could invest more time into thinking about how to increase engagement!
Relatedly, I have now had the following experience a number of times. I don’t know how to solve some problem in squiggle (charting multiple plots, feeding in large parameter dictionaries, taking many samples of samples, saving samples for use outside of squiggle, embedding squiggle in a web app, etc.). I look around squiggle documentation searching for a solution, and can’t find it. I message one of the squiggle team. The squiggle team member has an easy and (often but not always) already-implemented-elsewhere solution that is not publicly available in any documentation or similar. I leave feeling very happy about the existence of squiggle and the helpfulness of its team! But another feeling I have is that the squiggle team could be more successful if it invested more time in the final, sometimes boring mile of examples/documentation/evangelism, rather than chasing the next more intellectually interesting project.
Repost from LW:
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
If you can share (publicly or privately) strong evidence contradicting “claims [...] that wildly distort the true story” (emphasis mine), I pre-commit to signal boosting.
For what it’s worth, I wouldn’t be surprised if you do have strong counter-evidence to some claims (given the number of claims made, the ease with which things can be lost in translation, your writing this email, etc.). But, as of right now, I would be surprised if my understanding of the important stuff—roughly, the items covered in Ben’s epistemic state and the crossing of red lines—was wildly distorted. I hope that it is.
[EDIT, Nov 13: it sounds like the Nonlinear reply might be in the 100s of pages. This might be the right move from their point of view, but reading a three-figure number of pages stretches my pre-commitment above further than I would have intended at the time. I’d like to amend the commitment to “engaging with the >=20 pages-equivalent that seems most relevant to me or Nonlinear, or skimming >=50 pages-equivalent.” If people think this is breaking the spirit of my earlier commitment, I’ll seriously consider standing by the literal wording of that commitment (engaging with ~everything). Feel free to message about this.]
Thank you for the helpful announcement, and for all of your other work on this!
Nonlinear staff were participants on the FTX EA program, which I ran, and where I was in part responsible for participant welfare. Some of the important events took place in this period. This led me to start supporting Alice and Chloe. I have continued to be involved in the case on-and-off since then.
Agreed. I would have wanted the post itself to make this more clear.
You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines—what you’re betting on then, is when the world will realize that timelines are short, since that’s what it will take before many people choose to pull out of the market, and thus drive interest rates up. It is entirely possible to believe both that timelines are short, and that the world won’t realize AI is near for a while yet, in which case you wouldn’t do this.
This reasoning sounds pretty tortured to me.
First, should you really believe that the relatively small number of traders needed to move markets won’t come to think AI is a really big deal, given that you think AI is a really big deal?
Second, if “the world won’t realize AI is near for a while,” you can still make money by following analogous strategies to those described in the post. You don’t need the world to realize tomorrow.
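As a rough illustration of the second point (my own sketch, with made-up numbers, not from the original post): a short position in long-duration bonds profits whenever long-dated yields rise, regardless of whether that repricing happens tomorrow or years from now, so long as it happens before maturity.

```python
# Toy illustration (hypothetical numbers): the price of a zero-coupon bond
# falls when yields rise, so a short position profits from the repricing
# whenever it eventually occurs -- the world need not "realize" today.

def zero_coupon_price(face: float, years: float, annual_yield: float) -> float:
    """Present value of a zero-coupon bond under annual compounding."""
    return face / (1 + annual_yield) ** years

price_today = zero_coupon_price(100, 30, 0.04)  # assumed yield today: 4%
price_after = zero_coupon_price(100, 30, 0.06)  # assumed yield once markets reprice: 6%

# Profit per $100 face value on a short position, ignoring carry and fees.
profit = price_today - price_after
print(f"{price_today:.2f} -> {price_after:.2f}, profit {profit:.2f}")
```

The numbers here are purely illustrative; the qualitative point is only that the payoff depends on the repricing happening at some point before the position unwinds, not on its timing.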
Claire Zabel confirms in this comment that
Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”)
As one of the people Ben interviewed:
This post closely reflects my understanding of the situation. (EDIT: at this time, before engaging with Nonlinear reply myself.)
Whenever this post touches on something that I can independently corroborate (EDIT: small minority of claims), I believe it to be accurate. Whenever the post communicates something that both Ben and I have heard from Alice and Chloe (EDIT: large majority of claims), it tells their account faithfully.
I appreciate Ben’s emphasis on red lines and the experiences of Alice and Chloe. When he leaves out stories that I think we are both aware of, my guess is that he has done so because these stories aren’t super relevant to the case at hand or aren’t super objective/strongly evidenced. This makes me think more favourably of the rest of his write-up.