Some thoughts:
Vibewise, I find this unsurprising. I am not utterly shocked by any of the above and am not particularly surprised by much of it. Nonlinear does have a "move fast and break things" approach, and I imagine they have patterns of behaviour that have predictably hurt people in the past. As evidence of this, I made a market about it 8 months ago.
I like the Nonlinear team personally and guess they do good/interesting work. I thought Superlinear was a good test case of bounty-based funding. I also use the Nonlinear Library and find it valuable. I am confident that Kat, Emerson and Drew have good intentions on a deep level.
The above statements can both be true.
Thanks, Ben, for doing this; I think it was brave and good.
I particularly like this advice: “Going forward I think anyone who works with Kat Woods, Emerson Spartz, or Drew Spartz, should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment by the above people at any non-profits they run should take salaries where money is wired to their bank accounts, and not do unpaid work or work that is compensated by ways that don’t primarily include a salary being wired to their bank accounts.”
I think that EA is a very high-trust ecosystem, and I guess maybe Nonlinear shouldn’t be given that trust. But after reading the above, it’s up to you. I might advise the median EA not to work for them, and I’d advise Nonlinear not to hire anyone who isn’t pretty hard-nosed, though seemingly Kat has said she would change things anyway.
I am pretty curious about @spencerg’s statement that a recent draft of this contained many simple errors. That seems notable.
I think the key question here is: “Will such events happen again, and what is the acceptable threshold for that chance?”
If Nonlinear were welcomed unambiguously back into the EA fold, I don’t think I’d be that confident that there wouldn’t be more stories like this in the next year. Maybe a 20% chance that there would be.
I guess most people think that is above the acceptable tolerance.
It’s above my tolerance too. That chance seems much higher than for any other org I can think of. Notably, it seems very avoidable.
I guess I do think things trade off against one another, and maybe I’d like a way for us to say “this org is really effective, but we don’t recommend most people work for them”. This is the sort of stance many have towards non-safety AI work as a means to upskill.
Rather than seeing this as punishment, it can be seen as acceptable boundary-setting: communities are allowed to decide who is given status and who isn’t. This action will lower Nonlinear’s status, and as a group we can choose to do that. Generally I think in terms of bad/unacceptable behaviour rather than bad people, and I think a community gets to set the level of predictable bad behaviour it is exposed to.
What could Nonlinear do to convince me that they deserve the same levels of easy trust as other EA orgs? [1]
Provide evidence that this article is deeply flawed
Not threaten to sue Lightcone for publishing it
Acknowledge the patterns of behaviour that led to these outcomes and explain how they have changed; this would probably involve acknowledging that many of these events took place
They could also just decide that they don’t want this: they want to work in a different way to how EA orgs tend to, and that’s fine-adjacent, but then I would recommend they be treated differently too. I sometimes wonder whether it’s good to have, say, rationalism as a space willing to deal with less standard norms.
In my opinion, too much post-FTX discussion has focused on the behaviour of individual EAs and too little on the behaviour of powerful EAs. I think the median EA trying to avoid downside risk is overrated: as one’s influence grows, the harm one does scales, and it becomes better value to try to limit the bad and increase the good. I am much more interested in a discussion of what Nonlinear should or shouldn’t do than in whether Catherine Richardson from Putney is spending too much money on rent.
Note again that I don’t think they should close down; I’m just not sure they should be invited to present at EAGs, and I’d be happy for this to sit on a forum wiki page about them.