Building Nonlinear.
Drew Spartz
Apply to >50 AI safety funders in one application with the Nonlinear Network [Round Closed]
A list of EA-relevant business books I’ve read
The $100,000 Truman Prize: Rewarding Anonymous EA Work
Happy to provide more context here.
Nonlinear is or used to be a project of Spartz Philanthropies. According to the IRS website, Spartz Philanthropies had its 501(c)(3) status revoked in 2021 since it had not filed the necessary paperwork for three years straight. Now the Nonlinear website no longer mentions Spartz Philanthropies, and I am unsure whether Nonlinear is a tax-exempt nonprofit or what legal status it has.
Nonlinear, Inc is a 501(c)(3). Spartz Philanthropies was an inactive entity that Emerson set up in 2018. We were initially planning to use it as the main entity for Nonlinear. We had filed an extension for the tax returns, but somehow the IRS missed that we had filed it, which led to the tax-exempt status being automatically revoked. Our accountant said we could appeal and would very likely win, since it was an error on their part, and we began the appeal, but appeals can take years. In the meantime, we were fiscally sponsored by Rethink Charity. The IRS was taking too long to respond to the appeal, so I set up a new entity.
Back in 2021, Nonlinear launched its AI safety fund with an announcement post which got some pushback/skepticism in the comments section. Does anyone know whether this fund has made any grants or seeded any new organisations? I have not managed to find any information about the Fund on the Nonlinear website.
I’ve actually been working on a more complete list of all the projects we’ve funded and incubated! But I’ve been very unproductive the last two months due to a combination of an extremely painful RSI and chronic nausea/gut issues. We changed our name from the Nonlinear Fund to Nonlinear. Kat made a basic list here: https://www.nonlinear.org/
I can see how you might think that, and thanks for sharing your thoughts.
My opinion is that the presumption of innocence is not just a legal principle, it is a foundational principle of most justice systems because one accusation can forever ruin someone’s reputation whether or not they are proven innocent in the future.
Accusations can draw a lot of attention, but retractions receive far less attention.
I believe it’s very important to be careful about damaging someone’s reputation before hearing both sides, because it’s really hard to repair.
Additionally, it’s much harder to prove accusations wrong than it is to anonymously make them in the first place, so most cultures have immune reactions against anonymous accusations.
It’s also just bad epistemics to only hear one side. Every side always thinks they’re in the right, so if you only hear one side, it’s practically impossible to have good epistemics.
I think you’re missing a few billionaires in your 5-6 number.
Jed McCaleb (founder of Ripple) is a funder of SFF.
There are many wealthy crypto people who have either donated to EA causes or are heavily involved, and whose net worths are illiquid or fluctuate heavily with the crypto markets. I would guess 5-10 of them were billionaires at some point but likely have high-nine-figure net worths now.
Also, do you count people who sympathize with EA ideas as EAs? Fred Ehrsam and Brian Armstrong have both written positively about EA in the past. I have seen a handful of crypto hedge fund managers with 9-10 figure net worths talk about LessWrong on Twitter, and a few talk about EA.
You can’t really use the S&P 500 to predict these guys’ net worths either.
I remember seeing a BOTEC estimating that if there is another crypto bull market and Bitcoin hits $200k, half of all new billionaires in the world will be due to crypto.
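For illustration only, here is a minimal sketch of how a BOTEC like that might be structured. Every figure below is a made-up assumption chosen to show the shape of the estimate, not a number from the BOTEC referenced above:

```python
# Toy BOTEC sketch -- all figures are illustrative assumptions.
btc_price_now = 40_000            # assumed current BTC price, USD
btc_price_bull = 200_000          # hypothetical bull-market BTC price
run_up = btc_price_bull / btc_price_now  # ~5x, assumed to apply across crypto

near_billionaires_in_crypto = 400   # assumed: people holding ~$200M-$1B in crypto
crossing_fraction = 0.5             # assumed: fraction pushed over $1B by a ~5x run-up
new_crypto_billionaires = near_billionaires_in_crypto * crossing_fraction

new_billionaires_elsewhere = 200    # assumed: non-crypto new billionaires, same period

share_from_crypto = new_crypto_billionaires / (
    new_crypto_billionaires + new_billionaires_elsewhere
)
print(f"{share_from_crypto:.0%}")  # prints 50% under these assumptions
```

The point of the sketch is only that the conclusion is extremely sensitive to the assumed crypto run-up and the number of near-billionaires, which is why such estimates swing so widely.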
Hi Aman,
Appreciate the question. We’ve received funding from different sources like the Survival and Flourishing Fund, Future Fund, and other private donors, with Emerson Spartz donating six figures annually.
This project would not fall under the scope of what the Future Fund granted us, so we will not be using their funding for this.
This is coming directly out of our operating budget, so we’re aiming to make payouts that have a higher counterfactual likelihood of impact.
Just wanted to say (without commenting on the points in the dialogue) that I appreciate you and Robert having this discussion, and I think the fact you’re having it is an example of good epistemics.
[Question] What are examples of EA orgs pivoting after receiving funding?
Hi! Thanks for the questions.
I remember hearing that Emerson/Nonlinear invested quite a lot into crypto—presumably with the current markets, his/Nonlinear’s portfolio must’ve taken a hit?
Yes, most crypto people have taken a hit, including Emerson. As far as I know, he has no plans to slow down his donations to Nonlinear.
Secondly, Nonlinear received a Future Fund grant: https://ftxfuturefund.org/our-grants/?_search=nonlinear Are you potentially concerned about clawbacks of the money you hand out, especially if you’re disbursing small amounts to several people who could then be affected?
We’re not using Future Fund grant money for this. That being said, we are still gathering information, but based on our conversations with lawyers and distressed debt investors, we are not as concerned as some community members are about clawbacks, especially for very small grants. Our view may update as more information comes out.
Also, will additional funders top you up, or will the money go directly to the people affected?
We have had several funders reach out to us. Still working out the details :)
Ryan, glad you liked the prize, and thanks for your feedback! Our partner has significant IP law and branding experience and does not share your concerns.
His perspective, in general:
Celebrating our ancestors is common practice. Long-dead famous people frequently get things named after them.
Negative outcomes are unlikely. What you’re proposing could happen, but is quite an edge case.
Branding is important. A better-named prize can lead to more impact and improved community health.
So why are we calling this “The Truman Prize” instead of something like “The Anonymous EA Award”?
There’s a reason why inspiring people from the past get things named after them. Could write a whole post on our thinking around this, but let’s just say we think having a community health prize with a more inspiring name would be more effective and lead to more impact.
Spending a lot of time preventing low-probability, low-downside possibilities is low EV.
Things like this usually end up being really bureaucratic and could take months or years to approve, so the cost is higher than a simple quick email. Following this approach for every low-probability, low-downside risk would make it prohibitively hard to get things done.
It’s low probability because a descendant of Truman would have to:
Actually learn of this prize, which is unlikely.
Not feel that we are honoring Truman by celebrating anonymous altruism, which seems unlikely.
Care enough to actually ask us to change the name, which is also unlikely.
And in the unlikely event that all three of those things happen, then we’ll just change the name. Which is also low cost.
Highly recommend trying out the Topgrading interview from the book Who.
You go through a candidate’s entire work history in chronological order, from start to finish, and for each job ask these questions:
What were you hired to do?
What accomplishments are you most proud of?
How did your performance compare to the previous year’s performance? (For example, this person achieved sales of $2 million and the previous year’s sales were only $150,000.)
How did your performance compare to the plan? (For example, this person sold $2 million and the plan was $1.2 million.)
How did your performance compare to that of peers?
What were some low points during that job?
Who were the people you worked with? Specifically:
What was your boss’s name, and how do you spell that? What was it like working with him/her? What will they tell me were your biggest strengths and areas for improvement?
Why did you leave that job?
It’s just a good, natural, flowing way of understanding what people actually did, instead of giving them hypothetical future scenarios about what they might do.
It’s also an efficient way of finding red flags 🚩 like:
Candidate does not mention past failures.
Candidate exaggerates his or her answers.
Candidate takes credit for the work of others.
Candidate speaks poorly of past bosses.
Candidate cannot explain job moves.
People most important to candidate are unsupportive of change.
Candidate seems more interested in compensation and benefits than in the job itself.
Candidate tries too hard to look like an expert.
Candidate is self-absorbed.
Winning too much
Telling the world how smart we are
Passing the buck
Making excuses
Getting legal counsel’s advice was always the intended internal procedure. Thanks for pointing out that the way I worded it could be misconstrued, so I’ve updated it again :)
The “bureaucrat’s curse” reminds me of Vitalik’s bulldozer vs vetocracy political axis: https://vitalik.ca/general/2021/12/19/bullveto.html
Vetocracy can be beneficial if a system’s strength depends on it not changing. For example, people invest in Bitcoin because it’s incredibly difficult to change its monetary policy; Bitcoin doesn’t need to innovate.
But if Ethereum is too vetocratic and fails to innovate, it could get outcompeted by more nimble competitors like Solana or Avalanche.
The current mood in the AI Safety community appears to be pessimistic. For example, Eliezer bet Bryan Caplan (2-1 odds) that humans will be extinct by Jan 1, 2030.
If you believe that inaction will lead to extinction, reducing vetoes and increasing the variance of outcomes could increase the probability we’ll survive.
As Scott Alexander says,
Healthy people are fragile (increased variance can mostly make them worse), very sick people are antifragile (increased variance can mostly make them better). So it is reasonable to give a terminal cancer patient an experimental drug—the worst that happens is they die (which would happen anyway) and the best that happens is they recover—it’s all upside and no downside.
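The variance argument in the quote can be illustrated with a toy simulation (all parameters here are hypothetical, chosen only to show the effect): when the baseline outcome sits well below the survival threshold, widening the spread of outcomes raises the chance of clearing it.

```python
import random

random.seed(0)

def survival_prob(baseline=0.1, extra_sigma=0.0, threshold=0.3, trials=100_000):
    """Toy model: outcomes are drawn around a poor baseline; a high-variance
    intervention widens the spread symmetrically. 'Survival' means the
    outcome clears a fixed threshold. All parameters are illustrative."""
    survive = 0
    for _ in range(trials):
        outcome = baseline + random.gauss(0, 0.1 + extra_sigma)
        if outcome > threshold:
            survive += 1
    return survive / trials

low_variance = survival_prob(extra_sigma=0.0)   # narrow spread around a bad baseline
high_variance = survival_prob(extra_sigma=0.2)  # widened spread, same baseline
assert high_variance > low_variance  # more variance helps when the baseline is bad
```

The same logic reverses when the baseline is already above the threshold, which is exactly the healthy-patient case in the quote: widening the spread then mostly creates ways to fall below it.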
This is super helpful, thanks!
Thanks for doing this!
I also have x-risks.com and may or may not use it, so feel free to DM me if you have a creative use case for it.
Appreciate the comments!
My personal context: I joined Nonlinear full-time in April 2022. We’ve gone back and forth between being AI safety-focused and being more generally x-risk-focused. We removed “Fund” from our name because we didn’t just want to fund projects but also to launch relevant ones ourselves, like the Nonlinear Library.
Thanks for the flag! Was already aware of this but added a qualifier in the post above and added this to our org payout guidelines for extra redundancy.
Appreciate you saying this, Michel. As you can imagine, it’s been rough. Perhaps accidentally, this post often seems to lump me in with situations I wasn’t really a part of.