Co-founded Nonlinear.org, an x-risk incubator. Also into web3, history, rapid learning, complex systems.
DM me on Twitter for a faster response: www.twitter.com/emersonspartz
Thanks for the feedback. I tried to do both. I think the doomerism levels are so intense right now that they need to be balanced out with a bit of inspiration.
I worry that the doomer levels are so high that EAs will be frozen into inaction and non-EAs will take over from here. This is the default outcome, I think.
Going to say something seemingly-unpopular in a tone that usually gets downvoted but I think needs to be said anyway:
This stat is why I still have hope: 100,000 capabilities researchers vs 300 alignment researchers.
Humanity has not tried to solve alignment yet.
There’s no cavalry coming—we are the cavalry.
I am sympathetic to fears that a new alignment researcher could be net negative, and I think plausibly the entire field has, so far, been net negative, but guys, there are 100,000 capabilities researchers now! One more is a drop in the bucket.
If you’re still on the sidelines, go post that idea that’s been gathering dust in your Google Docs for the last six months. Go fill out that fundraising application.
We’ve had enough fire alarms. It’s time to act.
We built an EA bounty platform and have paid out a few dozen bounties!
As a mid-career EA, I strongly agree with this.
Great idea! Would love to help you with this. I'm an entrepreneur, a history nerd (1,000+ books), and very interested in AI governance.
Let me know: emersonspartz@nonlinear.org or Twitter DM @EmersonSpartz
They reached out to me. Most (all?) of them saw me speak somewhere else.
Thank you!
Thanks for sharing! If I were only interested in subscribing to 1-4, which would you recommend?
Lots of potential here! I've given 6 TEDx talks and would love to help anyone pursuing this; feel free to reach out.
I agree, and I'll use this opportunity to re-share some tips for increasing readability. I used to manage teams of writers/editors; here are some ideas we found useful:
To remove fluff, imagine someone is paying you $1,000 for every word you remove. Our writers typically could cut 20-50% with minimal loss of information.
Long sentences are hard to read, so try to change your commas into periods.
Long paragraphs are hard to read, so try to break each paragraph into 2-3 sentences.
Most people just skim, and some of your ideas are much more important than others, so bold/italicize your important points.
Agreed—we reached out to some people in the group to see if they wanted to add their listings to the sheet!
We’re trying to keep it super informal for now :)
Light-touch curation, yes—we'd certainly appreciate a heads-up on anything like this!
That depends on the funders! Given enough bounties, I'd expect an optimal bounty distribution to look power-law-ish, with a few big bounties ($10k–$1m?) and many small ones (<$10k).
I didn’t think about it much—public might be better. I assumed some people would be hesitant to share publicly and I’d get more submissions if private, but I’m not sure if that offsets the creative stimulus of sharing publicly.
I’d guess 80% chance at least one gets funded by Feb 2022.
I want to fund every idea that is good enough and then figure out how to scale the bounty market-making process 100x.
2 from this particular experiment, but I intend to do more experiments like this.
Sanjay, I just realized you were the top comment, and now I notice that I feel confused, because your comment directly inspired me to express my views in a tone that was more opinionated and less hedged.
I appreciate (no, I *love*) EA's truth-seeking culture, but I wish it were more OK to add a bit of Gryffindor to balance out the Ravenclaw.