How come this was only posted with five days’ notice?
Randomized, Controlled
Strong vote of no-confidence in the combo of (the author(s), at this time, with this strategy), based on this post.
This post does not suggest to me that the author(s) have a deep, nuanced understanding of the trade-offs. The strategy of rallying grass-roots support might make sense, but it could also cause a lot of harm and be counterproductive in many, many ways.
Oh, that’s interesting. Did you folks come up with that methodology?
Oh, also:
I was confused by references to amputation until I understood that amputated tentacles can act autonomously for some amount of time. A brief, direct description of this would be useful.
Your 0.025 and 0.035 are extremely specific; it would be interesting to get a brief description of how you ended up with those numbers without having to delve into the full report.
Thanks for this report! I 100% agree with Ben Stewart that this is really, really cool. One minor gripe, though: I do wish this had been edited for clarity of language. Even by EA Forum standards, the prose here is about as twisty as a pissed-off octopus’s tentacles.
yeah, in particular, maybe we’re in a short-timelines world and savings rates are [much??] less important. Personally, I’m stressing less about savings rate currently, partly because my life has shifted in significant ways that just make things more expensive, and partly because I’m taking short timelines somewhat more seriously.
Maybe one way I can summarize the update is: still avoid doing things that I would generally tend to disapprove of in longer-timeline worlds, but also try to enjoy life a little more.
So, I mostly still don’t eat junk food. But I am going on dates and dancing and going to festivals and trying to channel joy while living in an expensive western city and thinking a lot about <15 year timelines.
It’s occurred to me that maybe I should start rolling a die at some low probability (say 2%) and letting that decide for me when I should actually have dessert.
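For concreteness, a minimal sketch of what that randomizer could look like; the ~2% figure is the only number from the comment above, and the names and use of Python’s `random` module are just my illustration:

```python
import random

DESSERT_PROBABILITY = 0.02  # the ~2% chance mentioned above


def should_have_dessert(p: float = DESSERT_PROBABILITY) -> bool:
    """Delegate the dessert decision to chance: True with probability p."""
    return random.random() < p


if __name__ == "__main__":
    print("Dessert tonight!" if should_have_dessert() else "Not tonight.")
```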
- Apply for funding ASAP.
- Do not burn too much of your savings.
- Read about Financial Independence (google FIRE / Financial Independence Retire Early; rough arithmetic sketched after this list). I heard somewhere that you have about 78,657 hours in your career; if you have a wealth engine that can cover your basic living expenses, then you can devote a much larger fraction of that career to risky EA moves.
- Even if you’re good at self-study, you probably need a social cohort for what you’re doing, especially if your goal is vague.
- Set yourself a hard deadline to return to the labor market (I’d suggest four to six months, definitely less than 12) if you haven’t made some substantive progress that someone is excited about.
- DO STUFF THAT ISN’T JUST EA OR TECHNICAL. You’ve just opened up a massive amount of slack for yourself; take advantage of it to explore some other aspects of life. I started dancing contact improv during the year I was off, and it was BY FAR the most positive thing I’ve ever done for my mental and physical health and ability to access joy.
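A minimal sketch of the standard FIRE arithmetic, in case it’s useful; the 4% safe-withdrawal rate, the 5% real return, and the example dollar figures below are common rules of thumb I’m assuming here, not anything from the original advice:

```python
# Standard FIRE back-of-envelope: at a ~4% safe withdrawal rate, the nest egg
# needed to cover expenses indefinitely is ~25x annual spending.

SAFE_WITHDRAWAL_RATE = 0.04  # assumed rule of thumb, not a guarantee


def fire_number(annual_expenses: float) -> float:
    """Nest egg at which withdrawing SAFE_WITHDRAWAL_RATE covers expenses."""
    return annual_expenses / SAFE_WITHDRAWAL_RATE


def years_to_fi(annual_savings: float, annual_expenses: float,
                current_savings: float = 0.0, real_return: float = 0.05) -> int:
    """Crude year-by-year count of how long until savings reach the FIRE number."""
    target = fire_number(annual_expenses)
    balance, years = current_savings, 0
    while balance < target and years < 100:
        balance = balance * (1 + real_return) + annual_savings
        years += 1
    return years


# Example (assumed numbers): $30k/yr expenses, saving $20k/yr from zero
# -> needs a $750k nest egg, reached in roughly 22 years.
print(fire_number(30_000), years_to_fi(20_000, 30_000))
```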
Take all this with rock salt: these are just my experiences, and I did this in a very different EA scene and a very different economy.
I think this is one of the things that distinguishes EAs and rationalists from randomly selected smart people. I like to say that EAs have a taste for biting bullets.
Oh, I wasn’t implying any link between worlds 335 and 281; I was just riffing off the idea of sentient and/or symbiotic fungi. I actually think tying them together in the main body of the post confuses things.
I am the symbiotic sentient lichen responsible for https://worldbuild.ai/W-0000000335/.
Please DM if you’d like to discuss the possibility of having one of my moieties colonize your lungs or other moist crevasses.
Location: Toronto, Canada
Remote: Yep
Willing to relocate: Maybe!
Skills: web programming (9 years), agile development, writing, illustration, research
Résumé/CV/LinkedIn: https://www.linkedin.com/in/l-koren-25893152/ (LinkedIn is out of date; résumé available on request)
Email: liav dot koren gmail
I feel like this post makes concrete some of the tensions I was more abstractly pointing at in A Keynesian/Hayekian model of community building.
I can see “Republican” becoming its own cluster in the last couple of decades, but what cleanly distinguishes small-c conservative from libertarian? E.g., I definitely would not call Cowen a Republican, but I get the sense he might be somewhat conservative in how he thinks about development, economics, and institutions.
Don’t know if you want to include “podcast conversations” in your set here, but if you do:
Russ Roberts is fairly conservative, and also seemed quite thoughtful, with good epistemology, back when I was listening to EconTalk regularly (which hasn’t been for a few years). He had a conversation with Bostrom about AGI, which I thought went terribly (no good, bad bad bad). He also had a conversation with MacAskill, which I don’t remember as well, but my general sense is that it also didn’t go super well. Maybe worth a re-listen. He’s probably talked with some other major figures if you go digging in the archives; there have been a lot of development economists on the show, some of whom are probably important to EA research.
>There aren’t any prominent conservative EAs (or at least none that I’ve heard of).
I feel like Tyler Cowen is reasonably libertarian/right of centre. I don’t know if he would call himself an EA, but he has an account on the Forum under his full name. I feel like he’s pretty well known, at least in these circles.
A Keynesian/Hayekian model of community building
Thank you for the snippets.
EAG was, by the end, very emotional for me. I found some of my personal failures being juxtaposed with some of my civilization’s failings. I was put in very direct touch with the yearning at my core. I talked with people who I like and respect and feel wary around. Some of them are spooked and worried about the shape of things to come. I felt my own anxieties about my place in the world and my value rear up. It was fun and challenging and exhausting.
In 2017 I quit my job and spent a significant amount of time self-studying ML, roughly following a curriculum that Dario Amodei laid out in an 80k podcast. I ran this plan past a few different people, including in an 80k career advising session, but after a year I didn’t get a job offer from any of the AI Safety orgs I’d applied to (Ought, OpenAI, maybe a couple of others) and was quite burned out and demotivated. I didn’t even feel up to interviewing for an ML-focused job. Instead I went back to web development (albeit with a startup that suggested I’d eventually be able to do some ML work; that job ultimately wasn’t a great fit, and I moved on to my current role… as a senior web dev).
I think there are a bunch of lessons I learned from this exercise, but overall I consider it one of my failures.
For those who have downvoted or disagreed: I’m happy to hear (and potentially engage with) substantive counterarguments. But I don’t think the Forum is a good place for posturing, which the original post sometimes descends into.