I think “outdated term” is a power move, trying to say you’re a “geek” to separate yourself from the “mops” and “sociopaths”. She could genuinely think, or be surrounded by people who think, 2nd wave or 3rd wave EA (i.e. us here on the forum in 2025) are lame, and that the real EA was some older thing that had died.
quinn
I roughly feel more comfortable passing the responsibility onto wiser successors. I still like the “positive vs negative longtermism” framework, I think positive longtermism (increasing the value of futures where we survive) risks value lock-in too much. Negative longtermism is a clear cut responsibility with no real downside unless you’re presented with a really tortured example about spending currently existing lives to buy future lives or something.
I distinguish believing that good successor criteria are brittle from speciesism. I think antispeciesism does not oblige me to accept literally any successor.
I do feel icky coalitioning with outright speciesists (who reject the possibility of a good successor in principle), but my goals and generalized flourishing overall benefit a lot from those coalitions, so I grin and bear it.
I wrote a quick take on lesswrong about evals. Funders seem enchanted with them, and I’m curious about why that is.
I love these kinds of questions! I attempted a roundup here but it never really caught on
nitpick: you say open source which implies I can read it and rebuild it on my machine. I can’t really “read” the weights in this way, I can run it on my machine but I can’t compile it without a berjillion chips. “open weight” is the preferred nomenclature, it fits the situation better.
(epistemic status: a pedantry battle, but this ship has sailed as I can see other commenters are saying open source rather than open weight).
And sorry, I’m not going to be embarrassed about trying to improve the world
You, my friend, are not sorry :)
In my mind, since EA premises are vague and generic, any criticism above a quality bar gets borg'd in. So no, I never saw an "external" criticism of EA that was any good—if it was good, then it was internal criticism, as far as I'm concerned.
It’s important to consider adverse selection. People who get hounded out of everywhere else are inexplicably* invited to a forecasting conference, of course they come! they have nowhere else to go!
* inexplicably, in the sense that a forecasting conference is inviting people specialized in demographics and genetics—it’s a little related, but not that related.
how much better is chatgpt than claude, in your experience? I feel like it wouldn’t be costly for me to drop down to free tier at openai but keep premium at anthropic, though I would miss the system prompt / custom gpt features. (I’m currently 20/month at both)
I loved Liu’s trilogy because it makes longtermism seem commonsensical.
Decoupling is uncorrelated with the left-right political divide.
Say more? How do we know this?
3 year update: I used to consider this 2 year update a good truncated version of the post, but it's actually too punchy, even superficial.
My opinion lately / these days is too confused and nuanced to write about.
thanks for the writeup! I had a ton of similar feelings for a while, alternating between finding people who say "it's not worth defending, it's just a meme" and people who say "actually, I'll defend using something like this".
At one point I was discussing this issue with Rob Miles at manifest, who told me something like “the default is a bool (some two valued variable)”, the idea being that if people are arguing over an interval then we could’ve done way worse.
While I think the fuzzies from cooperating with your vegan friends should be considered rewarding, I know what you mean—it’s not a satisfying moral handshake if it relies on a foundation of friendship!
I’m pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal suffering vegans by tabooing poultry in favor of beef. So a non-vegan can “meet halfway” on animal suffering by preferring beef over chicken.
Presumably, a similar moral handshake would work with climate vegans that just favors poultry over beef.
Is there a similar moral handshake between climate ameliorists (who eat a little chicken) and animal suffering vegans (who eat a little beef)?
Will @Austin’s ‘In defense of SBF’ have aged well? [resolves to poll]
Posting here because it’s an underrated post well worth reading, and the poll is currently active. The real reason I’m posting here is so that I can find the link later, since searching over Manifold’s post feature doesn’t really work, and searching over markets is unreliable.
Feel free to have discourse in the comments here.
Any good literature reviews of feed conversion ratio you guys recommend? I found myself frustrated that it’s measured in mass, I’d love a caloric version. The conversion would be straightforward given a nice dataset about what the animals are eating, I think? But I’d be prone to steep misunderstandings if it’s my first time looking at an animal agriculture dataset.
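To make the "conversion would be straightforward" claim concrete, here's a minimal sketch of going from a mass-based feed conversion ratio (FCR) to a caloric one, assuming you know the energy density of the feed and of the edible output. All the numbers in the example are illustrative placeholders, not real agricultural data.

```python
def caloric_fcr(mass_fcr: float, feed_kcal_per_kg: float, output_kcal_per_kg: float) -> float:
    """kcal of feed consumed per kcal of edible output.

    mass_fcr: kg of feed per kg of edible output (the usual published number).
    feed_kcal_per_kg: energy density of the feed mix.
    output_kcal_per_kg: energy density of the edible product (meat, eggs, etc.).
    """
    return mass_fcr * feed_kcal_per_kg / output_kcal_per_kg

# Hypothetical animal with a mass FCR of 2.0, fed at 3000 kcal/kg,
# producing edible output at 1500 kcal/kg:
print(caloric_fcr(2.0, 3000, 1500))  # -> 4.0 kcal feed per kcal output
```

The hard part, as noted, isn't the arithmetic but getting trustworthy energy densities for the actual feed mixes and edible fractions.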
I’m willing to bite the tasty bullets on caring about caloric output divided by brain mass, even if it recommends the opposite of what feed conversion ratios recommend. But lots of moral uncertainty / cooperative reasons to know in more detail how the climate-based agricultural reform people should be expected to interpret the status quo.
I was just sent this https://www.mzbworks.com/prayer.htm—really fantastic. TLDR luck/magic is real but only works on one thing. I normally think of luck like the compass in Pirates of the Caribbean (2003) (that points to what you want most), although unlike this essay, I normally think of it where the user can juggle multiple goals and the compass will adjust to that. Here, with the author’s notion of prayer, we can really only activate the power of luck on one thing. Perhaps “at a time”, perhaps not.
Just found it charming that jesus was like “oh you should’ve mentioned you beat easy mode already, i usually don’t get around to telling people there are different difficulty levels”. But the application to charitable living/giving is obvious. I literally recently said to myself “eh give yourself the beef cheat this month (i’m mostly vegetarian just open to cheating once or twice a month), you donated a kidney” which is of dubious validity, and yet, just might work (in some sense).
setting aside the fact that I almost literally did this personally, I thought this would resonate with some of yall who’ve thought about value drift.