I roughly feel more comfortable passing the responsibility on to wiser successors. I still like the “positive vs. negative longtermism” framework, but I think positive longtermism (increasing the value of futures where we survive) risks value lock-in too much. Negative longtermism is a clear-cut responsibility with no real downside, unless you’re presented with a really tortured example about spending currently existing lives to buy future lives or something.
I distinguish believing that good successor criteria are brittle from speciesism. I think antispeciesism does not oblige me to accept literally any successor.
I do feel icky coalitioning with outright speciesists (who reject the possibility of a good successor in principle), but my goals, and generalized flourishing overall, benefit a lot from those coalitions, so I grin and bear it.
I wrote a quick take on lesswrong about evals. Funders seem enchanted with them, and I’m curious about why that is.
I love these kinds of questions! I attempted a roundup here but it never really caught on
nitpick: you say “open source”, which implies I can read it and rebuild it on my machine. I can’t really “read” the weights in this way; I can run it on my machine, but I can’t compile it without a berjillion chips. “Open weight” is the preferred nomenclature; it fits the situation better.
(epistemic status: a pedantry battle, but this ship has sailed as I can see other commenters are saying open source rather than open weight).
And sorry, I’m not going to be embarrassed about trying to improve the world
You, my friend, are not sorry :)
In my mind, since EA premises are vague and generic, any criticism above a quality bar gets borg’d in. So no, I didn’t ever see an “external” criticism of EA be any good—if it was good, then it’d be internal criticism, as far as I’m concerned.
It’s important to consider adverse selection. People who get hounded out of everywhere else are inexplicably* invited to a forecasting conference—of course they come! They have nowhere else to go!
* inexplicably, in the sense that a forecasting conference is inviting people specialized in demographics and genetics—it’s a little related, but not that related.
how much better is chatgpt than claude, in your experience? I feel like it wouldn’t be costly for me to drop down to free tier at openai but keep premium at anthropic, though I would miss the system prompt / custom gpt features. (I’m currently 20/month at both)
I loved Liu’s trilogy because it makes longtermism seem commonsensical.
Decoupling is uncorrelated with the left-right political divide.
Say more? How do we know this?
3-year update: I consider the 2-year update above to be a truncated version of the post, but it’s too punchy, even superficial.
My opinion these days is too confused and nuanced to write about.
thanks for the writeup! I had a ton of similar feelings for a while, oscillating between people who say “it’s not worth defending, it’s just a meme” and people who say “actually, I’ll defend using something like this”.
At one point I was discussing this issue with Rob Miles at Manifest, who told me something like “the default is a bool (some two-valued variable)”, the idea being that if people are arguing over an interval, we could’ve done way worse.
While I think the fuzzies from cooperating with your vegan friends should be considered rewarding, I know what you mean—it’s not a satisfying moral handshake if it relies on a foundation of friendship!
I’m pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal-suffering vegans by tabooing poultry in favor of beef. So a non-vegan can “meet halfway” on animal suffering by preferring beef over chicken.
Presumably, a similar moral handshake would work with climate vegans, one that instead favors poultry over beef.
Is there a similar moral handshake between climate ameliorators (who allow themselves a little chicken) and animal-suffering folks (who allow themselves a little beef)?
Will @Austin’s ‘In defense of SBF’ have aged well? [resolves to poll]
Posting here because it’s an underrated post that’s well worth reading, and the poll is currently active. The real reason I’m posting here is so that I can find the link later, since searching over Manifold’s post feature doesn’t really work, and searching over markets is unreliable.
Feel free to have discourse in the comments here.
Any good literature reviews of feed conversion ratio you guys recommend? I found myself frustrated that it’s measured in mass, I’d love a caloric version. The conversion would be straightforward given a nice dataset about what the animals are eating, I think? But I’d be prone to steep misunderstandings if it’s my first time looking at an animal agriculture dataset.
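The mass-to-calorie conversion I have in mind could be sketched like this (all numbers below are placeholders I made up for illustration, not real feed data):

```python
# Hedged sketch: turning a mass-based feed conversion ratio (FCR)
# into a caloric one. Energy densities here are made-up placeholders,
# not sourced figures from any animal agriculture dataset.

def caloric_fcr(fcr_mass: float, feed_kcal_per_kg: float, output_kcal_per_kg: float) -> float:
    """kcal of feed required per kcal of edible output.

    fcr_mass: kg of feed per kg of output (the usual mass-based FCR).
    """
    return fcr_mass * feed_kcal_per_kg / output_kcal_per_kg

# Illustrative placeholder inputs (assumptions, not data):
broiler = caloric_fcr(fcr_mass=1.8, feed_kcal_per_kg=3000, output_kcal_per_kg=2000)
cattle = caloric_fcr(fcr_mass=6.0, feed_kcal_per_kg=2500, output_kcal_per_kg=2500)
```

The point is just that the caloric version is one multiplication away from the mass version, so the hard part is the dataset of feed compositions, not the arithmetic.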
I’m willing to bite the tasty bullets on caring about caloric output divided by brain mass, even if it recommends the opposite of what feed conversion ratios recommend. But lots of moral uncertainty / cooperative reasons to know in more detail how the climate-based agricultural reform people should be expected to interpret the status quo.
It seems like a super quick habit-formation trick for a bunch of socioepistemic gains is just saying “that seems overconfident”. The old Sequences/Methods version is “just what do you think you know, and how do you think you know it?”
A friend was recently upset about his epistemic environment, like he didn’t feel like people around him were able to reason and he didn’t feel comfortable defecting on their echo chamber. I found it odd that he said he felt like he was the overconfident one for doubting the reams of overconfident people around him! So I told him, start small, try just asking people if they’re really as confident as they sound.
In my experience, it’s a gentle nudge that helps people be better versions of themselves. Tho I said “it seems” cuz I don’t know how many different communities it would work reliably in: the case here is someone almost 30 at a nice college with very few grad students in an isolated town.
I think “outdated term” is a power move, trying to say you’re a “geek” to separate yourself from the “mops” and “sociopaths”. She could genuinely think, or be surrounded by people who think, 2nd wave or 3rd wave EA (i.e. us here on the forum in 2025) are lame, and that the real EA was some older thing that had died.