In my mind, since EA's premises are vague and generic, any criticism above a quality bar gets borg'd in. So no, I didn't ever see an "external" criticism of EA be any good—if it was good, then it'd be internal criticism, as far as I'm concerned.
quinn
It's important to consider adverse selection. People who get hounded out of everywhere else are inexplicably* invited to a forecasting conference; of course they come! They have nowhere else to go!
* Inexplicably, in the sense that a forecasting conference is inviting people specialized in demographics and genetics—it's a little related, but not that related.
how much better is chatgpt than claude, in your experience? I feel like it wouldn't be costly for me to drop down to free tier at openai but keep premium at anthropic, though I would miss the system prompt / custom gpt features. (I'm currently $20/month at both.)
I loved Liu’s trilogy because it makes longtermism seem commonsensical.
Decoupling is uncorrelated with the left-right political divide.
Say more? How do we know this?
3-year update: I consider this 2-year update to be a truncated version of the post, but it's too punchy and even superficial.
My opinion these days is too confused and nuanced to write about.
thanks for the writeup! I had a ton of similar feelings for a while, alternating between people who say "it's not worth defending, it's just a meme" and people who say "actually, I'll defend using something like this".
At one point I was discussing this issue with Rob Miles at manifest, who told me something like “the default is a bool (some two valued variable)”, the idea being that if people are arguing over an interval then we could’ve done way worse.
While I think the fuzzies from cooperating with your vegan friends should be considered rewarding, I know what you mean—it’s not a satisfying moral handshake if it relies on a foundation of friendship!
I'm pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal-suffering vegans by tabooing poultry in favor of beef. So a non-vegan can "meet halfway" on animal suffering by preferring beef over chicken.
Presumably, a similar moral handshake would work with climate vegans, one that favors poultry over beef.
Is there a similar moral handshake between climate ameliorators (who have a little chicken) and animal-suffering vegans (who have a little beef)?
Will @Austin’s ‘In defense of SBF’ have aged well? [resolves to poll]
Posting here because it's an underrated post well worth reading, and the poll is currently active. The real reason I'm posting here is so that I can find the link later, since searching over Manifold's post feature doesn't really work, and searching over markets is unreliable.
Feel free to have discourse in the comments here.
Any good literature reviews of feed conversion ratio you guys recommend? I found myself frustrated that it's measured in mass; I'd love a caloric version. The conversion would be straightforward given a nice dataset about what the animals are eating, I think? But I'd be prone to steep misunderstandings if it's my first time looking at an animal agriculture dataset.
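The conversion I have in mind would be something like this: scale the mass-based FCR by the energy density of the feed over the energy density of the edible product. A minimal sketch, where all the numbers are illustrative placeholders rather than sourced figures, and the function name is my own invention:

```python
# Hedged sketch: turning a mass-based feed conversion ratio (FCR)
# into a caloric one. Placeholder numbers only, not sourced data.

def caloric_fcr(mass_fcr, feed_kcal_per_kg, product_kcal_per_kg):
    """Feed calories consumed per calorie of edible output.

    mass_fcr: kg of feed per kg of output. Note the dataset's
    convention matters a lot here (live weight vs. edible product).
    """
    return mass_fcr * feed_kcal_per_kg / product_kcal_per_kg

# Illustrative: a broiler-like mass FCR of ~1.8, feed at ~3500 kcal/kg,
# edible meat at ~1650 kcal/kg.
print(caloric_fcr(1.8, 3500, 1650))  # ~3.8 feed kcal per meat kcal
```

The main trap, as far as I can tell, is exactly the convention issue flagged in the docstring: whether the published mass FCR denominates output in live weight, carcass weight, or edible meat changes the answer substantially.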
I'm willing to bite the tasty bullets on caring about caloric output divided by brain mass, even if it recommends the opposite of what feed conversion ratios recommend. But there are lots of moral uncertainty / cooperative reasons to know in more detail how the climate-based agricultural reform people should be expected to interpret the status quo.
It seems like a super quick habit-formation trick for a bunch of socioepistemic gains is just saying “that seems overconfident”. The old Sequences/Methods version is “just what do you think you know, and how do you think you know it?”
A friend was recently upset about his epistemic environment: he didn't feel that the people around him were able to reason, and he didn't feel comfortable defecting from their echo chamber. I found it odd that he felt he was the overconfident one for doubting the reams of overconfident people around him! So I told him: start small, try just asking people if they're really as confident as they sound.
In my experience, it's a gentle nudge that helps people be better versions of themselves. Tho I said "it seems" cuz I don't know how many different communities it would work reliably in—the case here is someone almost 30 at a nice college with very few grad students in an isolated town.
Can you write about cross pollination between technical safety and AI governance and policy? In the case of the new governance mechanisms role (zeroing in on proof of learning and other monitoring schemas), it seems like bridging or straddling the two teams is important.
Yes. In general I’d love to see him try harder to write out tenure overhauls. He seems to stay on a popsci / blogger register instead of designing a new protocol / incentive scheme and shopping it around at metascience conferences.
I’m pro dismissiveness of sneer/dunk culture (most of the time it comes up), but I think the CoI thing about openphil correlated investments/board seats/marriage is a very reasonable thing to say and is not sneer/dunk material. I get the sense from what’s been written publicly that openphil has tried their best to not be manipulating horizon fellows into some parochial/selfish gains for sr openphil staff, but I don’t think that people who are less trusting than me about this are inherently acting in bad faith.
In an "isolated demand for rigor" sense it may turn back into opportunism or sneer/dunk; I kinda doubt that any industry could do anything ever without a little corruption, or a considerable risk of corruption, especially new industries. (I.e., my 70% hunch is that if an honest attempt was made to learn about the reference class of corporation and foundation partnerships wining and dining people on the hill and consulting on legislation, these risks from horizon in particular would not look unusually dicey. I'd love for someone to update me in either direction.) But we already know that we can't trust rhetorical strategies in environments like this.
Yeah—there’s an old marxist take that looks like “religion is a fake proxy battle for eminent domain and ‘separate but equal’ style segregation” that I always found compelling. I can’t imagine it’s 100% true, but Yovel implies that it’s 100% false.
Thanks guys! I support what you were saying about application timeliness, expectation management, etc. This seems like a super reasonable set of norms.
I overall thought you crushed it, no notes, etc. My literal only grievance was that there were a bajillion forlorn nametags on the table for people I specifically had important world-saving business to check in with, so the no-shows definitely lowered my productivity at the conf. I ended up being surprised by great discussions that popped up in spite of not having the ones I had hoped for, so I'm not complaining.
I was one of the "not supposed to be on that coast that weekend" people who had a bunch of stuff fall apart / come together at the last minute; it literally was the wednesday or thursday of the week itself that I was confident I would not be tied up in california. So I'm wondering: should I have applied on time with an annotated application saying "I'm 90% sure I can't make it, but it'll be easier for me to be in the system already if we end up in the 10% world"? At this point it becomes important for me to update the team so I don't impose costs like the forlorn nametags or anything else discussed in the post, but those updates themselves increase the overall volume of comms / things to keep track of for both me and the staff, which is also a cost.
Even if my particular case is too extreme and unusual to apply to others, I hope norms or habits get formed in the territory of “trying to be thoughtful at all” cuz it sounds like we’re at the stage where we only have to be directionally correct.
I’m inferring that the kind of reasons Kurt had for not talking about this as much (in, say, earshot of me) before now are the exact same kind of reasons people are overall intimidated / feel it’s too scary to not use burner accounts for things.
You, my friend, are not sorry :)