I’m glad you’re alive. I wasn’t sure what happened to you, and was worried.
DC
This post is mostly noise: it's a basic point going back over a decade, and you do nothing to elaborate it or incorporate objections to naive utilitarianism. There is prior literature on the topic. I want you to do better because this is an important topic to me. The SBF example is a poor one that obscures the basic point, because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage, which is debatable and a separate interesting topic I haven't seen hard analysis of. You open up a can of ethical worms and don't address it, which reasonably looks bad to low decouplers and is probably the reason for the downvoting. Personally I would endorse downvoting because you haven't contributed anything novel about increasing the number of probably-good high-net-worth philanthropists, though I didn't downvote myself. I only decided to give this feedback because your bio says you're an econ grad student at GMU, which is notorious for disagreeable economists, so I think you can take it.
“First they came for the high decouplers...”
I forget what you told me in our shared car ride a few months ago about why you ended up handing off ALERT, but my naive pattern match is that you didn't do the thing cflexman suggested, and that this was a large factor in why it didn't work out for you. Is that right, or am I off?
when we have no evidence that aligning AGIs with ‘human values’ would be any easier than aligning Palestinians with Israeli values, or aligning libertarian atheists with Russian Orthodox values—or even aligning Gen Z with Gen X values?
When I ask an LLM to do something, it usually outputs its best attempt at being helpful. How is this not at least some evidence that alignment is easier than inter-human alignment?
The eggs and milk quip might be offensive for animal welfare reasons. Eggs, at least, are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.
I know of one that is less widely reported; I'm not sure whether it's counted among the two less widely reported ones Joseph Miller knows of, or whether it's separate.
I would personally recommend waiting to sell your kidney until there is a jurisdiction you can feasibly travel to that allows kidney markets (e.g. Argentina under Milei).
I recommend asking clarifying questions to reduce confusion before confidently expressing what turn out to be, at least in part, spurious criticisms. I guarantee you it's not fun for the people announcing their cool new project to receive.
I feel a little alienated by the emphasis on elite education from both sides of this kind of debate. Not that there's necessarily much that can be changed there; it's probably just the nature of the game, mostly. But I find it a little odd that the “be more normal [with career capital]” camp presumes “normal” includes being in the upper middle class of the Anglo world. That's usually the sort of person making the critique, though I could see a blue-collar worker levying it too.
And? Do you have a particular solution to guarantee pandemic prevention, one that deals with the specific logistical complexities inherent to the task and can be applied to every country on Earth without being resisted?
“Step 2: Draw the rest of the owl.”
I see you state that your solutions will come in later posts, but I think it's better to present them upfront, given that your rhetoric is not currently justified. Given your title, I expect to see a theory of change that attempts to address the overwhelming challenges involved.
It would be helpful to know what events have been hosted there by now.
“X-Risk” Movement-Building Considered Probably Harmful
My instinct for a while now has been that it's probably really, really bad for the majority of the population to be aware of the meme of x-risk, or at least that it does more harm than good. See climate doomerism. See (attempted) gain of function research at Wuhan. See asteroid deflection techniques that are dual-use with respect to asteroid weaponization, a still far-off risk but one orders of magnitude worse than natural asteroid impact. See gain of function research at Anthropic, which, idk, maybe it's good, but that's kinda concerning, as are all the other resources provided to questionably benevolent AGI companies under the assumption that this will do good. “X-risk” seems like something that will make people go crazy in ways that cause destruction, e.g. people still use the term “pivotal act” even when I'd claim it's been superseded by Critch's “pivotal process”.

I'm also worried about dark triad elites or bureaucrats co-opting these memes for unnecessary power and control, a take from the e/acc vein of thought that I find to be their most sympathetic position, because it's probably correct when you think in the limit of social memetic momentum. Somewhat relatedly, I'm worried about EA becoming a collection of high modernist midwittery as it mainstreams, watered down and unable to course-correct from co-options and simplifications. Please message me if you want to riff on these topics.
Minor point that doesn't engage with the substance of your post, whose main point I basically agree with, but a negative externality here is that fundraising is often annoying. There is adverse selection: organizations that fundraise are often corrupt (see: Wikipedia) and ineffective. If an org is fundraising, it makes me implicitly think, “Why do you need my money? What has caused this scarcity? Are you ineffective and have been passed over?” Personally I'd prefer moving past the social technology of donations and toward impact-market-like mechanisms.
One part of me is under the impression that more people should commit themselves to things that probably won't work out but would pay off massively if they do. The relevant conflict is that this means losing optionality and taking yourself out of the game for other purposes. We need more wild visions of the future that may work out if, e.g., AI doesn't. Playing to your outs is very related, but I'm thinking more generally: we do in fact need more visions based on different epistemics about how the world is going, and someone might necessarily have to adopt some kind of provisional story of the world that will probably be wrong but is requisite for modeling any kind of payoff their commitment may have. Real change requires real commitment. Also, most ways to help look like particular bets toward building particular infrastructural upgrades, versus starting an AGI company that Solves Everything.

On the flip side, we also need people holding onto their wealth and paying attention, ready to pounce on opportunities that may arise. And maybe you really should just get as close to the dynamo of technocapital acceleration as possible.
Not noticing big obvious problems with impact certificates/markets
What problems are you thinking of in particular?
Ancestor worship also came to mind, à la What We Owe The Past, but I wasn't sure if OP had something in mind different from that post.
https://forum.effectivealtruism.org/posts/ndvguMbcdAMXzTHJG/what-we-owe-the-past
What are specific examples of retrospectivism/retrospectivist causes that you have in mind? Are there things in this category that differ from global health and poverty causes? When you say reparations, what do you mean in particular? When would or wouldn’t reducing a global catastrophic risk such as pandemics count as helping one’s ancestors via the repair of their descendants?
Reminder that there is an EA Focusmate group, where you can do 50 minute coworking calls with other EAs. Also, if you’re already in the group, please give any feedback on it here or via DM.