Mostly agreed, but I do think that donating some money, if you are able, is a big part of being in EA. And again this doesn’t mean reorienting your entire career to become a quant and maximize your donation potential.
RedStateBlueState
Allocate Donation Election Funds by Proportional Representation
All punishment is tragic, I guess, in that it would be a better world if we didn’t have to punish anyone. I guess I just don’t think that SBF on some level “believing” in EA (whatever that means, and if it is even true), despite not acting in accordance with the principles of EA, is a reason that his punishment is more tragic than anyone else’s.
This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it’s really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.
Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve “global capacity”, and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don’t see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuff, it is practically meaningless. This is why I was so drawn to this post—I think you correctly point out that “improving the lives of current humans” is not really what GHW is about!
The non-controversial stuff doesn’t have to be anti-malaria efforts or anything that GiveWell currently pursues; I agree with you there that we shouldn’t dogmatically accept these current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial stuff. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn’t matter whether it’s arbitrary or not; some donors are just really uncomfortable about pursuing philosophical weirdness, and GHW should be for them.
How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there’s a good chance that the most reasonable definition implies optimal actions very different from GiveWell’s. Although I could be wrong.
I don’t really think the important part is the metric—the important part is that we’re aiming for interventions that agree with common sense and don’t require accepting controversial philosophical positions (beyond rejecting pro-local bias I guess)
Love the post, don’t love the names given.
I think “capacity growth” is a bit too vague, something like “tractable, common-sense global interventions” seems better.
I also think “moonshots” is a bit derogatory, something like “speculative, high-uncertainty causes” seems better.
This post is a great exemplar of why the term “AI alignment” has proven a drag on AI x-risk work. The concern is and has always been that AI would dominate humanity like humans dominate animals. All of the talk about aligning AI to “human values” leads to pedantic posts like this one arguing about what “human values” are and how likely AIs are to pursue them.
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Hmm, I still don’t think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting: seniors, a lot of religious groups, people with postgraduate degrees, etc. Nonetheless, they vote in a lot of uncompetitive races where it would seem their vote doesn’t matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to coordinate on a strategy such that there is still a really strong norm of voting in competitive races but not in uncompetitive ones (and besides, it’s not clear that would even suffice, given that uncompetitive races would become competitive in the absence of a very large voting bloc). I think all the empirical evidence shows that groups that turn out more in competitive races also do so in uncompetitive ones.
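The intuition that a single vote’s EV collapses in uncompetitive races can be made concrete with a toy binomial model of an election (a rough sketch using a normal approximation to the chance of an exact tie; the function name and parameters are my own illustration, not anything from the comment):

```python
import math

def p_decisive(n_voters: int, p_support: float) -> float:
    """Rough normal approximation to the probability that your single vote
    breaks an exact tie among n_voters other voters, each of whom votes for
    your candidate independently with probability p_support."""
    mean = n_voters * p_support
    var = n_voters * p_support * (1 - p_support)
    tie = n_voters / 2  # an exact tie among the other voters
    # Normal-approximation pmf of the binomial evaluated at the tie point.
    return math.exp(-(tie - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# In a dead-even race with a million voters, the chance of being decisive
# is small but non-trivial; shift support to just 48% and it is
# astronomically tiny, so the naive EV of that vote is effectively zero.
print(p_decisive(1_000_000, 0.50))  # roughly 8e-4
print(p_decisive(1_000_000, 0.48))  # effectively 0
```

This is exactly why the naive single-vote EV framing makes voting in uncompetitive races look pointless, which is the tension the comment is pointing at.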
Sorry, I shouldn’t have used the phrase “the fact that”. Rephrased, the sentence should say “why would the universe taking place in an incomputable continuous setting mean it’s not implemented?” I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.
That and/or acausal decision theory is at play for this current election
I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that expected value of your single vote really isn’t the best way of thinking about it. Your vote “influences” other people’s vote, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).
I may go listen to the podcast if you think it settles this more, but on reading it I’m skeptical of Joscha’s argument. It seems to skip the important leap from “implemented” to “computable”. Why does the fact that our universe takes place in an incomputable continuous setting mean it’s not implemented? All it means is that it’s not being implemented on a computer, right?
I think there’s a non-negligible chance we survive until the death of the sun or whatever, maybe even after, which is not well-modelled by any of this.
To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don’t think animal welfare concerns disappear, or even are remedied much, with transhumanism in the picture. As long as animal welfare concerns don’t get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or terrible to good (if we don’t). Assuming, obviously, that AI doesn’t kill us.
I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.
I am glad somebody wrote this post. I often have the inclination to write posts like these, but I feel like advice like this is sometimes good and sometimes bad, and it would be disingenuous for me to stake out a claim in either direction. Nonetheless, I think it’s a good mental exercise to explicitly state the downsides of comparative claims and the upsides of absolute claims, and then people in the comments will assuredly explain the opposite (as some already have).
“...for most professional EA roles, and especially for ‘thought leadership’, English-language communication ability is one of the most critical skills for doing the job well”
Is it, really? Like, this is obviously true to some extent. But I’m guessing that English communication ability isn’t much more important for most professional EA roles than it is for, e.g., academics or tech startup founders. Those fields are much more diverse in native language than EA, I think.
Yes, I just would have emphasized it more. I sort of read it as “yeah, this is something you might do if you’re really interested”, whereas I would say “this is something you should really probably do”.