Data scientist working on AI governance at MIRI, previously forecasting at Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Robi Rahman
I'm definitely not assuming the my-favorite-theory rule.
I agree that what I'm describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don't use it.
Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity: I don't see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ views but no outright majority, are you envisioning a spoiler view threatening to vote for the charity favored by the second-place view unless the plurality view allocates it some donation money in the final outcome?
edit: actually, I think the donations might end up split if you choose the allocation by randomly selecting a representative in the parliament and implementing their vote, in which case the dominant party might offer the smaller parties a small share of the donations in the cases where its representative is selected, in exchange for a share in the cases where someone else's is?
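For what it's worth, here is a toy sketch of that random-selection mechanism (all credences, view names, and charity names are invented for illustration). It shows why the expected allocation ends up split in proportion to credence, which is the split the parties could lock in by trading:

```python
# Toy "random dictator" parliament (hypothetical numbers):
# a representative is selected with probability equal to their view's credence,
# and the selected view directs the whole budget to its favorite charity.
# The expected allocation therefore splits in proportion to the credences,
# which risk-averse parties could agree to implement directly every time.

credences = {"view_A": 0.6, "view_B": 0.3, "view_C": 0.1}
favorite_charity = {"view_A": "charity_X", "view_B": "charity_Y", "view_C": "charity_Z"}
budget = 1000.0

expected_allocation = {}
for view, credence in credences.items():
    charity = favorite_charity[view]
    expected_allocation[charity] = expected_allocation.get(charity, 0.0) + credence * budget

print(expected_allocation)
```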
Of course they might be uncertain of the moral status of animals and therefore uncertain whether donations to an animal welfare vs a human welfare charity are more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but if an individual splits their donations in that way, they are reducing the impact of their donations relative to contributing only to one or the other.
Moral uncertainty is completely irrelevant at the level of individual donors.
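A toy numeric version of this argument (all cost-effectiveness figures and the credence are invented): as long as returns are roughly linear at individual-donor scale, your credences still single out one charity with the higher expected impact per dollar, and any split does strictly worse in expectation:

```python
# Hypothetical numbers: expected impact per dollar under moral uncertainty.
p_animals_have_moral_status = 0.5
ev_animal_charity = p_animals_have_moral_status * 10  # 10 units/$ if animals count, else 0
ev_human_charity = 3.0                                # uncontroversial 3 units/$

budget = 1000.0
concentrated = budget * max(ev_animal_charity, ev_human_charity)
split = (budget / 2) * ev_animal_charity + (budget / 2) * ev_human_charity

# Concentrating on the higher-EV charity beats a 50/50 split in expectation.
print(concentrated, split)
```

The gap only closes if an individual's donation is large enough to hit diminishing returns, which rarely applies to individual donors.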
Can you give examples of "adversarial" altruistic actions? Like protesting against ICE to help immigrants? Getting CEOs fired to improve what their corporations do?
By "greater threat to AI safety" you mean it's a bigger culprit in terms of the amount of x-risk caused, right? As opposed to being a threat to AI safety itself, e.g. by trying to get safety researchers removed from the industry/government (like this).
What is positivism and what are some examples of non-positivist forms of knowledge?
IMO, merely 4x-ing the number of individual donors or the frequency of protests isn't near the threshold for "mass social change" in the animal welfare area.
"Individual donors shouldn't diversify their donations"
Arguments in favor:
this is the strategy that maximizes the benefit to the recipients
Arguments against:
it's personally motivating to stay in touch with many causes
when each cause comes up in a conversation with non-EAs, you can mention you've donated to it
I'm not a lawyer but this sounds… questionably legal.
US tax rules for donations changing next year
Can I take you up on the offer to do a video call and see if we can install it on Chrome OS? Will DM you
In the same way that two human superpowers can't simply make a contract to guarantee world peace, two AI powers could not do so either.
That's not true. AI can see (and share) its own code.
Just want to note that I think this comment has basically been vindicated in the three years since FTX.
I love this idea, and I think you're on to something with
We don't notice how much of EA's "independent thinking" comes from people who can afford to do it.
(but I disagree-voted because I don't think "EA should" do this; I doubt it's cost-effective)
I got to the terminal but wasn't able to access the download, and gave up at that step because for some reason I assumed it would only install the app for the Linux development environment as opposed to the rest of Chrome OS. I'll try again, and email you if I can't get it working.
Is it possible to use it on Chrome OS somehow? It auto-detects that as Linux, but I think it won't work if I use the Linux installer. I'm pretty sure it would be installable as a browser add-on, but then I'm not sure if it would work when you're using other programs.
This isn't deontology, it's lexical-threshold negative utilitarianism.
https://reducing-suffering.org/three-types-of-negative-utilitarianism/
For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)
Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.
Strongly agree with this post. I think my session at EAG Boston 2024 (audience forecasting, which was fairly group-brainstormy) was suboptimal for exactly the reasons you mentioned.
Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.
Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.
Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be very dissatisfied if they don't, e.g. "the universe is ruled by a shrimp deity who will torture you and 10^^10 others for eternity unless you donate all your money to shrimp welfare". You can be 99.9999...% sure this isn't true but never 100% sure, so this gets a seat in your parliament.
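To make the exploit concrete, here's a toy sketch (theories, credences, and satisfaction scores all invented) of how maximin, because it ignores credences entirely, lets the one-in-a-million theory veto the option every plausible theory prefers:

```python
# Toy illustration (hypothetical numbers): the Maximize Minimum rule ranks
# options by the worst-off theory's satisfaction, ignoring credence, so a
# theory with credence 1e-6 can dictate the outcome.

satisfaction = {
    # option -> satisfaction score under each theory (0 = maximally dissatisfied)
    "mainstream_charity": {"T1 (p=0.80)": 1.0, "T2 (p=0.19)": 0.8, "shrimp_deity (p=1e-6)": 0.0},
    "shrimp_welfare":     {"T1 (p=0.80)": 0.2, "T2 (p=0.19)": 0.3, "shrimp_deity (p=1e-6)": 1.0},
}

# Pick the option whose worst-case (minimum) satisfaction is highest.
maximin_choice = max(satisfaction, key=lambda opt: min(satisfaction[opt].values()))
print(maximin_choice)  # the made-up 1e-6 theory vetoes the mainstream option
```

Under maximize-expected-choiceworthiness the same numbers would favor the mainstream charity, since the shrimp-deity theory's veto is weighted by its negligible credence.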