A major factor in my calculus is also that, given enough time, it’ll probably become much more feasible to send non-biological life (which needs no terraforming) to planets outside our solar system than to send biological life. So even in terraforming scenarios, terraforming will probably be limited to planets within our solar system.
I’ve written a comparative article on plausible interventions for human rights in North Korea. The activists I interviewed had already considered running campaigns to discourage travel to North Korea because tourism is an important source of foreign currency for the government. (It can force its citizens to stage North Korean life for tourists while paying them in their worthless national currency, so it makes a large profit on tourism.)
To my knowledge, these activists never pursued that strategy because it may be an attention hazard and thus actually increase tourism, and because it might strain relationships with organizations that think tourists may show North Koreans that other ways of life are possible. But I find that implausible: almost no one is allowed to travel within North Korea (and tourists are even more tightly controlled and restricted), so it’s only ever the same few, most loyal North Koreans who come into contact with tourists.
But I discuss other, more promising interventions in the article. For more detailed, reliable, and up-to-date information, you can get in touch with, e.g., Saram, as I’m not active in the space myself.
Very promising! They have plans to create a mobile client, and maybe the web version will also eventually support HTML and ebook formats. Looking forward to that!
CFAR and Encompass (https://encompassmovement.org/) might also fit the bill? Maybe also some (other) EA meta-charities whose current team configuration I don’t remember well enough.
I’d particularly appreciate an updated version of “Astronomical waste, astronomical schmaste” that disentangles the astronomical waste argument from arguments for the importance of AI safety. The current one makes it hard for me to engage with it because I don’t go along with the astronomical waste argument at all but am still convinced that a lot of projects under the umbrella of AI safety are top priorities: extinction is considered bad by a wide variety of moral systems irrespective of astronomical waste, and s-risks in particular are considered bad by all moral systems I have a grasp on.
This is fascinating! I’ve heard (though it may well be bunk) that intelligence in humans is somewhat correlated with brain size but that brain size is limited by the size of the birth canal. (Which made me think that C-sections should lead to smarter people in the long run.) But if there’s still so much room for optimization left without changing brain size, does that merely indicate that the changes would take too many mutations to be likely to happen (sort of like why we still have our weird eye architecture when other animals have more straightforward eyes), or that a lot of human thinking happens at a lower level of abstraction than the neuron, so that, e.g., whole-brain emulation at the neuronal level would be destined to fail?
Seminal for me has been Owen Cotton-Barratt’s paper “How valuable is movement growth?” I therefore welcome the shift toward very careful, if any, growth over the past years. Today I think of the EA community like a startup of sorts that tries to hire slowly and selects staff carefully based on culture fit, character, commitment, etc.
Hi! Thank you! That sounds good (“Charity X” would be a free-text field?), but I don’t know whether there are other problems it doesn’t address. To guard against that, an FAQ entry explaining the problem would be best. That’ll probably be needed anyway because the problem is unintuitive for many people (like me): even if they have the information about the swap counterfactual, they may not be able to use it optimally without an explanation.
Hmm, yeah, curious as well. Maybe it’s because I link long essays without summarizing them, so people are left wondering whether the essays are relevant enough to be worth reading.
But apart from the link to Simon’s reply, Kaj’s comment is much better than mine anyway.
Wow! Thank you again for another amazing overview! :-D
With regard to the FRI section: Here is a reply to Toby Ord by Simon Knutsson and another piece that seems related. (And by “suffering focus,” people are referring to something much broader than NU, which may be true of some CUs too.)
Very interesting talks – thank you! For me, especially Philip Trammell’s talk.
Thank you for creating this! I want to understand some possible risks to my value system better. So here’s one scenario that I’ve been thinking about.
I realize that it’s a trust system, but if Donor A trusts Donor B on something that Donor A doesn’t understand well enough to ask about, and that is so unremarkable to Donor B that they see no more reason to mention it than their espresso preferences, then no one is really at fault if they miscommunicate.
Say Donor A:
is neutral between Rethink Priorities (RP) and the Against Malaria Foundation (AMF) (but Donor B doesn’t know this),
can get tax exemption for a donation to RP but not AMF, and
wants to donate $2k.
And Donor B:
values a dollar to RP more than 100 times as highly as one to AMF (but Donor A doesn’t know this),
can get tax exemption for a donation to AMF but not RP, and
wants to donate $1k.
Without donation swap:
Donor A is perfectly happy and donates $2k to RP, with the tax exemption as tie breaker (but even if they split the donation 50:50 or donated with 50% probability, this case would still be problematic).
Donor B is a bit sad but donates, say, $850 to RP, which, without the tax break, comes to the same net cost for them (at a marginal tax rate of 15%, a $1k donation with the tax break also costs $850).
In effect: RP gains $2,850 and AMF gains $0. Both donors are reasonably happy with this result, the only wrinkle being the taxes.
But with donation swap:
Donor A loves helping their fellow EAs and so offers a swap even though they don’t personally need it.
Donor B enthusiastically takes them up on the offer to save the taxes, donates $1k to AMF, and Donor A donates $1k to RP. Later, Donor A donates their remaining $1k to RP.
In effect: RP gains $2k and AMF gains $1k. That’s slightly positive for Donor A but a big loss for Donor B, whose donation now counterfactually funds AMF, which they value over 100 times less (see the sketch below).
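To make the arithmetic explicit, here’s a minimal sketch in Python. The 15% marginal tax rate is my assumption (it’s the rate at which $850 without the tax break costs the same as $1k with it), and the 100× discount just operationalizes Donor B’s stated valuation; neither is a feature of any real swap platform.

```python
# Minimal sketch of the scenario above. The 15% marginal tax rate and
# the 100x RP-vs.-AMF discount are my assumptions for illustration.
TAX_RATE = 0.15

def out_of_pocket(donation, exempt):
    """Net cost to the donor, given the donation's tax-exempt status."""
    return donation * (1 - TAX_RATE) if exempt else donation

# Without a swap: A gives $2k to RP (exempt for A). B's budget is the
# $850 net cost of a $1k exempt donation, so B gives $850 to RP
# directly (not exempt for B).
b_budget = out_of_pocket(1000, exempt=True)          # $850
totals_no_swap = {"RP": 2000 + b_budget, "AMF": 0}   # RP: $2,850

# With a swap: B gives $1k to AMF (exempt for B); A gives $1k to RP in
# B's place and their remaining $1k to RP as planned.
totals_swap = {"RP": 2000, "AMF": 1000}

# B values $1 to RP over 100x as much as $1 to AMF. In RP-equivalent
# dollars, B's counterfactual impact drops from $850 to under $10:
b_impact_no_swap = b_budget       # $850 straight to RP
b_impact_swap = 1000 / 100        # $1k to AMF, discounted by 100x
print(totals_no_swap, totals_swap, b_impact_no_swap, b_impact_swap)
```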
This seems like a plausible scenario to me, and there are other, less extreme scenarios that are still very detrimental to one side and possibly even harder to spot.
So am I overlooking something that alleviates this worry, or do donors have to commit to, and be transparent about, where they would donate if no swap happened, so that the other party can tell whether they should take them up on the offer?
Very interesting! This strikes me as a particular type of mission hedging, right?
I’d love to subscribe to a blog where you publish what grants you’ve recommended. Are you planning to run something like that?
Oh, cool! I’m reading that study at the moment. I’ll be able to say more once I’m through. Then I’ll turn to your article. Sounds interesting!
Thank you for starting that discussion. Some resources that come to mind that should be relevant here are:
Lukas Gloor’s concept of Tranquilism,
different types of happiness (a talk by Michael Plant where I think I heard them explained), and
the case for the relatively greater moral urgency and robustness of suffering minimization over happiness maximization, i.e., a bit of a focus on suffering.
I’m against it. ;-)
Just kidding. I think monopolies and competition are bundles of advantages and disadvantages that we can also combine differently. Competition comes with duplication of effort, sometimes with sabotaging the other rather than improving oneself, and some other problems. A monopoly would come with the local-optima problem you mentioned. But we can also acknowledge (as we do in many other fields) that we don’t know how to run the best wiki, and have different projects that try out different plausible strategies without self-interest, because each is interested in the value of information from the experiment. So they can work together, automatically synchronize any content that can be synchronized, etc. We’d first need meaningful differences between the projects that are worth testing, e.g., restrictive vs. open access.
That would be an immensely valuable meta problem to solve!
Then maybe we can have a wiki which reaches “meme status”.
On a potentially less serious note, I wonder if one could make sure that a wiki remains popular by adding a closed section to it that documents particular achievements from OMFCT the way Know Your Meme does. xD
Sweet! I hope it’ll become a great resource! Are you planning to merge it with https://causeprioritization.org/? If there are too many wikis, we’d just run into the same problem of fragmented information again.
Thank you! I suspect this is going to be very helpful for me.