Meta-point – I think it would be better if this was called something other than “baby longtermism”, as I found this confusing. Specifically, I initially thought you were going to be writing a post about a baby (i.e., “dumbed-down”) version of longtermism.
“That said, when I started the 10% thing, I did so under the impression that it was the sacrifice I needed to make to gain acceptance in EA.”

If this sentiment is at all widespread among people on the periphery of EA, or who might become EAs at some point, then I find that VERY concerning. We’d lose a lot of great people if everyone assumed they couldn’t join without making that kind of sacrifice.
Hmm, I don’t read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (though that estimate is probably drawn from the higher end of serious estimates); WWIII may or may not lead to all-out nuclear war; all-out nuclear war has some unknown chance of leading to the collapse of civilization; and if that happened, there would be some further unknown chance of never recovering. So, all in all, I’d read this as Will thinking that X-risk from nuclear war in the next 50 years is well below 20%.
I also don’t think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.
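To make the chained-probability reading concrete, here’s a toy calculation in Python. Only the 20% WWIII figure comes from the passage; every conditional probability below is a placeholder I made up, just to show how quickly the multiplied risks shrink:

```python
# Toy illustration of the chained probabilities described above.
# Only the 20% WWIII figure comes from the passage; the conditional
# probabilities are made-up placeholders, not estimates from the book.
p_ww3 = 0.20                        # WWIII by 2070 (higher-end estimate)
p_nuclear_given_ww3 = 0.5           # escalates to all-out nuclear war (assumed)
p_collapse_given_nuclear = 0.3      # collapse of civilization (assumed)
p_no_recovery_given_collapse = 0.1  # never recovering (assumed)

p_xrisk = (p_ww3
           * p_nuclear_given_ww3
           * p_collapse_given_nuclear
           * p_no_recovery_given_collapse)
print(f"X-risk via this pathway: {p_xrisk:.1%}")  # 0.3% – well below 20%
```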
My personal view is that targeted small-dollar political donations (which large donors cannot simply fill, due to campaign finance laws) are likely to be vastly higher value on the margin than comparably sized non-political donations (the same amount plus the tax savings) to organizations that large donors can fill, insofar as such targeted political opportunities arise. So if I were in the situation you’re describing, I’d accept the higher salary with the intention of donating to such political opportunities when they arose. Of course, this logic is specific to a particular kind of donation opportunity, and won’t generalize to most areas that EAs currently donate to.
It’s interesting that the term ‘abused’ was used with respect to AI. It makes me wonder whether the authors have misalignment risks in mind at all, or only misuse risks.
A separate press release says, “It is important that the federal government prepare for unlikely, yet catastrophic events like AI systems gone awry” (emphasis added), so my sense is they have misalignment risks in mind.
You might be interested in my paper on this topic, where I also come to the conclusion that achieving WBE before de novo AI would be good: https://informatica.si/index.php/informatica/article/view/1874
Go to EA conferences even if you don’t think you’re a good fit or 100% bought into EA. Going sparked my interest, sprouted ideas, and let me tangibly help others and share my experiences. I underestimated the value of my perspective for others in different walks of life.
This resonated with me. For my first EA Global (back in 2016), I applied on a whim, attracted by a couple of the speakers and the fact that the conference was close to my hometown, but hesitant due to a few negative misperceptions I had about EA at the time. While there, I felt very much at home, and I’ve been heavily involved in EA ever since. Of course, not everyone will have the same experience, but my sense is there’s a pretty wide range of surprising upsides from going to these sorts of conferences, and it’s often worth going to at least one if you’re uncertain.
I’ve also found going for walks during 1-on-1s to be nice, to the point that I now do this for the majority of my 1-on-1s. (This also has the side benefit of reducing COVID risk.)
The possibility of try-once steps allows one to reject the existence of hard try-try steps while still positing very hard try-once steps.
I’m not seeing why this is the case.
Because if (say) only 1 in 10^30 stars has a planet with just the right initial conditions to allow for the evolution of intelligent life, then that alone fully explains the Great Filter, and we don’t need to posit that any of the try-try steps are hard (though of course they still could be).
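As a rough back-of-the-envelope sketch (both the star count and the step probability are illustrative assumptions, just to show the orders of magnitude involved):

```python
# Back-of-the-envelope: a single very hard try-once step can explain
# the Great Filter by itself. Both numbers are illustrative assumptions.
stars = 1e24                  # rough count of stars in the observable universe
p_right_conditions = 1e-30    # assumed chance a star has a planet with the
                              # right initial conditions (the try-once step)

expected_civilizations = stars * p_right_conditions
print(expected_civilizations)  # 1e-06 – we'd expect to be alone, with no
                               # need for any try-try step to be hard
```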
FWIW, I found the interview with SBF to be quite fair, and imho it presented Sam in a neutral-to-positive light (though perhaps a bit quirky). Teddy’s more recent reporting/tweets about Sam also strike me as both fair and neutral to positive.
Hmm, culturally YIMBYism seems much harder to pull off in suburbs and rural areas. I wouldn’t be too surprised if the easiest theory of change here is to pass YIMBY-style energy policies at the state level, with most of the support coming from urbanites.
But sure, still probably worth trying.
I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.
Not just EA Funds – I think (almost?) all random, uninformed EA donations would be much better than donations to an index fund covering all charities on Earth.
“if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism’s big pot of money and using some of its labor for direct work”
I agree – I think the practical implication is more “this consideration updates us towards funding/allocating labor towards direct work over explicit movement building” and less “this consideration updates us towards E2G over direct work/movement building”.
“because of scope insensitivity, I don’t think potential movement participants would be substantially more impressed by $2*N billions of GiveDirectly-equivalents of good per year vs just $N billions”
Agree (though potential EAs may be more likely than most people to be impressed with that stuff), but I think the qualitative things we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or …), I think that would be more impressive than if we funded some people trying to cure those diseases and none of them succeeded. I also think people are more likely to be attracted to AI safety if it seems like we’re making real headway on the problem.
I think you answered your own question? The index fund would just allocate in proportion to current donations, reducing both overhead for fund managers and the need to trust the managers’ judgement (other than for deciding which charities do/don’t qualify to begin with). I’d imagine the value of the index fund might increase as EA grows and the number of manager-directed funds increases (since many individual donors wouldn’t know which managed fund to give to, and the index fund would track donations as a whole, including donations to those funds).
This looks good! One possible modification that I think would enhance the model would be an arrow from “direct work” or “good in the world” to “movement building” – I’d imagine that the movement will be much more successful in attracting new members if we’re seen as doing valuable things in the world.
Presumably someone (or a group) would have to create a list (potentially after creating an explicit set of criteria), and then the list would be updated periodically (say, yearly).
Should there be an “EA Donation Index Fund” that allows people to simply “donate the market” (similar to how index funds like the S&P 500 allow for simply buying the market)? This fund could allocate donations to EA orgs in proportion to the total donations those orgs receive (from EA sources?) over the year. (It would perhaps make sense for there to be a few such funds – one for EA as a whole, one for longtermism, one for global health and development, etc.)

I see a few potential benefits:
• People who want to donate effectively (especially if they want to diversify their donations) but lack the knowledge/expertise/time/etc., and who for whatever reason don’t trust EA Funds to donate appropriately on their behalf, could do so. I expect many people may be holding back from donating now for lack of a sense of how to donate best (including people on the periphery of EA), so this might increase donations. I’d further expect the quality of donations from less knowledgeable donors to increase if they simply donated the market.
• It could be lower overhead and more scalable than other funds.
• Aesthetically, I’d imagine this sort of setup might appeal to finance people, and finance people have a lot of money, so it may widen the pool of donors to EA.
• Index fund donations would effectively be matching donations: if, for instance, half of all EA donations were through an EA index fund, then direct donations to specific charities would be matched by the index fund moving money toward those charities as well (at the expense of other charities in the fund). This would arguably give direct donors a greater incentive to donate more (at least insofar as they thought they knew more than, or had better values than, the market – which is their revealed preference anyway, given that they chose to donate directly rather than through the index fund). A minimal sketch of the allocation rule appears below.
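For concreteness, here’s a minimal sketch of what the allocation rule might look like (the org names, dollar figures, and `allocate` helper are all hypothetical):

```python
# Minimal sketch of an "EA Donation Index Fund" allocation rule:
# the pool is split in proportion to each org's direct donations.
# All orgs and dollar figures below are hypothetical.

direct_donations = {      # direct (non-index) EA donations this year, in $
    "OrgA": 6_000_000,
    "OrgB": 3_000_000,
    "OrgC": 1_000_000,
}
index_fund_pool = 10_000_000  # money routed through the index fund, in $

def allocate(pool: float, direct: dict[str, float]) -> dict[str, float]:
    """Split the index fund pool in proportion to direct donations."""
    total = sum(direct.values())
    return {org: pool * amount / total for org, amount in direct.items()}

for org, amount in allocate(index_fund_pool, direct_donations).items():
    print(f"{org}: ${amount:,.0f} from the index fund")
# OrgA: $6,000,000 / OrgB: $3,000,000 / OrgC: $1,000,000
# An extra direct dollar to OrgA raises OrgA's share of the pool, pulling
# some index money toward it (at the other orgs' expense) – the implicit
# "matching" effect described above.
```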