I think I am quite sympathetic to A, and to the things Owen wrote in the other branch, especially about operationalizing imprecise credences. But this is sufficiently interesting and important-seeming that I am noting down, to read later, some of the references you give for A being false.
Surely we should have nonzero credence, and maybe even >10%, that there aren’t any crucial considerations we are missing on the scale of ‘consider nonhumans’ or ‘consider future generations’. In which case we can bracket the worlds where there is a crucial consideration we are missing as too hard, and base our decision on the worlds where we already have the most crucial considerations. Which could still move us slightly away from pure agnosticism?
Your view seems to imply the futility of altruistic endeavour? That of course doesn’t mean it is incorrect; it just seems like an important implication.
I also didn’t find it too compelling; I think partly it is the issue of the choice not seeming important or high-stakes enough. Maybe the philanthropist should be deciding whether to fund clean energy R&D or vaccine R&D, or similar.
I don’t think I quite agreed with this, or at least it felt misleading:
And you cannot reasonably believe these chaotic changes will be even roughly the same no matter whether the beneficiaries of the donation are dog or cat shelters.
I think it may be very reasonable to think that in expectation the longterm effects will be ‘roughly the same’. This feels more like a simple cluelessness case than complex cluelessness (unless you explain why the cats vs dogs will predictably change economic growth, world values, population size etc).
Whereas the vaccines vs clean energy I think there would be more plausible reasons why one or the other will systematically have different consequences. (Maybe a TB vaccine will save more lives, increasing population and economic growth (including making climate change slightly worse), whereas the clean energy will increase growth slightly, make climate change slightly less bad, and therefore increase population a bit as well, but with a longer lag time.)
Also on your question 1, I think being agnostic about which one is better is quite different to being agnostic about whether something is good at all (in expectation) and I think the first is a significantly easier thing to argue for than the second.
Thanks for writing this up, and congrats on having preliminary promising signs!
I left a bunch of more minor comments in the CEA sheet (thanks for making that public).
Are there any interest groups on the other side of this issue? I suppose budget hawks and fiscal conservatives may try to shoot down any new funding plan, particularly given EU budgetary woes. But otherwise, it seems like a good issue in terms of not making powerful enemies (since the Pharma industry is onside).
In the field where you can leave a comment after voting, it says the comment will be copied here but not who you voted for; probably some people just missed that info, though.
How come LTFF isn’t in the donation election? Maybe it is too late to be added now though.
How does LTFF relate to https://www.airiskfund.com/about?
I am confused given the big overlap in people and scope.
Why do you think tactical voting is good/should be allowed? (I haven’t thought about it much myself; I just have a vague sense that it is often seen as bad.)
I agree these sound like great (though of course high-risk) opportunities, but find myself confused: why are such things not already being funded?
My understanding is that Good Ventures is moving away from some such areas. But what about e.g. the EA Animal Welfare Fund or other EA funders? I don’t know much about animal welfare funding, so on face value I am pretty convinced these seem worth funding, but I am worried I am missing something if more sensible/knowledgeable people aren’t already funding them. (Though deferring too much to other funders could create too much group-think.)
Could you spell out why you think this information would be super valuable? I assume something like you would worry about Jaan’s COIs and think his philanthropy would be worse/less trustworthy?
On Pauses
If the US AI industry slowed down, but the rest of the world didn’t, how good or bad would this be? How could we avoid adverse selection where countries that don’t pause are presumably going to be less interested in safety all else equal?
In general, are there any notable updates/things you’ve changed your mind about/relevant things changing in the world since you wrote https://forum.effectivealtruism.org/posts/Y4SaFM5LfsZzbnymu/the-case-for-ai-safety-advocacy-to-the-public?
(As you note much of the value may come from your advocacy making more ‘mainstream’ policies more palatable, in which case the specifics of Pause itself matter less, but are still good to think about.)
Adverse selection
What did SFF or the funders you applied to or talked to say (insofar as you know/are allowed to share)?
I am thinking a bit about adverse selection in longtermist grantmaking and how there are pros and cons to having many possible funders. Someone else not funding you could be evidence I/others shouldn’t either, but conversely updating too much on what a small number of grantmakers think could lead to missing lots of great opportunities as a community.
Fundraising scenarios
What will likely happen at different levels of fundraising success? E.g. if you raise less than the target, how do you scale down; if you raise more, how do you scale up?
A comment, not a question (but feel free to respond): let’s imagine PauseAI US doesn’t get much funding and the org dies, but then in two years someone wants to start something similar; this would seem quite inefficient and bad. Or conversely, PauseAI US gets lots of funding and hires more people, and then funding dries up in a year and they need to shrink. My guess is there is an asymmetry where an org shrinking for lack of funding is worse than one growing with extra funding is good, which I suppose leans towards growing slower with a larger runway, but I am not sure about this.
Donation mechanics
How useful are recurring vs lump donations (e.g. for what X is $1 in recurring donations roughly equivalent to a lump donation of $X now)?
For people outside the US and so not getting a tax deduction, is it best to donate directly rather than via Manifund?
Politics
What does a Trump 2 admin mean for PauseAI US? The likely emphasis on defense, natsec, and competition with China seems to make Pause lobbying harder.
One worry I have heard, and share, is that some sorts of public advocacy will unhelpfully polarise and politicize the AI debate. How do you think about this, and if you grant the premise that this is a worry, what are you doing/can you do to mitigate it?
Yes strange, maybe @Will Howard🔹 will know re new accounts?
Or maybe a few EAF users just don’t like PauseAI and downvoted, probably the simplest explanation.
And while we are talking about non-object level things, I suggest adding Marginal Funding Week as a tag.
This analysis seems roughly right to me. Another piece of it I think is that being a ‘soldier’ or a ‘bednet-equivalent’ probably feels low status to many people (sometimes me included) because:
1. people might feel soldiering is generally easier than scouting, and that they are more replaceable/less special
2. protesting feels more ‘normal’ and less ‘EA’, and people want to be EA-coded
To be clear I don’t endorse this, I am just pointing out something I notice within myself/others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have ‘EA vibes’. The first one I think is somewhat reasonable (e.g. I wouldn’t want to pay someone to be a full-time protest attendee just to bring up the numbers), but I think soldiering can be quite challenging and laudable, and part of a portfolio of types of actions one takes.
I plan on trying to do this for any project that gives me any (ethical) doubts, and/or will take up at least 3 months of my full-time work.
Choosing ‘and’ or ‘or’ feels important here since they seem quite different! Maybe our rough model should be: cause-for-introspection = ethical qualms × length of project.
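To make the difference concrete, here is a minimal sketch (the 3-month threshold and qualm scores are hypothetical, just for illustration) of how the ‘and’, ‘or’, and multiplicative rules can come apart:

```python
# Hypothetical illustration of the three decision rules; the 3-month
# threshold and the qualm scores below are made-up assumptions.

def introspect_and(has_qualms: bool, months: float) -> bool:
    # 'and' rule: introspect only if the project raises qualms AND is long
    return has_qualms and months >= 3

def introspect_or(has_qualms: bool, months: float) -> bool:
    # 'or' rule: introspect if the project raises qualms OR is long
    return has_qualms or months >= 3

def introspection_score(qualms: float, months: float) -> float:
    # multiplicative model: qualms (0 to 1) scaled by project length;
    # introspect when the score crosses some chosen threshold
    return qualms * months

# A 1-month project with strong qualms triggers 'or' but not 'and':
print(introspect_and(True, 1), introspect_or(True, 1))  # False True

# And the multiplicative model can rank a long project with mild qualms
# (0.2 over 6 months) above a short one with strong qualms (0.9 over 1):
print(introspection_score(0.9, 1) < introspection_score(0.2, 6))  # True
```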
yeah sure, lmk what you find out!