My sense is that, of the many EAs who have taken EtG jobs, quite a few have remained fairly value-aligned? I don't have any data on this and am just going on vibes, but I would guess significantly more than 10%. Which is some reason to think the same would be the case for AI companies. Though plausibly the finance company's values are only orthogonal to EA, while the AI company's values (or at least plans) might be more directly opposed.
The comment that Ajeya is replying to is this one from Ryan, who says his timelines are roughly the geometric mean of Ajeya's and Daniel's original views in the post. That is sqrt(4*13) ≈ 7.2 years from the time of the post, so roughly 6 years from now.
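(Spelling out that arithmetic, just restating the geometric mean above using Daniel's ~4-year and Ajeya's ~13-year medians from the post: \sqrt{4 \times 13} = \sqrt{52} \approx 7.2 \ \text{years}.)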
As Josh says, the timelines in the original post were answering the question "Median Estimate for when 99% of currently fully remote jobs will be automatable".
So I think it was a fair summary of Ajeya's comment.
There is some discussion of strategy 4 on LW at the moment: https://www.lesswrong.com/posts/JotRZdWyAGnhjRAHt/tail-sp-500-call-options
Good point, I agree that ideally that would be the case, but my impression (from the outside) is that OP is somewhat capacity-constrained, especially for technical AI grantmaking? Which I think would mean that if non-OP people feel like they can make useful grants now, that could still be more valuable, given the likelihood that OP scales up and gets more AI grantmaking capacity in coming years. But all that is speculation; I haven't thought carefully about the value of donations over time, beyond deciding, for me personally, not to save all my donations for later.
I suppose it depends on whether the counterfactual is the two parties to the bet donating the 10k to their preferred causes now, donating the 10k inflation-adjusted in 2029, or not donating it at all. Insofar as we think donations now are better (especially for someone who has short AI timelines), there might be a big difference between the value of money now vs the value of money after (hypothetically) winning the bet.
Good on you all!
Does anyone know whether CE/AIM has looked into this, and if not, it seems like they should? It's great that you have already started something, so maybe there is no need to go via their incubation program, but conversely they might still add significant value in terms of networks + advice + funding. I'm not sure who the relevant CE person to ask would be.
Thanks for writing this up! I just looked back at the results of a generic blood test I did earlier in the year (measuring many different things), and I had a creatinine value of 0.82 (the reference range was given as 0.7-1.3).
I haven't looked through the literature you cited; do you happen to know whether, if I am already in the healthy range, it is still helpful to supplement, and whether it is bad to go over 1.3 if I do?
I agree that 5 (accepting OP-dominated balance sheets) seems like the best solution.
I think a different but related point is that an org that can fundraise outside of EA is that much more valuable than an org producing identical outputs but fundraising from within EA. The big example of this, of course, is GiveWell: using EA principles but getting money from a far wider set of people. Raising $1 from OP (and even more so from other EA sources) has pretty direct opportunity costs for other high-impact projects, but raising $1 from someone else mainly trades off against that donor's consumption or their other donations, which we (putatively) think are a lot less impactful.
EA Australia and LTFF. Reflections at https://forum.effectivealtruism.org/posts/7FufeFhDE7Fp9i3fr/five-years-of-donating
I found this a really clear and useful explanation (though I already had a decent idea how NAO worked)!
If you ever want to reach a broader audience, I think making an animated video based on this content, maybe with the help of Rational Animations or Kurzgesagt, would work well.
Assuming a key inefficiency of the nasal swab method is the labour cost of people collecting the swabs, is the process straightforward enough that you could just set up an unmanned sample collection point in a busy building somewhere, where people can swab themselves and drop the sample in a chute or box or something? Hopefully post-Covid people are fairly familiar with nasal swabbing technique.
Thanks for sharing the raw data!
Interestingly, of the 44 people who ranked every charity, the candidates with the most last-placed votes were: PauseAI = 10, VidaPlena = ARMoR = 5, Whylome = 4, SWP = AMF = Arthropoda = 3, … This is mostly just noise, I'm guessing, except perhaps that it is some evidence PauseAI is unusually polarising and a surprisingly large minority of people think it is especially bad (net negative, perhaps).
Also here is the distribution of how many candidates people ranked:
I am a bit surprised there were so many people who voted for none of the winning charities; I would have thought most people would have some preference between the top few candidates, and that if their favourite charity wasn't going to win they would prefer to still choose between the main contenders. Maybe people just voted once initially and then didn't update their vote based on which candidates had a chance of winning.
I think the main reason to update one's vote based on the results is if you voted number 1 for a charity that is first or second, but a charity you also quite like is e.g. fourth or fifth; then strategically switching to rank the latter first would make sense. But this was not the case for me.
Overall my guess is the live vote tallies add to the excitement but don't actually contribute much epistemically?
yeah sure, lmk what you find out!
I think I am quite sympathetic to A, and to the things Owen wrote in the other branch, especially about operationalizing imprecise credences. But this is sufficiently interesting and important-seeming that I am making a note to read later some of the references you give to justify A being false.
Surely we should have nonzero credence, and maybe even >10%, that there aren't any crucial considerations we are missing that are on the scale of "consider nonhumans" or "consider future generations". In which case we can bracket the worlds where there is a crucial consideration we are missing as too hard, and base our analysis on the worlds where we already have all the crucial considerations. Which could still move us slightly away from pure agnosticism?
Your view seems to imply the futility of altruistic endeavour? Which of course doesn't mean it is incorrect; it just seems like an important implication.
I also didn't find it too compelling; I think partly it is the issue of the choice not seeming important or high-stakes enough. Maybe the philanthropist should be deciding whether to fund clean energy R&D or vaccine R&D, or similar.
I don't think I quite agreed with this, or at least it felt misleading:
"And you cannot reasonably believe these chaotic changes will be even roughly the same no matter whether the beneficiaries of the donation are dog or cat shelters."
I think it may be very reasonable to think that in expectation the longterm effects will be "roughly the same". This feels more like a simple cluelessness case than complex cluelessness (unless you explain why cats vs dogs will predictably change economic growth, world values, population size, etc.).
Whereas for vaccines vs clean energy, I think there would be more plausible reasons why one or the other will systematically have different consequences. (Maybe a TB vaccine will save more lives, increasing population and economic growth (including making climate change slightly worse), whereas the clean energy will increase growth slightly, make climate change slightly less bad, and therefore increase population a bit as well, but with a longer lag time.)
Also, on your question 1, I think being agnostic about which one is better is quite different to being agnostic about whether something is good at all (in expectation), and I think the first is a significantly easier thing to argue for than the second.
Thanks for writing this up, and congrats on having preliminary promising signs!
I left a bunch of more minor comments in the CEA sheet (thanks for making that public).
Are there any interest groups on the other side of this issue? I suppose budget hawks and fiscal conservatives may try to shoot down any new funding plan, particularly given EU budgetary woes. But otherwise, it seems like a good issue in terms of not making powerful enemies (since the Pharma industry is onside).
In the field where you can leave a comment after voting, it says the comment will be copied here but not who you voted for; probably some people just missed that info though.
How come LTFF isn't in the donation election? Maybe it is too late to be added now though.
Good points, I agree with this; trends 1 and 3 seem especially important to me. As you note though, the competitive (and safety) reasons for secrecy and research automation probably dominate.
Another thing current trends in AI progress mean, though, is that it seems (far) less likely that the first AGIs will be brain emulations. This in turn makes it less likely AIs will be moral patients (I think). Which I am inclined to think is good, at least until we are wise and careful enough to create flourishing digital minds.
Two quibbles:
"Given the amount of money invested in the leading companies, investors are likely to want to take great precautions to prevent the theft of their most valuable ideas." This would be nice, but companies are generally only incentivised to prevent low-resourced actors from stealing their models. Putting in enough effort to make it hard for sophisticated attackers (e.g. governments) to steal the models is a far heavier lift, and probably not something AI companies will do of their own accord. (Possibly you already agree with this though.)
"The power of transformer-based LLMs was discovered collectively by a number of researchers working at different companies." I thought it was just Google researchers who invented the Transformer? It is a bit surprising they published it; I suppose they just didn't realise how transformative it would be, and there was a culture of openness in the AI research community.