Thank you both for your past and future work on EV, and best wishes to both of you in your new roles. Really looking forward to seeing you more in the geographic vicinity of Open Phil!
Katja_Grace
Ten arguments that AI is an existential risk
Partial value takeover without world takeover
Survey of 2,778 AI authors: six parts in pictures
Longer/ongoing list here.
Will AI end everything? A guide to guessing | EAG Bay Area 23
Possible, but likely a smaller effect than you might think, because: a) I was very ambiguous about the subject matter until they were taking the survey (e.g., I did not mention AGI, risk, or timelines), and b) last time (for the 2016 survey) we checked the demographics of respondents against those for a random subset of non-respondents, and they weren’t very different.
Participants were also mostly offered substantial payment for taking the survey (usually $50 for a ~15-minute survey), in part in the hope of making payment a larger motivator than the desire to express some particular view. But I don’t think the payment actually made a large difference to the response rate, so it probably failed to have the desired effect on possible response bias.
How bad a future do ML researchers expect?
We don’t trade with ants
Let’s think about slowing down AI
Counterarguments to the basic AI risk case
LW4EA: A game of mattering
Beyond fire alarms: freeing the groupstruck
Do incoherent entities have stronger reason to become more coherent than less?
Coherence arguments imply a force for goal-directed behavior
>I would be very excited to see research by Giving Green into whether their approach of recommending charities which are, by their own analysis, much less cost effective than the best options is indeed justified.
Several confusions I have:
When did they say these were much less cost-effective? I thought they just failed to analyze cost-effectiveness? (Which is also troubling, but different from what you are saying, so I’m confused.)
What do you mean by it being justified? It looks like you mean ‘does well on a comparison of immediate impact’, but supposing these recommendations are likely to be interpreted as claims about what is most cost-effective, this approach sounds close to outright dishonesty, which seems like it would still not be justified. (I’m not sure to what extent they are presenting them that way.)
Do they explicitly say that this is their approach?
Note that we didn’t describe the topic to them that specifically.
We tried sending them $100 last year, and if anything it lowered the response rate.
If you are inclined to dismiss this based on your premise “many AI researchers just don’t seem too concerned about the risks posed by AI”, I’m curious where you get that view from, and why you think it is a less biased source.