Hi there! I’m an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC 🔸
Hits-based giving, sorry! I wrote too fast.
What are his thoughts on impact-based giving?
1⁄6 might be high, but perhaps not too many orders of magnitude off. There is an interview on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/) about a forecasting contest in which experts and superforecasters estimated AI extinction risk this century at 1% to 10%. And after all, AI risk is likely to dominate the overall extinction-risk estimate.
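As a rough sanity check (my own arithmetic, not a figure from the podcast):

$$\tfrac{1}{6} \approx 16.7\%, \qquad \tfrac{16.7\%}{10\%} \approx 1.7, \qquad \tfrac{16.7\%}{1\%} \approx 17,$$

so 1⁄6 sits within roughly one order of magnitude of the 1%–10% range from the contest.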
Where will the podcast be released?
It’s inevitable that tulip farmer wages will go down if we airdrop an additional tulip farmer.
Maybe what is inevitable is that the additional person will start producing something else.
I believe the thing people would be most willing to change their behaviour for is feeling part of the in-group. E.g., when people know that they are expected to do X, and that people around them will know if they do not. But that is very hard to implement.
Commenters are also confusing ‘should we give PauseAI more money?’ with ‘would it be good if we paused frontier models tomorrow?’
I think it is reasonable to assume that we should only give PauseAI more money if (as necessary conditions) (1) we think that pausing AI is desirable and (2) PauseAI’s methods are relatively likely to achieve that outcome, conditional on having the resources to do so. I would argue that many of the comments highlight that both of those assumptions are unclear to many forum participants. In fact, I think it is reasonable to stress disagreement with (2) in particular.
This reminds me of quantum computers or fusion reactors — we can build them, but the economics are far from working.
Quantum research scientist here: I would actually argue that is a misleading model for quantum computing. The main issue right now is technical, not economic. We still have to figure out error correction, without which circuits are limited to roughly 1,000 gates, far too few to do anything interesting.
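A back-of-envelope sketch of where that bound comes from (assuming a physical gate error rate around $10^{-3}$, a typical order of magnitude for current hardware rather than a precise figure):

$$N_{\text{gates}} \sim \frac{1}{p_{\text{error}}} \approx \frac{1}{10^{-3}} = 10^{3},$$

i.e. without error correction, errors accumulate after on the order of a thousand gate operations, so longer circuits become dominated by noise.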
But they also gave $0.5 million to research, which is roughly 14%.
I would say they also do a fair amount to help foster an alternative protein market; see, e.g., the roughly $1 million in science and technology spending (https://animalcharityevaluators.org/charity-review/the-good-food-institute/2021-nov/), and they also have (or had) a research grant program (https://gfi.org/wp-content/uploads/2023/01/Research-Grant-Program-RFP-2023.pdf).
Hi! I wonder if there is a reason why all recommendations are in the area of outreach/advocacy (with the exception of wild animal welfare). The Good Food Institute, which works on research and development, used to be recommended by ACE, but it is no longer recommended. I am curious about why this might be the case, though perhaps it is simply that other organizations have more pressing funding needs.
I tend to dislike treating all AI policy as equal: the type of AI policy that affects AI safety is unlikely to represent a significant burden when developing frontier models. Thus, reducing red tape on AI might actually be pretty positive.
Actually, something I am confused about is whether the figure for AI academics is per person-year, as it is for the technical researchers in the various fields.
Hi there! Some minor feedback on the webpage: instead of starting with the causes, I’d argue you should start with the value proposition: “your euro goes further”, or something along those lines. You may want to check ayudaefectiva.org for an example. Congratulations on the new org!
Thanks, Chris, that’s very much true. I’ve clarified I meant donations.
[Question] What is the counterfactual value of different AI Safety professionals?
I already give away everything except what’s required for the bare living necessities.
While admirable, consider whether this is healthy or sustainable. I think donating less is OK; that’s why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the situation you describe.
FWIW, I believe not every problem has to be centered around “cool” cause areas, and in this case I’d argue neither animal welfare nor AI safety should be significantly affected.
I think you should explain in this post the pledge people may take :-)
I am particularly interested in how to make the pledge more concrete. I have always thought that the 10% pledge is somewhat incomplete because it does not consider one’s career, so I think it would be useful to make the career pledge more actionable.