Undergraduate in Cognitive Science
Currently writing my thesis on genetic engineering attribution with deep learning under the supervision of Dr Oliver Crook at Oxford University
aaron_mai
Hi Johannes!
I appreciate you taking the time.
“Linch’s comment on FP funding is roughly right, for FP it is more that a lot of FP members do not have liquidity yet”
I see, my mistake! But is my estimate sufficiently off to overturn my conclusion?
“There were also lots of other external experts consulted.”
Great! Do you agree that it would be useful to make this public?
“There isn’t, as of now, an agreed-to-methodology on how to evaluate advocacy charities, you can’t hire an expert for this.”
And the same is true for evaluating cost-effectiveness analyses of advocacy charities (e.g. yours on CATF)?
“So the fact that you can be much more cost-effective when you are risk-neutral and leverage several impact multipliers (advocacy, policy change, technological change, increased diffusion) is hard to explain and not intuitively plausible.”
Sure, that’s what I would argue as well. That’s why it’s important to counter this skepticism by signalling very strongly that your research is trustworthy (e.g. through publishing expert reviews).
“The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn’t necessarily delve into those or see if they had been interpreted correctly.”
>> Thanks for clarifying! I wonder if it would be even better if the review were done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn’t surprise me.
“Re making things public, that’s a bit trickier than it sounds. Usually I’d leave a bunch of comments in a google doc as I went, which wouldn’t be that easy for a reader to follow. You could ask someone to write a prose evaluation—basically like an academic journal review report—but that’s quite a lot more effort and not something I’ve been asked to do.”
>> I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for whoever spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.
“it’s like you’re sending the message “you shouldn’t take our word for it, but there’s this academic who we’ve chosen and paid to evaluate us—take their word for it”.”
>> Maybe that would be weird for some people. I would be surprised, though, if the majority of people didn’t interpret a positive expert review as a signal that your research is trustworthy (even if it’s not actually a signal, because you chose and paid that expert).
Hi Michael!
“You only mention Founders Pledge, which, to me, implies you think Founders Pledge don’t get external reviews but other EA orgs do.”
> No, I don’t think this, but I should have made it clearer. I focused on FP because I happened to know that they didn’t have an external expert review of one of their main climate-charity recommendations, CATF, and because I couldn’t find any report on their website about an external expert review.
I think my argument here holds for any other similar organisation.
“This doesn’t seem right, because Founders Pledge do ask others for reviews: they’ve asked me/my team at HLI to review several of their reports (StrongMinds, Actions for Happiness, psychedelics) which we’ve been happy to do, although we didn’t necessarily get into the weeds.”
> Cool, I’m glad they are doing it! But if you say “we didn’t necessarily get into the weeds”, does it count as an independent, in-depth expert review? If yes, great; then I think it would be good to make that public. If no, the conclusion in my question/post still holds, doesn’t it?
I’m not sure, but according to Wikipedia, a total of ~3 billion dollars has been pledged via Founders Pledge. Even if that doesn’t increase and only 5% of that money is donated according to their recommendations, we are still in the ballpark of 150 million USD, right?
On the last question I can only guess as well. So far, around 500 million USD has been donated via Founders Pledge. Founders Pledge has existed for around six years, so that averages to roughly 85 million USD per year since it started. It seems likely to me that at least 5% has been allocated according to their recommendations, which gives an average of ~4 million USD per year. The true value is of course much higher, because other people, who haven’t taken the pledge, follow their recommendations as well.
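As a sanity check, here is that arithmetic as a short back-of-the-envelope sketch in Python (all inputs are the rough assumptions from this comment, not confirmed figures):

```python
# Back-of-the-envelope: money moved per year by Founders Pledge
# recommendations, using the rough assumptions stated above.

total_donated_usd = 500e6    # ~500M USD donated via Founders Pledge so far (assumption)
years_active = 6             # FP has existed for ~6 years (assumption)
share_following_recs = 0.05  # assume at least 5% follows their recommendations

avg_per_year = total_donated_usd / years_active      # ~83M USD/year
moved_by_recs = avg_per_year * share_following_recs  # ~4.2M USD/year

print(f"Average donated per year: ${avg_per_year / 1e6:.0f}M")
print(f"Moved by recommendations: ${moved_by_recs / 1e6:.1f}M")
```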
I actually think more is needed.
If “it’s a mistake not to do X” means “it is in alignment with the person’s goal to do X”, then I think there are a few ways in which the claim could be false.
I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:
(1) You are already close to optimal effectiveness, and the increase in effectiveness from some additional research into EA is so small that you would be maximizing by just using that time to earn money and donate it, or to have a direct impact.
(2) Pursuing EA causes you to not achieve another goal which you value at least equally, or a set of goals which you, in total, value at least equally.
If that’s true, then we need to reduce the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben’s claim holds is in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?
Super interesting, thanks!
[Question] On GiveWell’s estimates of the cost of saving a life
I’d say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C >= A, then pursuing the project of EA wouldn’t be worth it; if, however, C < A, then it would be worth it, right?
To be more concrete, let us say that the difference in value between the commonsense distribution of resources to do good and the ideal one is only 0.5%. Let us also assume it would cost you only a minute to find the ideal distribution, and that the value of spending that minute in your commonsense way is smaller than that 0.5% increase. Surely it would still be worth seeking the ideal distribution (= engaging in the project of EA), right?
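To make the comparison explicit, here is the rule and the worked example in symbols (a minimal sketch; V is notation I’m introducing for the value of the commonsense allocation):

```latex
% The rule above, with A and C as defined in the comment:
\text{pursue the EA search} \iff A > C
% Worked example (V = value of the commonsense allocation, my notation):
%   A = 0.005 \cdot V   -- the 0.5% improvement from the ideal allocation
%   C = value of one minute spent doing good the commonsense way
% For any sizeable V we get A > C, so the one-minute search is worth it.
```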
Do you still recommend these approaches, or has your thinking shifted on any of them? Personally, I’d be especially interested in whether you still recommend “Produce a shallow review of a career path few people are informed about, using the 80,000 Hours framework.”
Hey, thank you very much for the summary!
I have two questions:
(1) How should one select which moral theories to use in one’s evaluation of the expected choice-worthiness of a given action?
“All” seems impossible, supposing the set of moral theories is indeed infinite; “whatever you like” seems to justify basically any act by just selecting or inventing the right subset of moral theories; “take the popular ones” seems very limited (admittedly, I don’t have an argument against that option, but is there a positive one for it?).
(2) How should one assign probabilities to moral theories?
I realise that these are probably still controversial issues in philosophy, so I don’t expect a solution. Rather, any ideas on how to resolve them, however speculative, would be great!
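For reference, both questions feed into the same quantity: the expected choice-worthiness of an action a. A standard way to write it is (a sketch; the notation is mine, not from the summary):

```latex
% Expected choice-worthiness of an action a under moral uncertainty
% (the standard "maximize expected choice-worthiness" formulation):
EC(a) = \sum_i p(T_i) \, CW_i(a)
% T_i     : the moral theories included in the evaluation  -> question (1)
% p(T_i)  : one's credence in theory T_i                   -> question (2)
% CW_i(a) : the choice-worthiness of a according to T_i
```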
Hey Pablo,
Thanks a lot for the answer, I appreciate you taking the time! I think I now have a much better idea of how these calculations work (and I’m much more skeptical, tbh, because there are so many effects that might make a big difference which are not captured in the expected value calculations).
Also, thanks for the link to Holden’s post!