Note that GiveWell / Good Ventures (unsurprisingly) like to research a charity or cause area themselves before they direct funding to it, and this is tightly constrained by the pace of GiveWell's research staff growth. So in practice, many high-leverage opportunities are still (in my opinion) available to marginal EtGers — at least, if those EtGers are willing to be even 1/5th as proactive about finding good opportunities as, say, Matt Wage is. Maybe that won't be true after 10 more years of research by GiveWell (incl. OpenPhil), but I think it'll hold for the foreseeable future.
There are probably additional reasons GiveWell / Good Ventures won't fund particular things, besides the fact that GiveWell hasn't yet researched them in sufficient depth. E.g. GiveWell might think it's a good thing for there to be multiple meta-charities in the EA space that maintain independence, so even if funding CEA projects were a clear win, they might still think it's a bad idea for GW/GV to direct any support to CEA projects.
And finally, it’s also possible that individual EtGers might have different values or world-models than the public faces of GW/GV have, and for that reason those marginal EtGers could have good opportunities available to them that are not likely to be met by GW-directed funding anytime soon, if ever.
(I say all this as a random EA who thinks about these things, not as a soon-to-be GW staffer.)
That said, I also think people with the right mix of talents should seriously consider applying to do cause prioritization research at GW or elsewhere, and people with a different mix of talents should consider starting new projects/organizations, especially in coordination with an already-interested funder like GV.
Yes, I think it's right that people can find opportunities beyond those researched by GW if they have different values or a different epistemology, proactively investigate opportunities to fund, or even outsource this evaluation to Wage, EA Ventures, Beckstead, or elsewhere.
I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this—e.g., accepting others’ EtG money?
In fact, I’d outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I’m primarily motivated by altruistic considerations?
Well, even if you're primarily motivated by altruistic considerations, there are likely to be some significant personal factors that you can assess more easily through introspection than anyone else can. A related and clearly beneficial practice is getting advice from mentors whom you consult when facing a bigger-than-usual decision.
My other thought is: what kinds of decisions do you want to outsource? Clever altruistic people have occasionally described how they made various kinds of decisions in their personal lives, and these can be copied, e.g.:
http://www.gwern.net/DNB%20FAQ
https://meteuphoric.wordpress.com/2014/11/21/when-should-an-effective-altruist-be-vegetarian/
http://robertwiblin.com/2012/04/19/should-you-floss-a-cost-benefit-analysis/
Absolutely re personal factors. “Outsource” is an overstatement.
And no, I don’t mean decisions like whether to be a vegetarian (which, as I’ve noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.
I mean a personalized version of what 80,000 Hours does, for people mid-career. Imagine several people in their mid-30s to mid-40s—a USAID political appointee; a law firm partner; a data scientist working in healthcare—who have decided they are willing to make significant lifestyle changes to better the world. What should they do? This is a very different inquiry than it is for an undergrad. And for some people, a lot turns on it—millions of dollars. Given the amount at stake, it seems like a decision the EA community should take just as seriously as how an EA organization should spend millions of dollars.
Ah, mid-career work-related decisions. Yes, that seems important. Because mid-career decisions are more individually tailored, they're harder for 80,000 Hours to address, though they're still better equipped than most for the task.
Although career direction is important, you can see why it might get less attention than directing donations—everyone's money works the same, so one set of charity evaluations generalises reasonably well to everyone with fairly similar values. Career decisions are harder to generalise.
Mentors who sympathise with the idea of effective altruism are helpful here, because they know you. Special interest groups could also be useful: for people in policy, it makes sense to get acquainted with other effective altruists in a similar space, even ones living in a different country. And if someone with an unusually high-stakes career (say Jaan Tallinn, a cofounder of Skype) wanted to make an altruistic decision about his career, I'm sure he could pull together some of the 80,000 Hours team and others to do some relevant research for him.
Beyond that, how we can get these questions better answered is an open question :)
I'm thinking more along the lines of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals' ten-year strategic plans. In a perfect world, one would be able to donate one's talents (in addition to one's money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.
MIRI is focussing on mathematical AI safety research at the moment, so they wouldn't currently want to act as a director of EA resources in general!
I think for people who really have substantial personal non-monetary resources to give away, there are people prepared to step into a temporary advice-giving role, which might not be so materially different from what you're describing. Even with my limited non-monetary resources, I've got quite helpful advice from people like Carl Shulman, Nick Beckstead and Paul Christiano, who I think act as something of a collective answerer of miscellaneous EA questions!
Mentoring the mentors: the problem with giving advice to senior people is that if you know less about their domain than they do, your advice might well make them worse off. In such cases, it's often preferable to bring them together with similar people, so they can bounce ideas off one another. Maybe I'm still missing some considerations, but these reservations seem worth taking into account.
in practice many high-leverage opportunities are still (in my opinion) available to marginal EtGers — at-least, if those EtGers are willing to be at least 1/5th as proactive about finding good opportunities as, say, Matt Wage is.
Interesting! Are you able to be more concrete about those opportunities? (Or how proactive Matt is?)
And finally, it’s also possible that individual EtGers might have different values or world-models than the public faces of GW/GV have, and for that reason those marginal EtGers could have good opportunities available to them that are not likely to be met by GW-directed funding anytime soon, if ever.
Yeah, definitely agree that this is the case—on the other hand, it seems like there are a lot of EtGers with a fairly diverse set of values/world-models in place already. I’m worried specifically about marginal EtGers; I think the average EtGer is doing super useful stuff.
From talking to Matt Wage a few times, I got the impression that he spends the equivalent of a few full-time work weeks per year figuring out where to donate. Requiring potential donors to spend that much time seems like a flaw in the system, and EA Ventures seems to be addressing it.
I don’t know the whole story, but Matt Wage kept close tabs on FLI, and gave a substantial amount of money at a well-chosen time, which helped make the AI conference planning go more smoothly.
Thanks, Ryan. That’s all very helpful.
(And the MIRI reference was a superintelligent AI joke.)
Haha, ohhhhhh!
This is now a thing: http://effective-altruism.com/ea/174/introducing_the_ea_funds/