Two observations about ‘skeptical vs speculative’ effective altruism
These are two things I posted on the EA Facebook group that I thought should be saved somewhere for future reference and discussion. I link to the Facebook posts so you can see the responses people have made.
--
Post 1
Alyssa Vance thought this was one of the biggest problems with effective altruism as it exists in practice today (my comments below):
“Most critically, a lot of EA (though notably not OPP) discourages donors from taking risks, by advocating interventions whose effects are short-term, concrete, certain, and easy to evaluate. You need lots of risk to get very high payoffs, but since donors won’t directly benefit from those payoffs, they tend to play it very safe, the same way that bureaucrats do. The problem is, playing it safe lops off the long tail of very good outcomes. Paul Graham explains the general principle (http://paulgraham.com/hiring.html):
“Risk and reward [in wealth investing] are always proportionate. For example, stocks are riskier than bonds, and over time always have greater returns. So why does anyone invest in bonds? The catch is that phrase “over time.” Stocks will generate greater returns over thirty years, but they might lose value from year to year. So what you should invest in depends on how soon you need the money. If you’re young, you should take the riskiest investments you can find.
All this talk about investing may seem very theoretical. Most undergrads probably have more debts than assets. They may feel they have nothing to invest. But that’s not true: they have their time to invest, and the same rule about risk applies there. Your early twenties are exactly the time to take insane career risks.
The reason risk is always proportionate to reward is that market forces make it so. People will pay extra for stability. So if you choose stability—by buying bonds, or by going to work for a big company—it’s going to cost you.””
--
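Graham’s point is easy to see in a quick simulation. The return parameters below (a 7% mean with 18% volatility for stocks, 3% with 5% for bonds) are illustrative assumptions I’ve made up rather than calibrated estimates; what matters is the qualitative pattern: stocks frequently lose money in any single year, yet almost always come out ahead of bonds over thirty.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, years = 100_000, 30

# Assumed, illustrative annual return parameters -- not calibrated to real markets.
stocks = rng.normal(0.07, 0.18, size=(n_trials, years))  # high mean, high variance
bonds = rng.normal(0.03, 0.05, size=(n_trials, years))   # low mean, low variance

stock_growth = np.prod(1 + stocks, axis=1)  # 30-year cumulative growth factor
bond_growth = np.prod(1 + bonds, axis=1)

print(f"P(stocks lose money in a given year):  {np.mean(stocks[:, 0] < 0):.0%}")
print(f"P(stocks beat bonds over {years} years): {np.mean(stock_growth > bond_growth):.0%}")
```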
Clearly as a society we should do a combination of:
i) scaling things with well-understood payoffs;
ii) trying things with uncertain returns, but where at least either the benefit or likelihood of success can be roughly estimated;
iii) trying new things with uncertain returns where the benefit and likelihood of success are very hard to estimate.
There will be some outstanding opportunities in all of these categories—which approach this community should be disproportionately focussed on depends on which one is relatively neglected by the rest of society.
So, is the rest of the world overall risk-loving, risk-neutral or risk-averse when it comes to its (intentional and accidental) social impact?
Or to put it another way, are we being left mostly neglected safe opportunities, or mostly neglected high-risk/high-leverage opportunities?
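To make the ‘lops off the long tail’ point concrete, here is a minimal sketch with made-up numbers: a ‘proven’ option that reliably produces one unit of good per dollar, and a speculative option with a 2% chance of producing 500. Risk-neutral expected value favours the long shot ten to one, but any evaluation that applies a concave (risk-averse) weighting to payoffs will reject it:

```python
import math

# Made-up payoff profiles, per dollar donated (assumptions for illustration only).
safe = {"p": 1.00, "payoff": 1.0}        # 'proven': certain, modest good
longshot = {"p": 0.02, "payoff": 500.0}  # 'speculative': usually fails, huge upside

for name, opt in [("safe", safe), ("longshot", longshot)]:
    ev = opt["p"] * opt["payoff"]             # risk-neutral expected value
    eu = opt["p"] * math.sqrt(opt["payoff"])  # concave (risk-averse) weighting
    print(f"{name:9s} EV = {ev:5.1f}   risk-averse score = {eu:.2f}")

# Risk-neutral: the longshot wins 10.0 vs 1.0. Under the sqrt weighting the safe
# option wins 1.00 vs 0.45 -- the evaluation criterion itself discards the long tail.
```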
---
Post 2
Is ‘highly skeptical EA’ a self-undermining position? Here’s my line of thought:
* ‘Highly skeptical EAs’ think you should demand strong empirical evidence before believing that something works / is true / will happen.
* As a result, they typically think we should work on scaling up ‘proven’ interventions, rather than doing ‘speculative/unproven’ things whose superiority can’t be demonstrated to a high standard of evidence.
* But the claim that it’s higher expected value to do the best ‘proven’ thing, rather than a speculative/unproven thing that on its face looks important, neglected and tractable, is itself unproven to a high standard of evidence. Indeed, it’s a very hard claim to substantiate: it would require a very large project, involving many people over a long period, investigating the average long-term return on, e.g., basic science research. As a result I think we don’t really know at the moment and should be pretty agnostic on this question (see the sketch after this list).
* So if we need strong evidence to accept positions, should we in fact believe with confidence that we really need strong evidence to think something has a high expected social impact?
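Here’s a toy model of that agnosticism, again with made-up numbers: suppose our uncertainty about a speculative intervention’s cost-effectiveness is lognormal, with a median well below the best ‘proven’ option. A rule that demands strong evidence would reject it, even though its expected value, driven by the right tail, is several times higher:

```python
import numpy as np

rng = np.random.default_rng(1)
proven = 1.0  # assumed cost-effectiveness of the best 'proven' option (good per $)

# Assumed lognormal uncertainty over a speculative option: median well below
# the proven option, but with a heavy right tail.
mu, sigma = np.log(0.5), 2.0
speculative = rng.lognormal(mu, sigma, size=1_000_000)

print(f"median: {np.median(speculative):.2f}  (looks worse than proven = {proven})")
print(f"mean:   {speculative.mean():.2f}  (expected value is several times higher)")
print(f"P(speculative beats proven): {np.mean(speculative > proven):.0%}")
```

Whether real opportunities look anything like this is exactly the open question; the sketch only shows that ‘unproven’ and ‘lower expected value’ are different claims.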
Philosophers may note the similarity to logical positivism undermining its own core claim, though in this case it’s more probabilistic than a matter of contradictory formal logic.
The fact that an idea partially undermines itself isn’t a decisive argument against it, but it does suggest we should tread with care.