Makes sense.
This seems mostly reasonable, but it also seems to have some unstated (rare!) exceptions that are perhaps too obvious to state, though I think it would be good to state them anyway.
E.g. if you already have reason to believe an organization isn’t engaging in good faith, or is inclined to take retribution, then giving them more time to plan that response doesn’t necessarily make sense.
There are probably some other, less extreme examples along the same lines.
I wouldn’t be writing this comment if the language in the post hedged a bit more / left more room for exceptions, but reading a sentence like this makes me want to talk about exceptions:
When posting critical things publicly, however, unless it’s very time-sensitive we should be letting orgs review a draft first.
Makes sense.
We can’t sustain current growth levels
Is this about GDP growth or something else? Sustaining 2% GDP growth for a century (or a few) seems reasonably plausible?
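To make the numbers concrete, here's a quick back-of-the-envelope compounding check (my own sketch, not from the post; Python purely for the arithmetic):

```python
# How much does 2% annual growth compound to over various horizons?
for years in (100, 300, 1000):
    factor = 1.02 ** years
    print(f"2% growth for {years:4d} years -> about {factor:,.0f}x the starting economy")

# Roughly 7x after a century and ~380x after three centuries, which doesn't
# obviously run into physical limits; ~400,000,000x after a millennium starts
# to look much harder to sustain.
```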
Not quite the same question but I believe ACE started as one of the CEA children but is a separate entity now.
Thanks!
It still doesn’t fully entail Matt’s claim, but the content of the interview gets a lot closer than that description. You don’t need to give it a full listen; I’ve quoted the relevant part:
When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn’t make sense. But I didn’t say anything about that to anyone, and I’m pretty sure I also didn’t play through in my head anything about the actual implications if Sam were serious about it.
I wonder if we could have taken that as a red flag. If you take seriously what he said, it’s pretty concerning (implies a high chance of losing everything, though not necessarily anything like what actually happened)!
Seems worthwhile to quote the relevant bit of the interview:
====
Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore? And the answer is, I don’t exactly know. But you’re thinking about the scale of the world there, right? At what point are you out of ways for the world to spend money to change?
Sam Bankman-Fried: There’s eight billion people. Government budgets run in the tens of trillions per year. It’s a really massive scale. You take one disease, and that’s a billion a year to help mitigate the effects of one tropical disease. So it’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money. I think that’s actually a really powerful fact. That means that you should be pretty aggressive with what you’re doing, and really trying to hit home runs rather than just have some impact — because the upside is just absolutely enormous.
Rob Wiblin: Yeah. Our instincts about how much risk to take on are trained on the fact that in day-to-day life, the upside for us as individuals is super limited. Even if you become a millionaire, there’s just only so much incrementally better that your life is going to be — and getting wiped out is very bad by contrast.
Rob Wiblin: But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral. As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.
Sam Bankman-Fried: Completely agree. …
====
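To spell out why I found that level of risk-neutrality concerning (this is my own illustrative sketch, not anything from the interview): a truly risk-neutral actor keeps taking the double-or-nothing bet Rob describes, and repeating it makes eventual wipeout nearly certain even though expected value never moves.

```python
# Double-or-nothing on the whole bankroll, 50/50 each round (the bet in the quote).
# Expected value stays flat, but the chance of keeping anything shrinks geometrically.
start = 10e9  # $10B, as in Rob's example

for rounds in (1, 5, 10, 20):
    p_alive = 0.5 ** rounds                      # probability of never hitting $0
    payoff_if_alive = start * 2 ** rounds        # bankroll if every bet wins
    expected_value = p_alive * payoff_if_alive   # equals `start` every time
    print(f"{rounds:2d} bets: EV ${expected_value/1e9:.0f}B, "
          f"P(not wiped out) = {p_alive:.6f}")

# A risk-neutral actor takes every one of these bets, yet after 20 rounds there's
# less than a one-in-a-million chance of having anything left -- the "high chance
# of losing everything" I had in mind above.
```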
(Same comment as gcmm posted at the same time… Won’t delete mine but it’s basically a duplicate.)
Seems like it’s counterfactual in the same sense as the Facebook match: all of this money is going to charities one way or another, but most of it won’t go to charities EAs find plausible, so you’re moving money from some random charity to something you think is especially good.
I realize this is a bit hypothetical but it does seem like the numbers matter a bit, so I want to ask:
Is there some basis on which you’re imagining that 50% of folks in an animal welfare EA group think that, if factory-farmed animals are moral patients, it’s more likely that they have net-positive lives?
That surprised me a bit (I’d imagine it’s close to 0%, but I’m not too active in any EA groups right now, especially not any animal-focused ones, so I don’t have much data).
This subject would benefit from some distinctions among different kinds of software projects, along with some example projects.
There’s huge variation in comp for programmers (across variables like location and the kind of work they can do), and also huge variation across projects in their complexity and what they need. I think this post understates those distinctions, and therefore somewhat overstates the risk of a cheap engineer of the needed sort leaving for high pay.
Thanks!
Do you have any more details on the opinions you’ve gotten from legal experts? I’d be interested in hearing more about the reasoning for why it’s okay.
I think Paul Christiano explained well here why it might be questionable:
If you make an agreement “I’ll do X if in exchange you do Y,” … Obviously the tax code will treat that differently than doing X without any expectation of reciprocity, and the treatment depends on Y. …
We think these matches are … mostly attributable to this initiative
As someone whose donation was partially matched ($3k of $5k), I can attest that this is correct: I would not have participated without at least some of these efforts from this group of people.
are best thought of as target populations than cause areas … the space not covered by these three is basically just wealthy modern humans
I guess this thought is probably implicit in a lot of EA, but I’d never quite heard it stated that way. It should be stated that way more often!
That said, I think it’s not quite precise. There’s a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say “far future”).
For what it’s worth, I think maybe this would be improved by some more information about the standards for application acceptance. (Apologies if that already exists somewhere and I just haven’t been able to find it.)
[Edited to remove the word “transparency”, which might have different connotations than I intended.]
Yeah, recidivists reverted once, so it seems reasonable to expect they’re more likely to revert again. That makes the net impact of re-converting a recidivist unclear. Targeting them may be less valuable even if they’re much easier to convert.
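As a toy illustration of why ease of conversion alone doesn’t settle it (all numbers below are invented by me purely for illustration):

```python
# Toy expected-value comparison: value of an outreach attempt is roughly
# P(convert) * expected years before reverting. Numbers are made up.

def expected_veg_years(p_convert, years_before_reverting):
    return p_convert * years_before_reverting

recidivist = expected_veg_years(p_convert=0.20, years_before_reverting=2)    # easier to convert, reverts sooner
first_timer = expected_veg_years(p_convert=0.05, years_before_reverting=10)  # harder to convert, sticks longer

print(f"recidivist:  {recidivist:.2f} expected veg-years per attempt")
print(f"first-timer: {first_timer:.2f} expected veg-years per attempt")

# 0.40 vs 0.50 here: a group that's "much easier to convert" can still be the
# lower-value target once the higher reversion rate is priced in.
```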
I don’t have much confidence in how AI will go, so this is very speculative, but one consideration for personal planning that I think about:
If AI does become as powerful as some hope (and doesn’t kill us all), then maybe your personal situation (money, power) at a particular crucial point will be very important. Examples:
- are you still alive when crucial health advances come that could keep you alive much longer?
- can you afford those crucial health advances? (for yourself and/or loved ones)
- are you still alive when technology to “upload” your mind works well, and can you afford it?
- is there going to be some future grab for resources at a crucial time (before or after uploading...), and will you be in a good position for that?
  - hard for me to speculate about what those resources are, but for a probably-quite-silly example: Maybe we’ll auction off whole solar systems?
How you answer these questions could affect whether you live for the next million years, and what that life is like. I see those as reasons to prioritize personal health, money, and power more than you would otherwise.
Note: I’m not actually living my life according to this prescription. If I had to answer why, it’s partly that I think AI progress will probably stall out before the kind of scientific/tech breakthroughs that would allow for uploading minds. But even a small chance could be worth optimizing for, so I’m not sure I’m being rational about this.
(This is about personal planning, but sort of parallels some EA considerations, like “value lock-in”.)