Would you still disagree if this were an outright £15M expense?
This is a very risky investment. I don’t know what Oliver’s point is based on, but I saw another (equally baseless) opinion online that since they bought it right before a market crash, chances are they’ve already lost millions. I probably wouldn’t feel the same way about a diversified investment portfolio, but millions in a single real estate investment? This does require significant oversight.
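To make the diversification point concrete, here’s a minimal toy simulation (Python) of why a single large asset is more volatile than a spread-out portfolio. The return figures (5% mean, 20% volatility, independent assets) are invented for illustration, not estimates for this or any real purchase:

```python
import random

# Toy model: each asset's annual return is an independent draw from
# the same distribution. These parameters are illustrative assumptions,
# not estimates for any real asset.
random.seed(0)
TRIALS = 20_000
MEAN, STD = 0.05, 0.20  # 5% expected return, 20% volatility

def portfolio_std(n_assets: int) -> float:
    """Estimate the return volatility of an equal-weight portfolio."""
    returns = [
        sum(random.gauss(MEAN, STD) for _ in range(n_assets)) / n_assets
        for _ in range(TRIALS)
    ]
    mean = sum(returns) / TRIALS
    return (sum((r - mean) ** 2 for r in returns) / TRIALS) ** 0.5

print(f"1 asset:   std ~ {portfolio_std(1):.3f}")   # ~0.20
print(f"25 assets: std ~ {portfolio_std(25):.3f}")  # ~0.04, i.e. 0.20/sqrt(25)
```

With independent assets the spread shrinks like 1/√n, which is the intuition behind feeling differently about one £15M building than about £15M spread across many holdings.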
Re: your analogy: I disagree both with the claim and with the premise that the two cases are analogous. CEA is not like a private person and should not be treated as one; it’s an organisation purporting to represent the entire movement. And when people do something that they hope will have a big impact, and it matters to them that the impact is positive, broad oversight is much more important than it would be for an investment with no chance of a big impact.
Would you still disagree if this were an outright £15M expense?
E.g., if EAs paid £30M for a property that resells at £15M? I’d be a bit surprised they couldn’t get a better deal, but I wouldn’t feel concerned without knowing more details.
Seems to me that EA tends to underspend on this category of thing far more than it overspends, so I’d expect much more directional bias toward risk aversion than risk-seeking, toward naive virtue signaling over wealth signaling, toward Charity-Navigator-ish overhead-minimizing over inflated salaries, etc. And I naively expect EVF to err in this direction more than a lot of EAs do, and to over-scrutinize this kind of decision. I would need more information than just “they cared enough about a single property with unusual features to overpay by £15M” to update much from that prior.
We also have far more money right now than we know how to efficiently spend on lowering the probability that the world is destroyed. We shouldn’t waste that money in large quantities, since efficient ways to use it may open up in the future; but I’d again expect EA to be drastically underspending on weird-looking ways to use money to un-bottleneck us, as opposed to EA being corrupt country-estate-lovers.
It’s good that there’s nonzero worry about simple corruption, since we want to notice early warning signs in a world where EAs do just become corrupt and money-hungry (and we also want to notice if specific individual EAs or pseudo-EAs acquire influence in the community and try to dishonestly use it for personal gain). But it’s not high on my list of ways EA is currently burning utility, or currently at risk of burning utility.
I’m confused about why you wouldn’t feel concerned about EA potentially wasting £15M (talking about your hypothetical example, not the real purchase). I feel like that would mean EA is not living up to its own standards of using evidence and reasoning to help others in the best possible way.
Since EA isn’t optimizing the goal “flip houses to make a profit”, I expect us to often be willing to pay more for properties than we’d expect to sell them for. Paying 2x is surprising, but it doesn’t shock me if that sort of thing is worth it for some reason I’m not currently tracking.
MIRI recently spent a year scouring tens of thousands of properties in the US, trying to find a single one that met conditions like “has enough room to fit a few dozen people”, “it’s legal to modify the buildings or construct a new one on the land if we want to”, and “near but not within an urban center”. We ultimately failed to find a single property that we were happy with, and gave up.
Things might be easier outside the US, but the whole experience updated me a lot on how hard it is to find properties that are both big and flexible / likely to satisfy more than 2-3 criteria at once.
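One way to see why “more than 2-3 criteria at once” is so punishing: if each criterion independently eliminates most listings, the surviving pool shrinks multiplicatively. A toy sketch, where the listing count and per-criterion pass rates are invented for illustration and are not MIRI’s actual figures:

```python
# Toy model of a conjunctive property search: each roughly independent
# criterion passes only a fraction of listings, so survivors shrink
# multiplicatively. All numbers here are invented for illustration.
listings = 30_000
pass_rates = {
    "room for a few dozen people": 0.10,
    "legal to modify or build on the land": 0.30,
    "near but not within an urban center": 0.20,
    "actually for sale at a workable price": 0.25,
}

survivors = float(listings)
for criterion, rate in pass_rates.items():
    survivors *= rate
    print(f"after {criterion!r}: ~{survivors:,.0f} candidates left")
# 30,000 -> 3,000 -> 900 -> 180 -> 45: four mild-sounding filters already
# leave only dozens of candidates, before any "are we happy with it?" bar.
```

Even generous per-criterion pass rates leave almost nothing once they’re required jointly, which matches the experience of scouring tens of thousands of listings and coming up empty.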
At a high level, seems to me like EA has spent a lot more than £15M on bets that are vastly more uncertain and dependent-on-contested-models than “will we want space to house researchers or host meetings?”. Whether discussion and colocation are useful is one of the few things I expect EAs not to disagree about; most other categories of activity depend on much more complicated stories, and are heavily about placing bets on more specific models of how the future is likely to go, what object-level actions to prioritize over other actions, etc.