First of all—I’m really glad you wrote this comment. This is exactly the kind of transparency I want to see from EA orgs.
On the other hand, I want to push back on the now-last paragraph of your comment (on why you didn’t write about this before). I strongly think that it’s wrong to wait for criticism before you explain big and important decisions (like spending 15 million pounds on a castle). The fact that criticism arose here is basically random, the result of outside critics looking in. In a better state of affairs, the EA community would know which things it needs to look at and maybe criticise. Otherwise there’s a big chance they’ll miss things.
In other words, I think it’s very important that major EA orgs proactively share the information and reasoning about big decisions.
This sort of comment sounds good in the abstract, but what specific process would you propose that you think would actually achieve this? CEA has to post all project proposals over a certain amount to the EA forum? Are people actually going to read them? What if they only appeal to specific funders? How much of a tax on the time of CEA staff are we willing to pay in order to get this additional transparency?
Personally, I think something like a quarterly report on incoming funds and outgoing expenses, ongoing projects and cost breakdowns, and expected and achieved outcomes would work very well. This is something I’d expect of any charity or NGO that values effectiveness and empirical backing, and particularly of one that places them at the center of its mission statement, so I struggle to think of it as a “tax” on the time of CEA workers rather than something that should be an accepted and factored-in cost of doing business.
The grandparent comment asks for decisions to be explained before criticism appears. Your proposal (which I do think is fairly reasonable) would not have helped in this case: Wytham Abbey would have got a nice explanation in the next quarterly report after it got done, i.e. far too late.
You would instead require ongoing, very proactive transparency on a per-decision basis in order to really pre-empt criticism.
> I struggle to think of it as a “tax” on the time of CEA workers rather than something that should be an accepted and factored-in cost of doing business.
I put a negative framing on it and you put a positive one, but it’s a cost that prevents staff from doing other things with their time and so should be prioritised and not just put onto an unbounded queue of “stuff that should be done”.
> The grandparent comment asks for decisions to be explained before criticism appears. Your proposal (which I do think is fairly reasonable) would not have helped in this case: Wytham Abbey would have got a nice explanation in the next quarterly report after it got done, i.e. far too late.
I think my broader frame around the issue affected how I read the parent comment. I took the problem to be a general issue with EA transparency. My thinking on a lot of the criticism from within EA was that the lack of transparency as a general issue is the larger problem: if EAs knew that a report/justification was coming, it would not have been such an issue within the community. I do see your point now, although I think there are some fairly easy ways around it, like setting a reasonably high bar, based on CEA’s typical spending per line item, above which a “this is a big deal” announcement is required.
> I put a negative framing on it and you put a positive one, but it’s a cost that prevents staff from doing other things with their time and so should be prioritised and not just put onto an unbounded queue of “stuff that should be done”.
I agree that it is a cost, like all other things. On prioritization, I would argue that because EA principles are so heavily tied to cost-effectiveness and empiricism, treating this as something that can be forgone to free CEA staff up for other stuff that should be done is not only hypocritical, it’s epistemically bad insofar as it implies that EAs (or at least EAs who work at CEA) are not beholden to the same principles of transparency and epistemic rigor that they expect from other similar organizations, i.e. “we are above these principles for some reason or other”.
I think this is all pretty reasonable, but also I suspect I might think that existing similar organisations were doing too much of this kind of transparency activity.
> CEA has to post all project proposals over a certain amount to the EA forum?
Yes, that sounds about right. Although I would add decisions that are not very expensive but are very influential.
> What if they only appeal to specific funders?
What do you mean?
> How much of a tax on the time of CEA staff are we willing to pay in order to get this additional transparency?
A significant amount. This is well worth it. Although in practice I don’t imagine there are that many decisions of this calibre. I would guess about 2-10 per year?
That’s pretty unclear to me. We are in the position of maximum hindsight bias: an unusual and bad event has happened, and that’s the classic point at which people overreact with precautions.

I’ve been writing the same calls for transparency for months. This has nothing to do with FTX.
> I strongly think that it’s wrong to wait for criticism before you explain big and important decisions (like spending 15 million pounds on a castle).
I disagree with this. The property may well increase in value over time, and be sold at a profit if EAs sell it. I don’t think EAs should publicly discuss every investment they make at the $20M level (except insofar as public discussion is useful for all decisions), and if there’s a potential direct altruistic benefit to the investment then that makes it less important to publicly debate, not more important.
(Analogy: if a person deciding to invest $20 million in a for-profit startup with zero direct altruistic benefit requires no special oversight, then a person deciding to invest $20 million in a for-profit startup that also has potential altruistic benefits calls for even less oversight, since we’ve now added a potentially nice and useful feature to an action that was already fine and acceptable beforehand. Altruistic side-effects shouldn’t increase the suspiciousness of an action that already makes sense on its own terms.)
See also Oliver’s point, “Purchase price—resale price will probably end up in the $1-$3MM range.”
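(One rough way to make the “investment, not expense” framing concrete; the symbols below are illustrative and not taken from the thread. The true cost of a purchase like this is roughly

$$\text{net cost} \;\approx\; P_{\text{purchase}} - \mathbb{E}[P_{\text{resale}}] \;+\; \text{upkeep} \;+\; \text{foregone investment returns},$$

so on Oliver’s figures the purchase-minus-resale term would be on the order of $1–3MM rather than the full headline price.)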
I’m not sure about this particular case, but I don’t think the value of the property increasing over time is a generally good argument for why investments need not be publicly discussed. A lot of potential altruistic spending has benefits that accrue over time, where the benefits of money spent earlier outweigh the benefits of money spent later, as has been discussed extensively when comparing giving now vs. giving later.
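(A minimal formalisation of the now-vs-later comparison, again with illustrative symbols not from the thread: if $u_t$ is the altruistic impact per pound spent at time $t$, and $r$ is the return on investing the money instead, then spending now beats investing for one period and spending later exactly when

$$u_0 > (1+r)\,u_1,$$

i.e. when the cost-effectiveness of opportunities declines faster than invested money grows. So “the asset appreciates” only settles the question if the resale value grows faster than the value of the opportunities foregone.)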
The whole premise of EA is that resources should be spent in effective ways, and a potential altruistic benefit is no excuse for spending money ineffectively.
Would you still disagree if this were an outright 15M£ expense?
This is a very risky investment. I don’t know what Oliver’s point is based on, but I saw another (equally baseless) opinion online that since they bought it right before a market crash, chances are they’ve already lost millions. I’d probably not feel the same way about some diversified investment portfolio, but millions in a single real-estate investment? That does require significant oversight.
Re: your analogy, I disagree both with the claim and with the idea that it’s analogous. CEA is not like a person and should not be treated as one; it’s an organisation purporting to represent the entire movement. And when people do something that they hope will have a big impact, and it’s important to them that the impact is positive, broad oversight is much more important than it is for an investment with no chance of a big impact.
> Would you still disagree if this were an outright 15M£ expense?
E.g., if EAs overpaid 30M£ for a property that resells at 15M£? I’d be a bit surprised they couldn’t get a better deal, but I wouldn’t feel concerned without knowing more details.
Seems to me that EA tends to underspend on this category of thing far more than it overspends, so I’d expect much more directional bias toward risk aversion than risk-seeking, toward naive virtue signaling over wealth signaling, toward Charity-Navigator-ish overhead-minimizing over inflated salaries, etc. And I naively expect EVF to err in this direction more than a lot of EAs do, and to over-scrutinize this kind of decision. I would need more information than just “they cared enough about a single property with unusual features to overpay by 15M£” to update much from that prior.
We also have far more money right now than we know how to efficiently spend on lowering the probability that the world is destroyed. We shouldn’t waste that money in large quantities, since efficient ways to use it may open up in the future; but I’d again expect EA to be drastically under-spending on weird-looking ways to use money to un-bottleneck us, as opposed to EA being corrupt country-estate-lovers.
It’s good that there’s nonzero worry about simple corruption, since we want to notice early warning signs in a world where EAs do just become corrupt and money-hungry (and we also want to notice if specific individual EAs or pseudo-EAs acquire influence in the community and try to dishonestly use it for personal gain). But it’s not high on my list of ways EA is currently burning utility, or currently at risk of burning utility.
I’m confused why you wouldn’t feel concerned about EA potentially wasting 15M pounds (talking about your hypothetical example, not the real purchase). I feel that would mean that EA is not living up to its own standards of using evidence and reasoning to help others in the best possible way.
Since EA isn’t optimizing the goal “flip houses to make a profit”, I expect us to often be willing to pay more for properties than we’d expect to sell them for. Paying 2x is surprising, but it doesn’t shock me if that sort of thing is worth it for some reason I’m not currently tracking.
MIRI recently spent a year scouring tens of thousands of properties in the US, trying to find a single one that met conditions like “has enough room to fit a few dozen people”, “it’s legal to modify the buildings or construct a new one on the land if we want to”, and “near but not within an urban center”. We ultimately failed to find a single property that we were happy with, and gave up.
Things might be easier outside the US, but the whole experience updated me a lot about how hard it is to find properties that are both big and flexible / likely to satisfy more than 2-3 criteria at once.
At a high level, seems to me like EA has spent a lot more than 15M£ on bets that are vastly more uncertain and dependent-on-contested-models than “will we want space to house researchers or host meetings?”. Whether discussion and colocation is useful is one of the only things I expect EAs to not disagree about; most other categories of activity depend on much more complicated stories, and are heavily about placing bets on more specific models of how the future is likely to go, what object-level actions to prioritize over other actions, etc.