First I want to explain that I think it’s misleading to think of this as a CEA decision (I’ve edited to be more explicit about this). To explain that I need to disambiguate between:
CEA, the project that runs the EA Forum, EA Global, etc.
This is what I think ~everyone usually thinks of when they think of “CEA”, as it’s the group that’s been making public use of that brand
CEA, the former name of a legal entity which hosts lots of projects (including #1)
This is a legacy naming issue …
The name of the legal entity was originally intended as a background brand to house 80,000 Hours and Giving What We Can; other projects have been added since, especially in recent years
Since then the idea of “effective altruism” has become somewhat popular in its own right! And one of the projects within the entity started making good use of the name “CEA”
We’ve now renamed the legal entity to EVF, basically in order to avoid this kind of ambiguity!
Wytham Abbey was bought by #2, and isn’t directly related to #1, except for being housed within the same legal entity. I was the person who owned the early development of the project idea, and fundraised for it. (The funding comes from a grant specifically for this project, and is not FTX-related.) I brought it to the rest of the board of EVF to ask for fiscal sponsorship (i.e. I would direct the funding to EVF and EVF would buy the property and employ staff to work on the project). So EVF made two decisions here: they approved fiscal sponsorship, agreeing to take funds for this new project; and they then followed through and bought the property with the funds that had been earmarked for that. The second of these is technically a decision to buy the building (and was done by a legal entity at the time called CEA), but at that point it was fulfilling an obligation to the donor, so it would have been wild to decide anything else. The first is a real decision, but the decision was to offer sponsorship to a project that would likely otherwise have happened through another vehicle, not to use funds to buy a building rather than for another purpose. Neither of these decisions was made by any staff of the group people generally understand as “CEA”. (All of this ambiguity/confusion is on us, not on readers.)
I’d also like to speak briefly to the “why” — i.e. why I thought this was a good idea. The central case was this:
I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.
In the last few years, I’ve been seeing the rise of what seems to me an extremely important cluster of ideas — around asking what’s most important to do in the world, and taking chains of reasoning from there seriously. I think this can lead to tentative answers like “effective altruism” or “averting existential risk”, but for open-minded intellectual exploration I think it’s better to have the focus on questions than answers. I thought it would be great if we could facilitate more intellectual work of this type, and the specialist-venue model was a promising one to try. We will experiment with a variety of event types.
We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions e.g. about usage that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist. Note that this is a much less expensive experiment than it may look on face value, since we retain the underlying asset of the building.
We wanted to be close to Oxford for easy access to the intellectual communities there. (Property prices didn’t fall off significantly with distance until travel times from Oxford and London became substantially longer.) We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all “country houses”, which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process; compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)
We thought Wytham had the most long-term potential as a venue because it had multiple large common rooms that could take >40 people. The other properties had one large room each holding perhaps a max of 40, but there would be pressure on this space since it would be wanted as both a dining space and for workshop sessions, and would also reduce flexibility of use for meetings (extra construction might have been able to address this, but it was a big question mark whether you could get planning consent). Wytham also benefited from being somewhat larger (about 27,000 sq ft vs roughly 20,000 sq ft for each of the other two) and a more accessible location. Overall we thought that a combination of factors made it the most appropriate choice.
I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.
On why we hadn’t posted publicly about this before: I’m not a fan of trying to create hype. I thought the natural time to post about the project publicly would be when we were ready to accept public applications to run events, and it felt a bit gauche to post before that. Now that there’s a public discussion, of course, it seemed worth explaining some of the thinking.
This comment sounds qualitatively reasonable, but it needs a quantitative complement—it could have been made virtually verbatim had the cost been £1.5m or £150m. I would like to hear the case for why it was actually worth £15m.
Also, a lot of people are talking about ‘optics’, with the implication that the irrational public will misunderstand the +EV of such a decision. But ‘bad optics’ don’t come from nowhere—they come from a very reasonable worry that over time, people who influence a lot of money have some risk of, if not becoming corrupt, at least getting carried away with that influence and rationalising away things like this.
I think we should always take such possibilities seriously, not to imply anyone has actually done anything wildly irresponsible, but to insure against anyone doing so—and to keep grey areas as thin as possible. And I’m increasingly worried that CEA are seriously undertransparent in ways that suggest they don’t think such risks could materialise—which increases my credence that they could. So while I could be convinced this was a reasonable use of funds, I think the decision not to ‘hype’ it builds a dangerous precedent.
You can check how many events GPI, FHI, and CEA have run in Oxford, requiring renting hotels, etc., and the associated costs. I know that GPI runs at least a couple such events per year. Given that, I think that over the next 10-20 years, £15m isn’t outside the realm of plausible direct costs saved, especially if it’s available for other groups to rent in order to help cover costs.
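To make that concrete, here is a rough sketch of the arithmetic; every number below is a hypothetical assumption for illustration, not an actual GPI/FHI/CEA figure:

# Hypothetical back-of-envelope for rental costs avoided; all inputs are assumptions.
events_per_year = 15              # assumed workshops/retreats hosted annually
rental_cost_per_event = 60_000    # assumed cost (GBP) of hiring a comparable venue per event
external_rental_income = 100_000  # assumed annual income from renting to other groups
years = 15                        # assumed time horizon

avoided_costs = years * (events_per_year * rental_cost_per_event + external_rental_income)
print(f"Costs avoided over {years} years: £{avoided_costs:,}")   # £15,000,000 under these assumptions

Halve the assumed usage and the figure roughly halves, which is why the plausibility claim is so sensitive to data we don’t yet have.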
That said, the cost-benefit analysis could be more transparent. On the other hand, I don’t think that private donors should be required to justify their decisions, regardless of the vehicle used. But I do think that CEA is the wrong place for this to be done, given that they aren’t even likely to be a key user of the space. (Edit: Owen’s explanation, that this was done by the parent org of CEA, means I will withdraw the last claim.)
I accept that it’s plausibly in the realm, but that’s not very helpful for knowing whether it’s actually worthwhile—plausibility is a very low bar.
On the other hand, I don’t think that private donors should be required to justify their decisions
This doesn’t seem like a good blanket policy. If private donors can use a charity to buy large luxury goods, it raises worries about that charity becoming a tax haven, or a reward for favours, or any other number of such hard-to-predefine but questionable activities. There are legal implications around charities taking too much of their money from a single donor for exactly that reason.
I don’t think we’re there yet, but, per above, I would like to see more discussion from CEA of the risks associated with moving in that direction.
But I do think that CEA is the wrong place for this to be done, given that they aren’t even likely to be a key user of the space.
Agree. I don’t mind the idea of a sort of EA special projects org that has relatively high autonomy, but I don’t want that org to also be the face of the community—as I understand it, that’s basically how we ended up with FTXgate. We’d also probably want them to source funding from a wide range of sources to avoid the unilateralists’ curse, which this situation is at least flirting with.
I think we mostly agree, but don’t think this was unilateralists’ curse, and it isn’t even close. Many people were aware of or involved in discussions about this, and having multiple donors doesn’t guarantee not falling into unilateralism.
Re unilateralism: obviously more donors isn’t anything like a guarantee, but it is one of hopefully many safeguards. On the other hand, many people approving of it here (assuming they broadly did) doesn’t mean it’s not a form of unilateralism, depending on how those people were included in the discussion—if, e.g., they were all major CEA funders and staff, there’s likely to be extreme selection bias in their opinions on the relevant questions.
I’m telling you that, as someone who hasn’t ever worked for or with CEA directly, I spoke with a couple people about this months before it happened. Clearly, plenty of people were aware, and discussed this—and I didn’t know the price tag, but thought that a center for retreats focused on global priorities and related topics near Oxford sounded like a very good idea. I still think it is, and honestly don’t think that it’s unreasonable as a potential investment into priorities research. Of course, given the current post-FTX situation, it would obviously not have been considered if the project was being proposed today.
I feel like this is pretty important. I think this is basically fine if it’s a billionaire who thinks CEA needs real estate, and less fine if it is incestuous funding from another EA group.
I think the key question is whether the money would counterfactually have been available for another purpose. OCB half-implies it wouldn’t have been by saying that buying the property fulfilled an obligation to the donor, but then I’m confused by the claim “we retain the underlying asset”. If EVF holds the asset subject to an obligation only to use it as a specialist conference centre, it’s unable to realise the value. On the other hand, it would seem surprising if the donation was made on the condition that it must be used to buy the building, but then EVF could do whatever it liked with it (including immediately reselling it).
If EVF is in fact able to resell the building now, then the argument that it was an ear-marked donation is weak, because EVF is making a decision now to hold the asset rather than sell it to raise funds for other EA causes.
The Max Planck Institutes have a dedicated conference center in the Alps (Schloss Ringberg) that is hugely inspirational, and that promotes intensive collaboration, brain-storming, and discussion very effectively.
Likewise for the Center for Advanced Study in the Behavioral Sciences at Stanford—the panoramic views over the San Francisco Bay, the ease of both formal and informal interaction, and the optimal degree of proximity to the main campus (near, but not too near), promote very good high-level thinking and exchange of ideas.
I’ve been to about 70 conferences in my academic career, and I’ve noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking. And, of course, it’s much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.
It’s hard to quantify the effects that an uplifting, distinctive, and beautiful venue can have on the quality and depth of intellectual and moral collaboration. But I think it’s a real effect. And Wytham Abbey seems, IMHO, to be an excellent choice to capitalize on that effect.
The problem to me seems to be that “being hard to quantify” in this case very easily enables rationalizing spending money on fancy venues. I’m also not convinced that non-EA institutions spending money on fancy venues is a good argument for doing so ourselves, or an argument that fancy venues enable better research. These institutions probably just use fancy venues because it is self-serving. As they don’t usually promote doing the most good by being effective, I guess nobody cares much that they do that.
Personally, I think that a certain level of comfort is helpful, e.g. having single / double rooms for everybody so they can sleep well, or not needing to cook, etc. However, I’m very skeptical of anything above that being worth the money.
I don’t want to be adversarial, but I just have to note how much your comment reads, to me and to other people I spoke to, like motivated reasoning. I think it’s very problematic if EA advocates for cost-effectiveness on the one hand and then lightly spends a lot of money on fancy stuff which seems self-serving.
Agreed. The whole founding insight of the EA movement was the importance of rigorously measuring value for money. The same logic is used to justify every warm and fuzzy but low value charity. And it’s entirely reasonable to be very worried when major figures in the EA movement revert to that kind of reasoning when it’s in their self interest.
Yes. It seems very plausible that conferences are good and also that conferences in attractive venues are better, but it seems surprising that this would be the most effective use of the money.
First of all—I’m really glad you wrote this comment. This is exactly the kind of transparency I want to see from EA orgs.
On the other hand, I want to push back against your now last paragraph (on why you didn’t write about this before). I strongly think that it’s wrong to wait for criticism before you explain big and important decisions (like spending 15 million pounds on a castle). The fact that criticism arose here is basically random, and is a result of outside critics looking in. In a better state of affairs, you want the EA community to know about the things they need to look at and maybe criticise. Otherwise there’s a big chance they’ll miss things.
In other words, I think it’s very important that major EA orgs proactively share the information and reasoning about big decisions.
This sort of comment sounds good in the abstract, but what specific process would you propose that you think would actually achieve this? CEA has to post all project proposals over a certain amount to the EA forum? Are people actually going to read them? What if they only appeal to specific funders? How much of a tax on the time of CEA staff are we willing to pay in order to get this additional transparency?
Personally, I think something like a quarterly report on incoming funds and outgoing expenses, ongoing projects and cost breakdowns, and expected and achieved outcomes would work very well. This is something I’d expect of any charity or NGO that values effectiveness and empirical backing, and particularly from one that places it at the center of its mission statement, so I struggle to think of it as a “tax” on the time of CEA workers rather than something that should be an accepted and factored-in cost of doing business.
The grandparent comment asks for decisions to be explained before criticism appears. Your proposal (which I do think is fairly reasonable) would not have helped in this case: Wytham Abbey would have got a nice explanation in the next quarterly report after it got done, i.e. far too late.
You would instead require ongoing, very proactive transparency on a per-decision basis in order to really pre-empt criticism.
I struggle to think of it as a “tax” on the time of CEA workers rather than something that should be an accepted and factored in cost of doing business.
I put a negative framing on it and you put a positive one, but it’s a cost that prevents staff from doing other things with their time and so should be prioritised and not just put onto an unbounded queue of “stuff that should be done”.
The grandparent comment asks for decisions to be explained before criticism appears. Your proposal (which I do think is fairly reasonable) would not have helped in this case: Wytham Abbey would have got a nice explanation in the next quarterly report after it got done, i.e. far too late.
I think my broader frame around the issue affected how I read the parent comment. I took the problem to be a general issue of EA transparency—my thinking on a lot of the criticism from within EA was that the lack of transparency in general is the larger problem: if EAs knew a report/justification would be coming, it would not have been such an issue within the community. I do see your point now, although I do think there are some pretty easy ways around it, like setting a reasonably high bar, based on CEA’s general spending per line item, above which a kind of “this is a big deal” announcement is required.
I put a negative framing on it and you put a positive one, but it’s a cost that prevents staff from doing other things with their time and so should be prioritised and not just put onto an unbounded queue of “stuff that should be done”.
I agree that it is a cost, like all other things. On the point of prioritization, I would argue that because EA principles are so heavily tied into cost-effectiveness and empiricism, treating this as something that can be forgone to free CEA staff up for other stuff that should be done is not only hypocritical, it’s bad epistemically insofar as it implies that EAs (or at least EAs who work at CEA) are not beholden to the same principles of transparency and epistemic rigor that they expect from other similar organizations, i.e. “we are above these principles for some reason or other”.
I think this is all pretty reasonable, but also I suspect I might think that existing similar organisations were doing too much of this kind of transparency activity.
CEA has to post all project proposals over a certain amount to the EA forum?
Yes, that sounds about it. Although I would add decisions that are not very expensive but are very influential.
What if they only appeal to specific funders?
What do you mean?
How much of a tax on the time of CEA staff are we willing to pay in order to get this additional transparency?
A significant amount. This is well worth it. Although in practice I don’t imagine there are that many decisions of this calibre. I would guess about 2-10 per year?
That’s pretty unclear to me. We are in the position of maximum hindsight bias. An unusual and bad event has happened, that’s the classic point at which people overreact about precautions.
I strongly think that it’s wrong to wait for criticism before you explain big and important decisions (like spending 15 million pounds on a castle).
I disagree with this. The property may well increase in value over time, and be sold at a profit if EAs sell it. I don’t think EAs should publicly discuss every investment they make at the $20M level (except insofar as public discussion is useful for all decisions), and if there’s a potential direct altruistic benefit to the investment then that makes it less important to publicly debate, not more important.
(Analogy: if a person deciding to invest $20 million in a for-profit startup with zero direct altruistic benefit requires no special oversight, then a person deciding to invest $20 million in a for-profit startup that also has potential altruistic benefits suggests even less use for oversight, since we’ve now added a potentially nice and useful feature to an action that was already fine and acceptable beforehand. Altruistic side-effects shouldn’t increase the suspiciousness of an action that already makes sense on its own terms.)
See also Oliver’s point, “Purchase price—resale price will probably end up in the $1-$3MM range.”
I’m not sure about this particular case, but I don’t think the value of the property increasing over time is a generally good argument for why investments need not be publicly discussed. A lot of potential altruistic spending has benefits that accrue over time, where the benefits of money spent earlier outweigh the benefits of money spent later—as has been discussed extensively when comparing giving now vs. giving later.
The whole premise of EA is that resources should be spent in effective ways, and potential altruistic benefits is no excuse for an ineffective spending of money.
Would you still disagree if this were an outright 15M£ expense?
This is a very risky investment. I don’t know what Oliver’s point is based on, but I saw another (equally baseless) opinion online that since they bought it right before a market crash, chances are they’ve already lost millions. I’d probably not feel the same way about some diverse investment portfolio, but millions in a single real estate investment? This does require significant oversight.
Re: your analogy—I both disagree with the claim and with the idea that this is analogous. CEA is not like a person and should not be treated as one; they’re an organisation purporting to represent the entire movement. And when people do something that they hope will have a big impact, if it’s important to them that the impact is positive, broad oversight is much more important than if it were an investment with no chance of a big impact.
Would you still disagree if this were an outright 15M£ expense?
E.g., if EAs overpaid 30M£ for a property that resells at 15M£? I’d be a bit surprised they couldn’t get a better deal, but I wouldn’t feel concerned without knowing more details.
Seems to me that EA tends to underspend on this category of thing far more than they overspend, so I’d expect much more directional bias toward risk aversion than risk-seeking, toward naive virtue signaling over wealth signaling, toward Charity-Navigator-ish overhead-minimizing over inflated salaries, etc. And I naively expect EVF to err in this direction more than a lot of EAs, to over-scrutinize this kind of decision, etc. I would need more information than just “they cared enough about a single property with unusual features to overpay by 15M£” to update much from that prior.
We also have far more money right now than we know how to efficiently spend on lowering the probability that the world is destroyed. We shouldn’t waste that money in large quantities, since efficient ways to use it may open up in the future; but I’d again expect EA to be drastically under-spending on weird-looking ways to use money to un-bottleneck us, as opposed to EA being corrupt country-estate-lovers.
It’s good that there’s nonzero worry about simple corruption, since we want to notice early warning signs in a world where EAs do just become corrupt and money-hungry (and we also want to notice if specific individual EAs or pseudo-EAs acquire influence in the community and try to dishonestly use it for personal gain). But it’s not high on my list of ways EA is currently burning utility, or currently at risk of burning utility.
I’m confused why you wouldn’t feel concerned about EA potentially wasting 15M pounds (talking about your hypothetical example, not the real purchase). I feel that would mean that EA is not living up to its own standards of using evidence and reasoning to help others in the best possible way.
Since EA isn’t optimizing the goal “flip houses to make a profit”, I expect us to often be willing to pay more for properties than we’d expect to sell them for. Paying 2x is surprising, but it doesn’t shock me if that sort of thing is worth it for some reason I’m not currently tracking.
MIRI recently spent a year scouring tens of thousands of properties in the US, trying to find a single one that met conditions like “has enough room to fit a few dozen people”, “it’s legal to modify the buildings or construct a new one on the land if we want to”, and “near but not within an urban center”. We ultimately failed to find a single property that we were happy with, and gave up.
Things might be easier outside the US, but the whole experience updated me a lot about how hard it is to find properties that are both big and flexible / likely to satisfy more than 2-3 criteria at once.
At a high level, seems to me like EA has spent a lot more than 15M£ on bets that are vastly more uncertain and dependent-on-contested-models than “will we want space to house researchers or host meetings?”. Whether discussion and colocation is useful is one of the only things I expect EAs to not disagree about; most other categories of activity depend on much more complicated stories, and are heavily about placing bets on more specific models of how the future is likely to go, what object-level actions to prioritize over other actions, etc.
Can we all just agree that if you’re gonna make some funding decision with horrendous optics, you should be expected to justify the decision with actual numbers and plans?
Would be nice if we actually knew how many conferences/retreats were going to be held at the EA castle.
It might be justifiable (I got a tremendous amount of value being in Berkeley and London offices for 2 month stints), but now we’re here talking about it, and it obviously looks bad to anyone skeptical about EA. Some will take it badly regardless, but come on. Even if other movements/institutions way overspend on bad stuff, let’s not use that as an excuse in EA.
The “EA will justify any purchase for the good of humanity” argument will just continue to pop up. I know many EAs who are aware of this and constantly concerned about overspending and rationalizing a purchase. As much as critics act like this is never a consideration and EAs are just naively self-rationalizing any purchase, it’s certainly not the case for most EAs I’ve met. It’s just that an EA castle with very little communication is easy ammo for critics when it comes to rationalizing purchases.
One failed/bad project is mostly bad for the people involved, but reputational risk is bad for the entire movement. We should not take this lightly.
Can we all just agree that if you’re gonna make some funding decision with horrendous optics, you should be expected to justify the decision with actual numbers and plans?
Justify to who? I would like to have an EA that has some individual initiative, where people can make decisions using their resources to try to seek good outcomes. I agree that when actions have negative externalities, external checks would help. But it’s not obvious to me that those external checks weren’t passed in this case*, and if you want to propose a specific standard we should try to figure out whether or not that standard would actually help with optics.
Like, if the purchase of Wytham Abbey had been posted on the EA forum, and some people had said it was a good idea and some people said it was a bad idea, and then the funders went ahead and bought it, would our optics situation look any different now? Is the idea that if anyone posted that it was a bad idea, they shouldn’t have bought it?
[And we need to then investigate whether or not adding this friction to the process ends up harming it on net; property sales are different in lots of places, but there are some where adding a week to the “should we do this?” decision-making process means implicitly choosing not to buy any reasonably-priced property, since inventory moves too quickly, and only overpriced property stays on the market for more than a week.]
* I don’t remember being consulted about Wytham, but I’m friends with the people running it and broadly trust their judgment, and guess that they checked with people as to whether or not they thought it was a good idea. I wasn’t consulted about the specific place Irena ended up buying, but I was consulted somewhat on whether or not Irena should buy a venue, and I thought she should, going so far as being willing to support it with some of my charitable giving, which ended up not being necessary.
(I edited in a way which changed which paragraph was penultimate. I believe Larks was referring to the content which is now expanded on in paragraphs starting “We wanted …” and “We thought …”.)
Sounds like a reasonable decision to me, but I do wonder why the reasoning behind such large and not immediately obvious decisions isn’t communicated publicly more often.
let decisions be guided less by what we think looks good, and more by what we think is good
In general I would agree that it’s better to do what is good rather than what looks good. However, when you are the face of a global movement, optics have a meaningful financial implication. Imagine if this bad press made 1 billionaire 0.1% less likely to get involved with EA. That calculation would dominate any potential efficiency savings from insourcing a service provider.
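To spell out the kind of expected-value comparison I mean (a minimal sketch; every input is a number I’m making up for illustration, not a real figure):

# Hypothetical EV comparison of PR cost vs. efficiency savings; all inputs are assumptions.
p_donor_lost = 0.001                         # assumed 0.1% reduced chance a billionaire engages with EA
donor_counterfactual_giving = 1_000_000_000  # assumed GBP that donor might otherwise have given
annual_savings = 100_000                     # assumed efficiency saving from owning the venue, GBP/year
years = 10                                   # assumed horizon

expected_pr_cost = p_donor_lost * donor_counterfactual_giving
expected_savings = annual_savings * years
print(f"Expected PR cost: £{expected_pr_cost:,.0f}; expected savings: £{expected_savings:,.0f}")

Under these made-up numbers the two terms come out the same order of magnitude, so which one dominates depends entirely on the assumptions; the point is only that the PR term is too large to leave out of the calculation.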
I used to think this and I increasingly don’t. Doing good things is what we’re all about. Doing good things even if it looks bad in the tabloid press is good publicity to the people who actually care about doing good, and they’re more important to us than the rest.
I think an EA that was weirder and more unapologetic about doing its stuff attracts more of the right kind of people and can generally get on with things more than an EA that frantically tries to massage its optics to appeal to everyone.
I am having a hard time, here and speckled throughout the rest of this post, with people writing that we are doing the “good thing” and should do that rather than just what looks good, when the “good thing” in question is buying a castle and not, say, caring about wild animal suffering.
I guess I’ve gone off into the abstract argument about whether we should care about optics or not. I don’t mean to assert that buying Wytham Abbey was a good thing to do, I just think that we should argue about whether it was a good thing to do, not whether it looks like a good thing to do.
I’m arguing that deciding whether or not it is a good thing should include the PR impact (i.e. a weak consequentialist approach). I don’t care if things look bad, unless that perception leads to bad outcomes. In this case, I think the perception could lead to bad outcomes that dominate the good outcomes in the expected value calculation.
I think this kind of reasoning is difficult to follow in practice, and likely to do more harm than good. Eg, I expect some billionaires are drawn to a movement that says fuck PR and actually tries to do what’s important—what if trying to account for PR has a 0.1% chance of putting off those billionaires? Etc.
At the very least, “do what is actually good rather than just what looks good” seems like a valid philosophy to follow if trying to do good, even after accounting for optics—trying to account for optics can easily be misleading, paralysing, etc.
EA is all about uncertain EV calculations—I don’t see why we should exclude optics when calculating EV. We should just embrace the uncertainty and try our best.
The only part of EA that doesn’t involve super uncertain EV calculations which can be misleading and paralysing is randomista development.
This is fair, and I don’t want to argue that optics don’t matter at all or that we shouldn’t try to think about them.
My argument is more that actually properly accounting for optics in your EV calculations is really hard, and that most naive attempts to do so can easily do more harm than good. And that I think people can easily underestimate the costs of caring less about truth or effectiveness or integrity, and overestimate the costs of being legibly popular or safe from criticism. Generally, people have a strong desire to be popular and to fit in, and I think this can significantly bias thinking around optics! I particularly think this is the case with naive expected value calculations of the form “if there’s even a 0.1% chance of bad outcome X we should not do this, because X would be super bad”. Because it’s easy to anchor on some particularly salient example of X, and miss out on a bunch of other tail risk considerations.
The “annoying people by showing that we care more about style than substance” was an example of a countervailing consideration that argues in the opposite direction and could also be super bad.
This argument is motivated by the same reasoning as the “don’t kill people to steal their organs, even if it seems like a really good idea at the time, and you’re confident no one will ever find out” argument.
Thanks! Glad to hear it. This classic Yudkowsky post is a significant motivator. Key quote:
But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.
By the power of naive realism, the corrupted hardware that you run on, and the corrupted seemings that it computes, will seem like the fabric of the very world itself—simply the way-things-are.
And so we have the bizarre-seeming rule: “For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe.”
In general, I agree with you (as I say in my first sentence), but
EV’s objectives are the promotion of EA, i.e. PR is its raison d’être.
in this case, the benefit seems like a rounding error (maybe you could argue it would save ~£100k p.a.) compared to the PR potential.
Even if it’s hard to assess the PR impact (and I acknowledge it could go either way), it’s negligent not to consider it.
A large portion of your rationale is based on the intellectually stimulating effects of being surrounded by nice things. Do you think the people in the building will feel great when there’s such negative media coverage, and they feel the guilt of such an opulent purchase? If I were invited to this place, I’d feel uncomfortable and guilty all the time. There’s already a bunch of negative media coverage. It’s not going to stop. And it’s not going to make the program participants feel inspired.
I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.
While I understand this sentiment, optics can sometimes matter much more than you may at first expect. In this specific case, the kneejerk response of many people on social media to this seeming incongruity (a seemingly extravagant purchase by a main EA org) can potentially cement negative sentiment. By itself, maybe it’s not that bad. But in combination with the other previous bad press we have from the FTX debacle, people will get in their heads that “EA = BAD”. I’m literally seeing major philosophers who might otherwise be receptive to EA being completely turned off because of tweets about Wytham Abbey.
This isn’t to say that the purchase shouldn’t have been made. But you specifically said that you think the general rule should be that we make decisions about what we think is good rather than by what looks good. While technically I agree with this, I think that blindly following such a rule puts us in a state of mind where we are at risk of underestimating just how bad optics can become.
I can see this point, but I’m curious—how would you feel about the reverse? Let’s say that CEA chose not to buy it, and instead did conferences the normal way. A few months later, you’re talking to someone from CEA, and they say something like:
Yeah, we were thinking of buying a nice place for these retreats, which would have been cheaper in the long run, but we realised that would probably make us look bad. So we decided to eat the extra cost and use conference halls instead, in order to help EA’s reputation.
Would you be at all concerned by this statement, or would that be a totally reasonable tradeoff to make?
+1 to Jay’s point. I would probably just give up on working with EAs if this sort of reasoning were dominant to that degree? I don’t think EA can have much positive effect on the world if we’re obsessed with reputation-optimizing to that degree; it’s the sort of thing that can sound reasonable to worry about on paper, but in practice tends to cause more harm than good to fixate on in a big way.
(More reputational harm than reputational benefit, of the sort that matters most for EA’s ability to do the most good; and also more substantive harm than substantive benefit.
Being optics-obsessed is not a good look! I think this is currently the largest reputational problem EA currently actually faces: we promote too much of a culture of fearing and obsessing over optics and reputational risks.)
I think a movement is shaped to a rather large degree by its optics/culture, because that is what will determine who joins and to a lesser extent, who stays when things go wrong.
It seems plausible to me that a culture of somewhat spartan frugality, which seems (from my relatively uninformed perspective) like it was a larger part of the movement in the past, would have a larger positive impact on EA conferences than the stimulating-ness of the site. There’s something poetic about working hard in less luxurious conditions than others would accept, forgoing luxury for extra donations, that I would imagine is at least as animating to the types of people in EA as scenery.
Beyond that, preserving core cultural aspects of a movement, even if the cost is substantial, is crucial to the story that the movement aims to tell.
Most people who are EAs today were inspired by the story of scrappy people gathering in whatever way is cheapest and most accessible, cheeks flushed with intellectual passion, figuring out how to stretch their dollars for the greater good. I think this aesthetic differs substantially from that of AI researchers in a castle, in terms of both losing the “slumming it out for the world” vibe and focusing on the reduction of an existential risk in a way that only a few people can understand rather than global development in a way that everyone can understand.
I’m sure the AI researchers are extremely competent and flushed with intellectual passion for the greatest good too, regardless of where they’re working. Maybe even more so in the castles. I am solely critiquing the optics and their potential cultural effect.
I have little formal evidence for this except the interest in and occasional resistance to the shift towards longtermism that seems widespread on the forum and a few external articles on EA. But I strongly suspect that “person with a career relating to longtermism” is an attractive representation of the archetypal EA to far fewer people than “person who argues about the best place to donate, and donates as much as they can”, because the latter is much more relatable and attainable.
Perhaps an EA mostly focused on attracting select candidates for high impact careers will be more impactful than an EA attempting to make a wide, diffuse cultural impact by including many grassroots supporters. However, it seems that this runs the risk of modifying the target audience of EA from “everyone, because nearly everyone can afford at least 1% with a giving pledge” to .1% of the population of developed countries.
To me, it is at least plausible that the sheer cost of losing the grassroots-y story, paid in fewer, perhaps less-ideologically-committed new recruits, and a generally less positive public view of things related to effective altruism and rationality, could swing the net effect in the other direction. I think the mainstream being influenced over time to be more concerned with sentient beings, more concerned with rationality and calculating expected values on all sorts of purchases/donations, etc is a major potential positive impact that a more outward-facing EA could make.
If EA loses hold of the narrative and becomes, in the eye of the public, “sketchy, naive Masonic elites who only care about their own pet projects, future beings and animals”, I believe the cost to both EA and broader society will be high. Anecdotally, I have seen external articles critiquing EA from these angles, but never from the angle “EA worries too much about its own image”.
I refuse to believe that renting out a conference hall would actually have cost more.
Investing £15,000,000 would yield roughly £1,000,000 a year on the stock market. If you are spending a million pounds on the venue alone for a 1,000-person conference, you are not doing it right. A convention hall typically runs in the tens of thousands of dollars, not the millions. This is a 100x markup.
The calculations there are completely correct under the assumption that the space is being used 365 days a year, which strikes me as wildly implausible. I was working on the assumption that the space is used a few days each year. If this space is actually being occupied 100% of the time, I’d gladly retract my criticism.
The actual usage of the abbey is very likely to be somewhere between these two numbers. Definitely I would expect it to be used far more than for one major conference per year, but I wouldn’t expect 100% usage either.
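A minimal break-even sketch might make the disagreement concrete; every input below is an assumption for illustration, not an EVF figure:

# Hypothetical break-even occupancy for an owned venue; all inputs are assumptions.
purchase_price = 15_000_000     # GBP
opportunity_rate = 0.07         # assumed foregone annual market return
annual_upkeep = 300_000         # assumed staffing/maintenance, GBP per year
day_rate_equivalent = 10_000    # assumed cost of hiring a comparable venue for one day

annual_cost = purchase_price * opportunity_rate + annual_upkeep   # ~£1.35M per year
breakeven_days = annual_cost / day_rate_equivalent
print(f"Break-even usage: about {breakeven_days:.0f} event-days per year")

At these assumed figures the venue needs roughly 135 event-days a year to beat renting; the answer swings a lot with the assumed day rate and upkeep, which is presumably the data the experiment is meant to generate.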
It depends. In isolation, that statement does seem concerning to me, like they may have been overestimating the potential negative optics.
What matters to me here is whether sufficient thought was put into all the different aspects. Clearly, they thought a lot about the non-optics stuff. I have no way of easily evaluating those kinds of statements, as I have very little experience organizing conferences. But I’m concerned that maybe there wasn’t sufficient thought given to just how bad the optics can get with this sort of thing.
My career has been in communications, so I’m used to thinking about PR risks and advocating for thinking about those aspects. Perhaps I’m posting here with a bias from that point of view. If I were in a room with decision-makers, I’d expect my comments here to be balanced by arguments on the other side.
Even so, my suspicion is that, if you write something like “do what really is good rather than just what seems good”, you’re more likely to be underestimating rather than overestimating PR risks.
FWIW, as someone who also works in communications, I strongly disagree here and think EA spends massively too much of its mental energy thinking about optics.
More specifically:
I tend to criticize virtue ethics and deontology a lot more than I praise them—IMO these are approaches that often go badly wrong. But I think PR (for a community like EA) is an area where deontology-like adherence to “behave honestly and with integrity” and virtue-ethics-like focus on “be the sort of person internally who you would find most admirable and virtuous” tends to have far better consequences than “select the action that naively looks as though it will make others like you the most”.
If you’re an EA and you want to improve EA’s reputation, my main advice to you is going to look very virtue-ethics-flavored: be brave, be thoughtful, be discerning, be honest, be honorable, be fair, be compassionate, be trustworthy; and insofar as you’re not those things, be honest about it (because honesty is on the list, and is paramount to trusting everything else about your apparent virtues); and let your reputation organically follow from the visible signs of those internal traits of yours, rather than being a thing you work hard on optimizing separately from optimizing whether you’re actually an awesome person.
Have integrity, and speak truth even when you’re scared to, and be the sort of person you’d have found inspiring to run into in your early days at EA, if someone could read your mind and see the generators of your behavior.
Do stuff that you feel really and deeply proud of, rather than stuff that you’d be embarrassed by if someone fully understood what you were doing and why, context and all.
I think that for all or nearly-all EAs, that should be roughly 90% of the focus of their thoughts about EA’s reputation.
The other 10% is something like: “But sometimes adding time and care to how, when, and whether you say something can be a big deal. It could have real effects on the first impressions you, and the ideas and communities and memes you care about, make on people who (a) could have a lot to contribute on goals you care about; (b) are the sort of folks for whom first impressions matter.”
10% is maybe an average. I think it should be lower (5%?) for an early-career person who’s prioritizing exploration, experimentation and learning. I think it should be higher (20%?) for someone who’s in a high-stakes position, has a lot of people scrutinizing what they say, and would lose the opportunity to do a lot of valuable things if they substantially increased the time they spent clearing up misunderstandings.
I wish it could be 0% instead of 5-20%, and this emphatically includes what I wish for myself. I deeply wish I could constantly express myself in exploratory, incautious ways—including saying things colorfully and vividly, saying things I’m not even sure I believe, and generally ‘trying on’ all kinds of ideas and messages. This is my natural way of being; but I feel like I’ve got pretty unambiguous reasons to think it’s a bad idea.
If you want to defend 0%, can you give me something here beyond your intuition? The stakes are high (and I think “Heuristics are almost never >90% right” is a pretty good prior).
Frankly I would think that there was finally someone with a modicum of sense and understanding of basic PR working in the area. And upgrade my views of the competency of the organisation accordingly.
Also I’d note that “this will save money in the long run” is a fairly big claim that has not been justified. There are literally hundreds of conference venues within a reasonable distance of Oxford, all of which are run by professional event managers who are able to take advantage of specialisation and economies of scale, which makes the claim difficult to believe.
Optics is real. We live in the real world. Optics factor into QALYs or any other metric. Why would the reverse be true, that we ignore reputation-related effects, even if they are fully real?
I feel a bit awkward quoting the Bible, but there’s one part that’s super relevant to this discussion from a secular perspective. It’s 1 Corinthians 8:6–13, and is basically like, “hey, we know doing X isn’t bad, but anyone seeing us doing X would think we’re casting away our principles, which would cause them to do wrong, so we’re not going to do X.” Here’s the quote,
yet for us there is one God, the Father, from whom are all things and for whom we exist, and one Lord, Jesus Christ, through whom are all things and through whom we exist. However, not all possess this knowledge. But some, through former association with idols, eat food as really offered to an idol, and their conscience, being weak, is defiled. Food will not commend us to God. We are no worse off if we do not eat, and no better off if we do. But take care that this right of yours does not somehow become a stumbling block to the weak. For if anyone sees you who have knowledge eating in an idol’s temple, will he not be encouraged, if his conscience is weak, to eat food offered to idols? And so by your knowledge this weak person is destroyed, the brother for whom Christ died. Thus, sinning against your brothers and wounding their conscience when it is weak, you sin against Christ. Therefore, if food makes my brother stumble, I will never eat meat, lest I make my brother stumble.
It also comes off as quite manipulative and dishonest, which puts people off. There are many people who’ll respect you if you disagree with them but state your opinion plainly and clearly, without trying to hide the weird or objectionable parts of your view. There are relatively few who will respect you if they find out you tried to manipulate their opinion of you, prioritizing optics over substance.
And this seems especially harmful for EA, whose central selling point is “we’re the people who try to actually do the most good, not just signal goodness or go through the motions”. Most public conversations about EA optics are extremely naive on this point, treating it as a free action for EAs to spend half their time publicly hand-wringing about their reputations.
What sort of message do you think that sends to people who come to the EA Forum for the first time, interested in EA, and find the conversations dominated by reputation obsession, panicky glances at the news cycle, complicated strategies to toss first-order utility out the window for the sake of massaging outsiders’ views of EA, etc.? Is that the best possible public face you could pick for EA?
In fact, I don’t think that we should adopt the stance “be so terrified of PR risks that you refuse to talk about PR”. I think EA should blurt far more than it currently does, and this will inevitably mean talking at least a little about people’s emotional fears re looking weird to others, being embarrassed to do something, etc.
But recognizing the deep PR costs of EA’s long-standing public obsession with reputation management is at least a first step in the direction of unraveling the obsession for some people, I’d hope.
Yeah I totally agree. I’d agree with the statement “it’s helpful to take optics into account, but not let it dominate our decision making process”. My original comment was in response to the idea that ‘actually doing good is more important than looking like doing good’ which I would argue is an oversimplification of the real world and not a good principle. I don’t think that it’s helpful to care entirely about optics or never care about optics. It’s more nuanced.
I also think it could help to break down the term “optics” a bit. I think the purchase is bad for first impressions, which is one particular type of optics.
Anyways this whole discussion about optics is kind of a red herring. People will be shocked by the purchase because it was by a charity and was pretty exorbitant, and in fact it was (by that one guy’s admission… I’m on a phone and don’t want to look up his name in the comment above) purchased to make conference participants feel inspired and was not made as a cost-saving mechanism. Appearance (not being frugal) reflects reality in this case, at least based on that comment I read by that one guy (and if I’m wrong just let me be wrong at this point, I have work to do and don’t care to debate this further).
But yeah I agree about let’s not wholly concentrate on optics. Of course.
Let’s say we had one charitable person who has a reputation for being charitable, and another charitable person who has a reputation for hurting others. Someone needing charity would avoid the latter, even though the latter is also beneficial.
There’s a big difference between trying to represent yourself in an accurate or an inaccurate way. In either case you’re caring about what people think about you, but if we assume the perceiver is acting in their self interest, then the accurate representation will benefit them, and the inaccurate representation may harm them.
I’m not disagreeing with what you wrote. I’m adding to it that “caring about optics” can actually be more honest. It’s possible to care about optics so that you’re represented honestly, too.
SBF caused damage not because he virtue signaled with his cheap car and lack of style, but because he was misrepresenting himself and being a dick.
It makes sense for people to talk about not wanting to be misrepresented, and if I were a new visitor to the forum and I saw people upset about being misrepresented, I’d probably be sympathetic to them. I also might think they were cry babies and too obsessed with their image, which is what you’re saying could be the case, and I agree with that.
Also just by the way, I guess the ideal would be to care what other people think but be strong enough to do what one thinks is right. I think there’s a psychological element to all this. I’ve lived in some places where I was looked down on, even though I was working hard for their benefit, and it did suck psychologically. It would’ve been better for everyone if people had known how much I cared about them, but yeah it can be important to not worry too much about what other people think, as you wrote.
I do think it’s great when EAs see a false thing being said about EA and they speak up to say what they think the true thing is.
Which can look like “optics”, and is not exactly the same as “just be internally virtuous and go about your business”.
Ideally, though, I think this would apply to false positive claims about EA, as well as false negative claims about EA.
Otherwise, discerning people will be rightly skeptical when you object to criticisms of EA, or will generally suspect you of cherry-picking evidence when you want to push a specific conclusion.
I don’t think EA should be indifferent to “other human beings have false beliefs about something that seems very important to me”.
And plausibly a few EAs should specialize in thinking about EA’s reputation, and go further still. Though I think EA PR specialists should still ground their activities in “first and foremost, be actually-virtuous and honest”.
The stronger claim I want to make is less “never ever think about EA’s reputation”, more “make it your goal to give people accurate models of EA, rather than to improve EA’s reputation per se”.
Treat people (including non-EAs) more like agents, and more like peers.
Even when you don’t actually think someone is your peer, I think that on the whole it turns out to be a really good behavioral policy to share information with people like you would with a (new-to-this-topic) peer.
Try to empower them by giving them more accurate models, even if you’re quietly worried they’ll screw it up somehow. Even when it’s tempting to try to steer people by feeding them subtly-unrepresentative data, resist the temptation and just tell them what your actual model is.
You’ll lose some winnable battles that way, but I think the net effect of EA adopting this approach is that we’re a lot likelier to win the war and cause things to go well in the future.
EA is typically an (honest) virtue signal to sufficiently savvy, discerning assessors of virtue.
This isn’t the only reason people do EA, but it’s a good and important reason, because virtue is a good thing and therefore it’s good when there are incentives to honestly signal it.
Like, the central reason EA is a good idea is that helping people is good, and EA is a good approach / community / body of knowledge for achieving that.
But if something is a way to help others, then it’s also typically a way to show that you care about others (and are smart, discerning, etc.)
And that’s good too. Because it’s socially valuable to have ways to accurately signal those characteristics to others, and because (insofar as visibly having those characteristics is socially rewarded) this increases the incentive to actually do good.
This also suggests that some level of insularity and elitism is prosocial, as long as you don’t fuck up the process of picking which people to trust and value the opinions of.
“EA is a signal of goodness” incentivizes actual goodness insofar as either (a) EA’s good in a way that’s maximally legible, impressive, and emotionally salient to a Random Human Being, or (b) you care more about signaling goodness to the most discerning and knowledgeable people.
It’s the same mechanism as, e.g., an academic field: the quality of academic discourse depends on the fact that physicists care more about impressing their colleagues (plus visibly smart and discerning outsiders), than about impressing random CNN viewing audiences.
If physicists cared more about impressing CNN watchers, then there would be a lot more wannabe Deepak Chopras, people engaged in hyperbole and fuzzy thinking, etc. It’s important that physicists care primarily about elite opinion, and it’s important that they choose the right elite.
[...] Embracing the wrong elites can potentially be arbitrarily bad, because any group can be thought of as an “elite” relative to some selection process. For some people, Infowars is elite opinion, and Infowars fans are the specialists you try to impress. For others, theologians are the elite. Or nutrition scientists, or psi researchers, or your favorite political party, or...
Picking the wrong elite to signal to (and defer to, etc.) can be a complete disaster. That, plus general pressures to virtue-signal egalitarianism, causes a lot of people to try to find some way to avoid having to pick an “elite”.
But populism faces the same core problem as elitism: just as elites can be wrong, Mass Opinion can be wrong too.
In the end, there’s no good alternative to putting in the hard work to identify which people have a stronger grasp on reality.
But once you’ve put in that work, insofar as you feel confident about the result, you should indeed care more about the opinion of people who have accurate beliefs than about what a random person on the street thinks of you.
This has failure modes, because every option has failure modes. But the failure mode “the physics community mostly just cares about popular book sales a la Deepak Chopra” is a lot worse than the failure mode “physicists sometimes put too much weight on a researcher’s bad work”.
And it’s a lot worse, at the meta / civilizational level, for humanity to have no true community of rigorous physicists, than for humanity to have rigorous physicists and theologians. Some things aren’t worth giving up in order to prevent the existence of theology.
The main problem with lavishness, IMHO, is not optics per se, but rather that it’s extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.
(To be clear, this is not to say that we should never follow such reasoning. It’s just a serious pitfall. This is also not original—others have certainly brought this up.)
Also, I imagine having communicated the reasoning behind the purchase publicly before the criticisms would have gone some way in reducing the bad optics, especially for onlookers who were inclined to spend a little bit of time to understand both perspectives. So thinking more about the optics doesn’t necessarily lead you to not do the thing.
Or at least a cheaper one? With better access to public transport? This seems over budget, and public transport is not only better for the environment, it’s also more egalitarian. It would allow people from more impoverished backgrounds to more easily join our community, which—given our demographics—might be something we want to encourage.
EDIT: Yes, I’m aware that you could reach the estate via public transport; the connection is just very bad (on the weekend you have to do a 26-minute walk). That’s why I said “better access”, not “at all accessible”.
This is not a comment on the cheapness point, but in case this feels relevant: private vehicles are not necessary to access this venue; from the Oxford rail station you can catch public buses that drop you off about a 2-minute walk from the venue. It’s a 20-minute bus ride, and the buses don’t come super often (every 60 minutes, I think?), but I just wanted to be clear that you can access this space via public transport.
Presumably it would be easy to arrange a conference minibus to shuttle attendees to and from the station. This seems like the least of the project’s problems.
(However, it is very difficult to get a taxi to or from there; the wait is often 30 minutes.) Edit: people can wait up to an hour and a half to get a taxi from Wytham, which isn’t super practical.
“It would allow people from more impoverished backgrounds to more easily join our community, which—given our demographics—might be something we want to encourage.”
I would be surprised if public transport links were important for accessibility to lower-income demographics, in this specific context. Covering transport costs is common for events, and the last time I went there, a train ticket from London to Oxford was pricier than a taxi ride from the station to Wytham.
A typical researcher might make £100,000 a year. £15,000,000 would yield roughly £1,000,000 a year if invested in the stock market. So you could hire 5 researchers to work full-time, in perpetuity.
Conferences are cool, but do you really think they generate as much research as 5 full time researchers would? As a researcher, I can tell you flat-out the answer is no. I could do much more with 5 people working for me than I could by going to even a thousand conferences.
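To spell out the arithmetic behind this comparison (a rough sketch; the ~7% return, and the assumption that a researcher’s fully loaded cost is roughly twice their salary, are illustrative assumptions rather than figures from the thread):

\[
£15{,}000{,}000 \times 0.07 \approx £1{,}000{,}000 \text{ per year}, \qquad \frac{£1{,}000{,}000 \text{ per year}}{£200{,}000 \text{ per researcher per year}} \approx 5 \text{ researchers}.
\]

On salary alone (£100,000), the same £1,000,000 a year would stretch to roughly 10 researchers.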
You can’t always turn more money into more researchers. You need people who can mentor and direct them, and you need to find people who are good fits for the position, and most of the people who are most interesting to you are also interesting to other employers. In general, I don’t think finding salaries for such people was the bottleneck.
Investing money into the stock market and investing money into real estate are similar. In both cases, the value of your capital can rise or fall over time.
The value of both can rise or fall, but real estate is only an investment when rented out. Otherwise, it’s a durable consumption good. In particular, the EMH* implies that the expected return from buying real estate and renting it out must equal the expected return on stocks. Otherwise, people would sell stocks (driving their price down, and therefore the rate of return up) and buy real estate to lease out.
*While it’s entirely plausible the EMH doesn’t hold, no analysis arguing this is presented, and I don’t think that placing bets on certain sectors of the economy is a particularly good idea for a charity. Notably, arguments against the EMH almost all fall on the side of suggesting the housing market is currently overvalued, because of structural deficiencies (like the inability to short housing) and subsidies that make buying cheaper for individual homeowners (but not charities).
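Stated compactly, the no-arbitrage claim being made here is roughly the following (a sketch in risk-adjusted terms, ignoring liquidity and transaction costs):

\[
\text{rental yield} + \mathbb{E}[\text{price appreciation}] \;\approx\; \mathbb{E}[\text{equity return}].
\]

If the left-hand side were reliably higher, investors would sell stocks and buy rental property until the gap closed, and vice versa.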
There’s plenty of real estate investment that does not depend on the real estate being rented out. That’s why laws get passed that require some real estate to be rented out.
One of the attributes of real estate is that it’s a lot less liquid than stocks and economic theory suggests that market participants should pay a premium for liquidity.
Finally, it’s wrong to say that anything with lower expected returns than stocks is not an investment. People invest in treasury bonds all the time, despite their lower expected returns.
Thanks, this is indeed helpful. I would also like to know though, what made this property “the most appropriate” out of the three in a bit greater detail if possible. How did its cost compare to the others? Its amenities? I think many people in this thread agree that it might have been worth it to buy some center like this, but still question whether this particular property was the most cost effective one.
Thanks, I appreciate the added information! I’m not sure I’m convinced that this was worthwhile, but I feel like I now have a much better understanding of the case for it.
“I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.”
I think you’ve overestimated the value of a dedicated conference centre. The important ideas in EA so far haven’t come from conversations over tea and scones at conference centres but are either common sense (“do the most good”, “the future matters”) or have come from dedicated field trials and RCTs.
I also think you’ve underestimated the damage this will do to the EA brand. The hummus and baguettes signal earnestness. The abbey signals scam.
I’m confident that this will be remembered as one of CEA’s worst decisions.
It’s sad you’re getting downvoted. A manor and 25 acres of nothingness adds nearly nothing to EA when some other space, for instance the hall of a large parish or church, even an abandoned one, could have been rented or purchased on an as-needed basis instead, for a fraction of the cost, whenever conferences or workshops are needed.
Imagine the extent of scrutiny the manor’s purchase would face in early EA. It wouldn’t be pretty.
I think it’s plausible that this purchase saves money, but I strongly disagree with your view of optics.
“think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good”
What looks good has important effects on EA community building, the diffusion of EA ideas and on the ability to promote EA ideas in politics, especially over the longer term.
Whether a decision looks good, i.e., the indirect, long-term effects of the decision on EA’s reputation, is a very important factor in determining whether a decision is good, i.e., approximately maximises expected value.
I’m disappointed that someone at CEA / EV thinks it makes sense to put optics aside and entirely focus on the short-term, direct effects of a decision when calculating expected value—also seems weirdly at odds with longtermist thinking!
How much are electricity, maintenance and property tax for this venue? Historic buildings may require expensive restoration and are subject to complex regulation.
Hey,
First I want to explain that I think it’s misleading to think of this as a CEA decision (I’ve edited to be more explicit about this). To explain that I need to disambiguate between:
CEA, the project that runs the EA Forum, EA Global, etc.
This is what I think ~everyone usually thinks of when they think of “CEA”, as it’s the group that’s been making public use of that brand
CEA, the former name of a legal entity which hosts lots of projects (including #1)
This is a legacy naming issue …
The name of the legal entity was originally intended as a background brand to house 80,000 Hours and Giving What We Can; other projects have been added since, especially in recent years
Since then the idea of “effective altruism” has become somewhat popular in its own right! And one of the projects within the entity started making good use of the name “CEA”
We’ve now renamed the legal entity to EVF, basically in order to avoid this kind of ambiguity!
Wytham Abbey was bought by #2, and isn’t directly related to #1, except for being housed within the same legal entity. I was the person who owned the early development of the project idea, and fundraised for it. (The funding comes from a grant specifically for this project, and is not FTX-related.) I brought it to the rest of the board of EVF to ask for fiscal sponsorship (i.e. I would direct the funding to EVF and EVF would buy the property and employ staff to work on the project). So EVF made two decisions here: they approved fiscal sponsorship, agreeing to take funds for this new project; and they then followed through and bought the property with the funds that had been earmarked for that. The second of these is technically a decision to buy the building (and was done by a legal entity at the time called CEA), but at that point it was fulfilling an obligation to the donor, so it would have been wild to decide anything else. The first is a real decision, but the decision was to offer sponsorship to a project that would likely otherwise have happened through another vehicle, not to use funds to buy a building rather than for another purpose. Neither of these decisions were made by any staff of the group people generally understand as “CEA”. (All of this ambiguity/confusion is on us, not on readers.)
I’d also like to speak briefly to the “why” — i.e. why I thought this was a good idea. The central case was this:
I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.
In the last few years, I’ve been seeing the rise of what seems to me an extremely important cluster of ideas — around asking what’s most important to do in the world, and taking chains of reasoning from there seriously. I think this can lead to tentative answers like “effective altruism” or “averting existential risk”, but for open-minded intellectual exploration I think it’s better to have the focus on questions than answers. I thought it would be great if we could facilitate more intellectual work of this type, and the specialist-venue model was a promising one to try. We will experiment with a variety of event types.
We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions e.g. about usage that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist. Note that this is a much less expensive experiment than it may look on face value, since we retain the underlying asset of the building.
We wanted to be close to Oxford for easy access to the intellectual communities there. (Property prices weren’t falling off significantly with distance until travel time from Oxford and London had become significantly higher.) We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all “country houses”, which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process; compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)
We thought Wytham had the most long-term potential as a venue because it had multiple large common rooms that could take >40 people. The other properties had one large room each holding perhaps a max of 40, but there would be pressure on this space since it would be wanted as both a dining space and for workshop sessions, and would also reduce flexibility of use for meetings (extra construction might have been able to address this, but it was a big question mark whether you could get planning consent). Wytham also benefited from being somewhat larger (about 27,000 sq ft vs roughly 20,000 sq ft for each of the other two) and a more accessible location. Overall we thought that a combination of factors made it the most appropriate choice.
I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.
On why we hadn’t posted publicly about this before: I’m not a fan of trying to create hype. I thought the natural time to post about the project publicly would be when we were ready to accept public applications to run events, and it felt a bit gauche to post before that. Now that there’s a public discussion, of course, it seemed worth explaining some of the thinking.
I hope this is helpful.
This comment sounds qualitatively reasonable, but it needs a quantitative complement—it could have been made virtually verbatim had the cost been £1.5m or £150m. I would like to hear the case for why it was actually worth £15m.
Also, a lot of people are talking about ‘optics’, with the implication that the irrational public will misunderstand the +EV of such a decision. But ‘bad optics’ don’t come from nowhere—they come from a very reasonable worry that over time, people who influence a lot of money have some risk of, if not becoming corrupt, at least getting carried away with that influence and rationalising away things like this.
I think we should always take such possibilities seriously, not to imply anyone has actually done anything wildly irresponsible, but to insure against anyone doing so—and to keep grey areas as thin as possible. And I’m increasingly worried that CEA are seriously undertransparent in ways that suggest they don’t think such risks could materialise—which increases my credence that they could. So while I could be convinced this was a reasonable use of funds, I think the decision not to ‘hype’ it builds a dangerous precedent.
You can check how many events GPI, FHI, and CEA have run in Oxford, requiring renting hotels, etc., and the associated costs. I know that GPI runs at least a couple such events per year. Given that, I think that over the next 10-20 years, £15m isn’t outside the realm of plausible direct costs saved, especially if it’s available for other groups to rent in order to help cover costs.
That said, the cost-benefit analysis could be more transparent. On the other hand, I don’t think that private donors should be required to justify their decisions, regardless of the vehicle used.
But I do think that CEA is the wrong place for this to be done, given that they aren’t even likely to be a key user of the space. (Edit: Owen’s explanation, that this was done by the parent org of CEA, means I will withdraw the last claim.) I accept that it’s plausibly in the realm, but that’s not very helpful for knowing whether it’s actually worthwhile—plausibility is a very low bar.
This doesn’t seem like a good blanket policy. If private donors can use a charity to buy large luxury goods, it raises worries about that charity becoming a tax haven, or a reward for favours, or any number of other hard-to-predefine but questionable activities. There are legal implications around charities taking too much of their money from a single donor for exactly that reason.
I don’t think we’re there yet, but, per above, I would like to see more discussion from CEA of the risks associated with moving in that direction.
Agree. I don’t mind the idea of a sort of EA special projects orgs that has relatively high autonomy, but I don’t want that org to also be the face of the community—as I understand it, that’s basically how we ended up with FTXgate. We’d also probably want them to source funding from a wide range of sources to avoid the unilateralists’ curse, which this situation is at least flirting with.
I think we mostly agree, but don’t think this was unilateralists’ curse, and it isn’t even close. Many people were aware of or involved in discussions about this, and having multiple donors doesn’t guarantee not falling into unilateralism.
I agree on what we agree and disagree about :)
Re unilateralism: obviously more donors isn’t anything like a guarantee, but it is one of hopefully many safeguards. On the other hand, many people approving of it here (assuming they broadly did) doesn’t mean it’s not a form of unilateralism, depending on how those people were included in the discussion—if, e.g., they were all major CEA funders and staff, there’s likely to be extreme selection bias in their opinions on the relevant questions.
I’m telling you that, as someone who hasn’t ever worked for or with CEA directly, I spoke with a couple people about this months before it happened. Clearly, plenty of people were aware, and discussed this—and I didn’t know the price tag, but thought that a center for retreats focused on global priorities and related topics near Oxford sounded like a very good idea. I still think it is, and honestly don’t think that it’s unreasonable as a potential investment into priorities research. Of course, given the current post-FTX situation, it would obviously not have been considered if the project was being proposed today.
Can you say who funded the dedicated grant?
I feel like this is pretty important. I think this is basically fine if it’s a billionaire who thinks CEA needs real estate, and less fine if it is incestuous funding from another EA group.
I think the key question is whether the money would counterfactually have been available for another purpose. OCB half-implies it wouldn’t have been, by saying that buying the property fulfilled an obligation to the donor, but then I’m confused by the claim “we retain the underlying asset”. If EVF holds the asset subject to an obligation only to use it as a specialist conference centre, it’s unable to realise the value. On the other hand, it would seem surprising if the donation was made on the condition that it must be used to buy the building, but then EVF could do whatever it liked with it (including immediately reselling it).
If EVF is in fact able to resell the building now, then the argument that it was an ear-marked donation is weak, because EVF is making a decision now to hold the asset rather than sell it to raise funds for other EA causes.
I take the silence as a no :(
It was OpenPhil, see here.
And did the purchase come with any conditions, like the right for the billionaire to use the venue as his house between the conferences?
Owen—this sounds totally reasonable to me.
The Max Planck Institutes have a dedicated conference center in the Alps (Schloss Ringberg) that is hugely inspirational, and that promotes intensive collaboration, brainstorming, and discussion very effectively.
Likewise for the Center for Advanced Study in the Behavioral Sciences at Stanford—the panoramic views over the San Francisco Bay, the ease of both formal and informal interaction, and the optimal degree of proximity to the main campus (near, but not too near), promote very good high-level thinking and exchange of ideas.
I’ve been to about 70 conferences in my academic career, and I’ve noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and on the creativity of their thinking. And, of course, it’s much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.
It’s hard to quantify the effects that an uplifting, distinctive, and beautiful venue can have on the quality and depth of intellectual and moral collaboration. But I think it’s a real effect. And Wytham Abbey seems, IMHO, to be an excellent choice to capitalize on that effect.
The problem, to me, seems to be that “being hard to quantify” in this case very easily enables rationalizing spending money on fancy venues. I’m also not convinced that non-EA institutions spending money on fancy venues is a good argument for doing so ourselves, or an argument that fancy venues enable better research. These institutions probably just use fancy venues because it is self-serving, and since they don’t usually promote doing the most good by being effective, I guess nobody cares much that they do.
Personally, I think that a certain level of comfort is helpful, e.g. having single/double rooms for everybody so they can sleep well, or not needing to cook, etc. However, I’m very skeptical of anything above that being worth the money.
I don’t want to be adversarial, but I just have to note how much your comment reads, to me and other people I spoke to, like motivated reasoning. I think it’s very problematic if EA advocates for cost-effectiveness on the one hand and then lightly spends a lot of money on fancy stuff which seems self-serving.
Agreed. The whole founding insight of the EA movement was the importance of rigorously measuring value for money. The same logic is used to justify every warm-and-fuzzy but low-value charity. And it’s entirely reasonable to be very worried when major figures in the EA movement revert to that kind of reasoning when it’s in their self-interest.
Yes. It seems very plausible that conferences are good and also that conferences in attractive venues are better, but it seems surprising that this would be the most effective use of the money.
First of all—I’m really glad you wrote this comment. This is exactly the kind of transparency I want to see from EA orgs.
On the other hand, I want to push back against your now last paragraph (on why you didn’t write about this before). I strongly think that it’s wrong to wait for criticism before you explain big and important decisions (like spending 15 million pounds on a castle). The fact that criticism arose here is basically random, and is a result of outside critics looking in. In a better state of affairs, you want the EA community to know about the things they need to look at and maybe criticise. Otherwise there’s a big chance they’ll miss things.
In other words, I think it’s very important that major EA orgs proactively share the information and reasoning about big decisions.
This sort of comment sounds good in the abstract, but what specific process would you propose that you think would actually achieve this? CEA has to post all project proposals over a certain amount to the EA forum? Are people actually going to read them? What if they only appeal to specific funders? How much of a tax on the time of CEA staff are we willing to pay in order to get this additional transparency?
Personally, I think something like a quarterly report on incoming funds and outgoing expenses, ongoing projects and cost breakdowns, and expected and achieved outcomes would work very well. This is something I’d expect of any charity or NGO that values effectiveness and empirical backing, and particularly from one that places it at the center of its mission statement, so I struggle to think of it as a “tax” on the time of CEA workers rather than something that should be an accepted and factored-in cost of doing business.
The grandparent comment asks for decisions to be explained before criticism appears. Your proposal (which I do think is fairly reasonable) would not have helped in this case: Wytham Abbey would have got a nice explanation in the next quarterly report after it got done, i.e. far too late.
You would instead require ongoing, very proactive transparency on a per-decision basis in order to really pre-empt criticism.
I put a negative framing on it and you put a positive one, but it’s a cost that prevents staff from doing other things with their time and so should be prioritised and not just put onto an unbounded queue of “stuff that should be done”.
I think my broader frame around the issue affected how I read the parent comment. I took the problem to be a general issue with EA transparency—my general thinking on a lot of the criticisms from within EA was something along the lines of: the lack of transparency as a general issue is the larger problem, and if EAs knew there would be a report/justification coming, it would not have been such an issue within the community. I do see your point now, although I do think there are some pretty easy ways around it, like determining a reasonably high bar, based on CEA’s general spending per line item, that would necessitate a kind of “this is a big deal” announcement.
I agree that it is a cost, like all other things. On the point of prioritization, I would argue that because EA principles are so heavily tied to cost-effectiveness and empiricism, treating this as something that can be forgone to free up CEA staff for other work is not only hypocritical, it’s bad epistemically, insofar as it implies that EAs (or at least EAs who work at CEA) are not beholden to the same principles of transparency and epistemic rigor that they expect from other similar organizations, i.e. “we are above these principles for some reason or other”.
I think this is all pretty reasonable, but also I suspect I might think that existing similar organisations were doing too much of this kind of transparency activity.
Yes, that sounds about right. Although I would add decisions that are not very expensive but are very influential.
What do you mean?
A significant amount. This is well worth it. Although in practice I don’t imagine there are that many decisions of this calibre. I would guess about 2-10 per year?
That’s pretty unclear to me. We are in the position of maximum hindsight bias. An unusual and bad event has happened, that’s the classic point at which people overreact about precautions.
I’ve been writing the same calls for transparency for months. This has nothing to do with FTX.
I disagree with this. The property may well increase in value over time, and be sold at a profit if EAs sell it. I don’t think EAs should publicly discuss every investment they make at the $20M level (except insofar as public discussion is useful for all decisions), and if there’s a potential direct altruistic benefit to the investment then that makes it less important to publicly debate, not more important.
(Analogy: if a person deciding to invest $20 million in a for-profit startup with zero direct altruistic benefit requires no special oversight, then a person deciding to invest $20 million in a for-profit startup that also has potential altruistic benefits suggests even less use for oversight, since we’ve now added a potentially nice and useful feature to an action that was already fine and acceptable beforehand. Altruistic side-effects shouldn’t increase the suspiciousness of an action that already makes sense on its own terms.)
See also Oliver’s point, “Purchase price—resale price will probably end up in the $1-$3MM range.”
I’m not sure about this particular case, but I don’t think the value of the property increasing over time is a generally good argument for why investments need not be publicly discussed. A lot of potential altruistic spending has benefits that accrue over time, where the benefits of money spent earlier outweigh the benefits of money spent later—as has been discussed extensively when comparing giving now vs. giving later.
The whole premise of EA is that resources should be spent in effective ways, and potential altruistic benefits is no excuse for an ineffective spending of money.
Would you still disagree if this were an outright 15M£ expense?
This is a very risky investment. I don’t know what Oliver’s point is based on, but I saw another (equally baseless) opinion online that since they bought it right before a market crash, chances are they’ve already lost millions. I’d probably not feel the same way about some diverse investment portfolio, but millions in a single real estate investment? This does require significant oversight.
Re: your analogy—I both disagree with the claim and with the fact that this is analogous. CEA is not like a person and should not be treated as one; they’re an organisation purporting to represent the entire movement. And when people do something that they hope have a big impact, if it’s important to them that it’s positive, broad oversight is much more important than if it was an investment with no chance of a big impact.
E.g., if EAs overpaid 30M£ for a property that resells at 15M£? I’d be a bit surprised they couldn’t get a better deal, but I wouldn’t feel concerned without knowing more details.
Seems to me that EA tends to underspend on this category of thing far more than they overspend, so I’d expect much more directional bias toward risk aversion than risk-seeking, toward naive virtue signaling over wealth signaling, toward Charity-Navigator-ish overhead-minimizing over inflated salaries, etc. And I naively expect EVF to err in this direction more than a lot of EAs, to over-scrutinize this kind of decision, etc. I would need more information than just “they cared enough about a single property with unusual features to overpay by 15M£” to update much from that prior.
We also have far more money right now than we know how to efficiently spend on lowering the probability that the world is destroyed. We shouldn’t waste that money in large quantities, since efficient ways to use it may open up in the future; but I’d again expect EA to be drastically under-spending on weird-looking ways to use money to un-bottleneck us, as opposed to EA being corrupt country-estate-lovers.
It’s good that there’s nonzero worry about simple corruption, since we want to notice early warning signs in a world where EAs do just become corrupt and money-hungry (and we also want to notice if specific individual EAs or pseudo-EAs acquire influence in the community and try to dishonestly use it for personal gain). But it’s not high on my list of ways EA is currently burning utility, or currently at risk of burning utility.
I’m confused why you wouldn’t feel concerned about EA potentially wasting 15M pounds (talking about your hypothetical example, not the real purchase). I feel that would mean that EA is not living up to its own standards of using evidence and reasoning to help others in the best possible way.
Since EA isn’t optimizing the goal “flip houses to make a profit”, I expect us to often be willing to pay more for properties than we’d expect to sell them for. Paying 2x is surprising, but it doesn’t shock me if that sort of thing is worth it for some reason I’m not currently tracking.
MIRI recently spent a year scouring tens of thousands of properties in the US, trying to find a single one that met conditions like “has enough room to fit a few dozen people”, “it’s legal to modify the buildings or construct a new one on the land if we want to”, and “near but not within an urban center”. We ultimately failed to find a single property that we were happy with, and gave up.
Things might be easier outside the US, but the whole experience updated me a lot about how hard it is to find properties that are both big and flexible / likely to satisfy more than 2-3 criteria at once.
At a high level, seems to me like EA has spent a lot more than 15M£ on bets that are vastly more uncertain and dependent-on-contested-models than “will we want space to house researchers or host meetings?”. Whether discussion and colocation is useful is one of the only things I expect EAs to not disagree about; most other categories of activity depend on much more complicated stories, and are heavily about placing bets on more specific models of how the future is likely to go, what object-level actions to prioritize over other actions, etc.
Can we all just agree that if you’re gonna make some funding decision with horrendous optics, you should be expected to justify the decision with actual numbers and plans?
Would be nice if we actually knew how many conferences/retreats were going to be held at the EA castle.
It might be justifiable (I got a tremendous amount of value being in Berkeley and London offices for 2 month stints), but now we’re here talking about it, and it obviously looks bad to anyone skeptical about EA. Some will take it badly regardless, but come on. Even if other movements/institutions way overspend on bad stuff, let’s not use that as an excuse in EA.
The “EA will justify any purchase for the good of humanity” argument will just continue to pop up. I know many EAs who are aware of this and constantly concerned about overspending and rationalizing a purchase. As much as critics act like this is never a consideration and EAs are just naively self-rationalizing any purchase, it’s certainly not the case for most EAs I’ve met. It’s just that an EA castle with very little communication is easy ammo for critics when it comes to rationalizing purchases.
One failed/bad project is mostly bad for the people involved, but reputational risk is bad for the entire movement. We should not take this lightly.
Justify to who? I would like to have an EA that has some individual initiative, where people can make decisions using their resources to try to seek good outcomes. I agree that when actions have negative externalities, external checks would help. But it’s not obvious to me that those external checks weren’t passed in this case*, and if you want to propose a specific standard we should try to figure out whether or not that standard would actually help with optics.
Like, if the purchase of Wytham Abbey had been posted on the EA forum, and some people had said it was a good idea and some people said it was a bad idea, and then the funders went ahead and bought it, would our optics situation look any different now? Is the idea that if anyone posted that it was a bad idea, they shouldn’t have bought it?
[And we need to then investigate whether or not adding this friction to the process ends up harming it on net; property sales are different in lots of places, but there are some where adding a week to the “should we do this?” decision-making process means implicitly choosing not to buy any reasonably-priced property, since inventory moves too quickly, and only overpriced property stays on the market for more than a week.]
* I don’t remember being consulted about Wytham, but I’m friends with the people running it and broadly trust their judgment, and guess that they checked with people as to whether or not they thought it was a good idea. I wasn’t consulted about the specific place Irena ended up buying, but I was consulted somewhat on whether or not Irena should buy a venue, and I thought she should, going so far as being willing to support it with some of my charitable giving, which ended up not being necessary.
Thanks for sharing your detailed thought process Owen, and I definitely appreciate the penultimate paragraph.
(I edited in a way which changed which paragraph was penultimate. I believe Larks was referring to the content which is now expanded on in paragraphs starting “We wanted …” and “We thought …”.)
Sounds like a reasonable decision to me, but I do wonder why the reasoning behind such large and not immediately obvious decisions isn’t communicated publicly more often.
Totally agree, as long as you give people the opportunity to figure out why you think it’s good.
Anyway, thanks for clarifying!
In general I would agree that it’s better to do what is good rather than what looks good. However, when you are the face of a global movement, optics have a meaningful financial implication. Imagine if this bad press made 1 billionaire 0.1% less likely to get involved with EA. That calculation would dominate any potential efficiency savings from insourcing a service provider.
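To make the shape of that comparison explicit (a minimal sketch; V is a purely hypothetical stand-in for the counterfactual value of one billionaire’s involvement, and the ~£100k p.a. savings figure is the one suggested further down the thread):

\[
\text{expected cost of the bad press} \approx 0.001 \times V,
\]

which already matches a £100,000 annual saving when V is around £100,000,000, and dominates it for anything larger.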
I used to think this and I increasingly don’t. Doing good thing is what we’re all about. Doing good things even if it looks bad in the tabloid press is good publicity to the people who actually care about doing good, and they’re more important to us than the rest.
I think an EA that was weirder and more unapologetic about doing its stuff attracts more of the right kind of people and can generally get on with things more than an EA that frantically tries to massage its optics to appeal to everyone.
I am having a hard time, here and speckled throughout the rest of this post, with people writing that we are doing the “good thing” and should do that rather than just what looks good, when the “good thing” in question is buying a castle and not, say, caring about wild animal suffering.
I guess I’ve gone off into the abstract argument about whether we should care about optics or not. I don’t mean to assert that buying Wytham Abbey was a good thing to do, I just think that we should argue about whether it was a good thing to do, not whether it looks like a good thing to do.
I’m arguing that deciding whether or not it is a good thing should include the PR impact (i.e. a weak consequentialist approach). I don’t care if things look bad, unless that perception leads to bad outcomes. In this case, I think the perception could lead to bad outcomes that dominate the good outcomes in the expected value calculation.
I very much agree with Michael here.
I think this kind of reasoning is difficult to follow in practice, and likely to do more harm than good. Eg, I expect some billionaires are drawn to a movement that says fuck PR and actually tries to do what’s important—what if trying to account for PR has a 0.1% chance of putting off those billionaires? Etc.
At the very least, “do what is actually good rather than just what looks good” seems like a valid philosophy to follow if trying to do good, even after accounting for optics—trying to account for optics can easily be misleading, paralysing, etc.
EA is all about uncertain EV calculations—I don’t see why we should exclude optics when calculating EV. We should just embrace the uncertainty and try our best.
The only part of EA that doesn’t involve super uncertain EV calculations which can be misleading and paralysing is randomista development.
This is fair, and I don’t want to argue that optics don’t matter at all or that we shouldn’t try to think about them.
My argument is more that actually properly accounting for optics in your EV calculations is really hard, and that most naive attempts to do so can easily do more harm than good. And that I think people can easily underestimate the costs of caring less about truth or effectiveness or integrity, and overestimate the costs of being legibly popular or safe from criticism. Generally, people have a strong desire to be popular and to fit in, and I think this can significantly bias thinking around optics! I particularly think this is the case with naive expected value calculations of the form “if there’s even a 0.1% chance of bad outcome X we should not do this, because X would be super bad”. Because it’s easy to anchor on some particularly salient example of X, and miss out on a bunch of other tail risk considerations.
The “annoying people by showing that we care more about style than substance” point was an example of a countervailing consideration that argues in the opposite direction and could also be super bad.
This argument is motivated by the same reasoning as the “don’t kill people to steal their organs, even if it seems like a really good idea at the time, and you’re confident no one will ever find out” argument.
Thanks, Neel. This is a very helpful comment. I now don’t think our views are too far apart.
Thanks! Glad to hear it. This classic Yudkowsky post is a significant motivator. Key quote:
In general, I agree with you (as I say in my first sentence), but
EV’s objectives are the promotion of EA, i.e. PR is its raison d’être.
in this case, the benefit seems like a rounding error (maybe you could argue it would save ~£100k p.a.) compared to the PR potential. Even if it’s hard to assess the PR impact (and I acknowledge it could go either way), it’s negligent not to consider it.
A large portion of your rationale is based on the intellectually stimulating effects of being surrounded by nice things. Do you think the people in the building will feel great when there’s such negative media coverage, and they feel the guilt of such an opulent purchase? If I were invited to this place, I’d feel uncomfortable and guilty all the time. There’s already a bunch of negative media coverage. It’s not going to stop. And it’s not going to make the program participants feel inspired.
While I understand this sentiment, optics can sometimes matter much more than you may at first expect. In this specific case, the kneejerk response of many people on social media to this seeming incongruity (a seemingly extravagant purchase by a main EA org) can potentially cement negative sentiment. By itself, maybe it’s not that bad. But in combination with the other previous bad press we have from the FTX debacle, people will get it into their heads that “EA = BAD”. I’m literally seeing major philosophers who might otherwise be receptive to EA being completely turned off because of tweets about Wytham Abbey.
This isn’t to say that the purchase shouldn’t have been made. But you specifically said that you think the general rule should be that we make decisions about what we think is good rather than by what looks good. While technically I agree with this, I think that blindly following such a rule puts us in a state of mind where we are at risk of underestimating just how bad optics can become.
I can see this point, but I’m curious—how would you feel about the reverse? Let’s say that CEA chose not to buy it, and instead did conferences the normal way. A few months later, you’re talking to someone from CEA, and they say something like:
Yeah, we were thinking of buying a nice place for these retreats, which would have been cheaper in the long run, but we realised that would probably make us look bad. So we decided to eat the extra cost and use conference halls instead, in order to help EA’s reputation.
Would you be at all concerned by this statement, or would that be a totally reasonable tradeoff to make?
+1 to Jay’s point. I would probably just give up on working with EAs if this sort of reasoning were dominant to that degree? I don’t think EA can have much positive effect on the world if we’re obsessed with reputation-optimizing to that degree; it’s the sort of thing that can sound reasonable to worry about on paper, but in practice tends to cause more harm than good to fixate on in a big way.
(More reputational harm than reputational benefit, of the sort that matters most for EA’s ability to do the most good; and also more substantive harm than substantive benefit.
Being optics-obsessed is not a good look! I think this is currently the largest reputational problem EA actually faces: we promote too much of a culture of fearing and obsessing over optics and reputational risks.)
I think a movement is shaped to a rather large degree by its optics/culture, because that is what will determine who joins and to a lesser extent, who stays when things go wrong.
It seems plausible to me that a culture of somewhat spartan frugality, which seems (from my relatively uninformed perspective) like it was a larger part of the movement in the past, would have a larger positive impact on EA conferences than the stimulating-ness of the site. There’s something poetic about working harder in less comfortable conditions than others would, forgoing luxury for extra donations, that I would imagine is at least as animating to the types of people in EA as scenery.
Beyond that, preserving core cultural aspects of a movement, even if the cost is substantial, is crucial to the story that the movement aims to tell.
Most people who are EAs today were inspired by the story of scrappy people gathering in whatever way is cheapest and most accessible, cheeks flushed with intellectual passion, figuring out how to stretch their dollars for the greater good. I think this aesthetic differs substantially from that of AI researchers in a castle, in terms of both losing the “slumming it out for the world” vibe and focusing on the reduction of an existential risk in a way that only a few people can understand rather than global development in a way that everyone can understand.
I’m sure the AI researchers are extremely competent and flushed with intellectual passion for the greatest good too, regardless of where they’re working. Maybe even more so in the castles. I am solely critiquing the optics and their potential cultural effect.
I have little formal evidence for this except the interest in, and occasional resistance to, the shift towards longtermism that seems widespread on the forum, and a few external articles on EA. But I strongly suspect that “person with a career relating to longtermism” is an attractive picture of the archetypal EA for far fewer people than “person who argues about the best place to donate, and donates as much as they can”, because the latter is much more relatable and attainable.
Perhaps an EA mostly focused on attracting select candidates for high impact careers will be more impactful than an EA attempting to make a wide, diffuse cultural impact by including many grassroots supporters. However, it seems that this runs the risk of modifying the target audience of EA from “everyone, because nearly everyone can afford at least 1% with a giving pledge” to .1% of the population of developed countries.
To me, it is at least plausible that the sheer cost of losing the grassroots-y story, paid in fewer, perhaps less-ideologically-committed new recruits, and a generally less positive public view of things related to effective altruism and rationality, could swing the net effect in the other direction. I think the mainstream being influenced over time to be more concerned with sentient beings, more concerned with rationality and calculating expected values on all sorts of purchases/donations, etc is a major potential positive impact that a more outward-facing EA could make.
If EA loses hold of the narrative and becomes, in the eye of the public, “sketchy, naive Masonic elites who only care about their own pet projects, future beings and animals”, I believe the cost to both EA and broader society will be high. Anecdotally, I have seen external articles critiquing EA from these angles, but never from the angle “EA worries too much about its own image”.
I refuse to believe that renting out a conference hall would actually have cost more.
Investing £15,000,000 would yield roughly £1,000,000 a year on the stock market. If you are spending a million pounds on the venue alone for a 1,000-person conference, you are not doing it right. A convention hall typically runs in the tens of thousands of dollars, not the millions. This is a 100x markup.
This comment suggests that renting conference venues in Oxford can be pretty expensive:
https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey?commentId=3yeffQWcRFvmeteqc
Your cost estimates seem to be off by an order of magnitude.
The calculations there are completely correct under the assumption that the space is being used 365 days a year, which strikes me as wildly implausible. I was working on the assumption that the space is used a few days each year. If this space is actually being occupied 100% of the time, I’d gladly retract my criticism.
The actual usage of the abbey is very likely to be somewhere between these two numbers. Definitely I would expect it to be used far more than for one major conference per year, but I wouldn’t expect 100% usage either.
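A minimal break-even sketch, using only figures already floated in this thread (the ~£1,000,000 per year opportunity cost and venue hire in the tens of thousands per event), and ignoring catering, accommodation, and any eventual resale gain or loss:

\[
N_{\text{break-even}} \;\approx\; \frac{£1{,}000{,}000 \text{ per year}}{£10{,}000\text{–}£50{,}000 \text{ per event}} \;\approx\; 20\text{–}100 \text{ events per year}.
\]

So the disagreement largely reduces to how many events per year the venue actually hosts, and what comparable external hire would really cost.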
It depends. In isolation, that statement does seem concerning to me, like they may have been overestimating the potential negative optics.
What matters to me here is whether sufficient thought was put into all the different aspects. Clearly, they thought a lot about the non-optics stuff. I have no way of easily evaluating those kinds of statements, as I have very little experience organizing conferences. But I’m concerned that maybe there wasn’t sufficient thought given to just how bad the optics can get with this sort of thing.
My career has been in communications, so I’m used to thinking about PR risks and advocating for thinking about those aspects. Perhaps I’m posting here with a bias from that point of view. If I were in a room with decision-makers, I’d expect my comments here to be balanced by arguments on the other side.
Even so, my suspicion is that, if you write something like “do what really is good rather than just what seems good”, you’re more likely to be underestimating rather than overestimating PR risks.
FWIW, as someone who also works in communications, I strongly disagree here and think EA spends massively too much of its mental energy thinking about optics.
More specifically:
I tend to criticize virtue ethics and deontology a lot more than I praise them—IMO these are approaches that often go badly wrong. But I think PR (for a community like EA) is an area where deontology-like adherence to “behave honestly and with integrity” and virtue-ethics-like focus on “be the sort of person internally who you would find most admirable and virtuous” tends to have far better consequences than “select the action that naively looks as though it will make others like you the most”.
If you’re an EA and you want to improve EA’s reputation, my main advice to you is going to look very virtue-ethics-flavored: be brave, be thoughtful, be discerning, be honest, be honorable, be fair, be compassionate, be trustworthy; and insofar as you’re not those things, be honest about it (because honesty is on the list, and is paramount to trusting everything else about your apparent virtues); and let your reputation organically follow from the visible signs of those internal traits of yours, rather than being a thing you work hard on optimizing separately from optimizing whether you’re actually an awesome person.
Have integrity, and speak truth even when you’re scared to, and be the sort of person you’d have found inspiring to run into in your early days at EA, if someone could read your mind and see the generators of your behavior.
Do stuff that you feel really and deeply proud of, rather than stuff that you’d be embarrassed by if someone fully understood what you were doing and why, context and all.
I think that for all or nearly-all EAs, that should pretty much be the entire focus of their thoughts about EA’s reputation.
My take is about 90% in agreement with this.
The other 10% is something like: “But sometimes adding time and care to how, when, and whether you say something can be a big deal. It could have real effects on the first impressions you, and the ideas and communities and memes you care about, make on people who (a) could have a lot to contribute on goals you care about; (b) are the sort of folks for whom first impressions matter.”
10% is maybe an average. I think it should be lower (5%?) for an early-career person who’s prioritizing exploration, experimentation and learning. I think it should be higher (20%?) for someone who’s in a high-stakes position, has a lot of people scrutinizing what they say, and would lose the opportunity to do a lot of valuable things if they substantially increased the time they spent clearing up misunderstandings.
I wish it could be 0% instead of 5-20%, and this emphatically includes what I wish for myself. I deeply wish I could constantly express myself in exploratory, incautious ways—including saying things colorfully and vividly, saying things I’m not even sure I believe, and generally ‘trying on’ all kinds of ideas and messages. This is my natural way of being; but I feel like I’ve got pretty unambiguous reasons to think it’s a bad idea.
If you want to defend 0%, can you give me something here beyond your intuition? The stakes are high (and I think “Heuristics are almost never >90% right” is a pretty good prior).
Frankly I would think that there was finally someone with a modicum of sense and understanding of basic PR working in the area. And upgrade my views of the competency of the organisation accordingly.
Also, I’d note that “this will save money in the long run” is a fairly big claim that has not been justified. There are literally hundreds of conference venues within a reasonable distance of Oxford, all of which are run by professional event managers who are able to take advantage of specialisation and economies of scale, which makes the claim difficult to believe.
Optics is real. We live in the real world. Optics factor into QALYs or any other metric. Why would the reverse be true, that we ignore reputation-related effects, even if they are fully real?
I feel a bit awkward quoting the Bible, but there’s one part that’s super relevant to this discussion from a secular perspective. It’s 1 Corinthians 8:6 to 8:13, and is basically like, “hey, we know doing X isn’t bad, but if anyone saw us doing X they’d think we’re casting away our principles, which would cause them to do wrong, so we’re not going to do X.” Here’s the quote,
yet for us there is one God, the Father, from whom are all things and for whom we exist, and one Lord, Jesus Christ, through whom are all things and through whom we exist. However, not all possess this knowledge. But some, through former association with idols, eat food as really offered to an idol, and their conscience, being weak, is defiled. Food will not commend us to God. We are no worse off if we do not eat, and no better off if we do. But take care that this right of yours does not somehow become a stumbling block to the weak. For if anyone sees you who have knowledge eating in an idol’s temple, will he not be encouraged, if his conscience is weak, to eat food offered to idols? And so by your knowledge this weak person is destroyed, the brother for whom Christ died. Thus, sinning against your brothers and wounding their conscience when it is weak, you sin against Christ. Therefore, if food makes my brother stumble, I will never eat meat, lest I make my brother stumble.
Here’s an explanation of some of the reasons it’s often harmful for a community to fixate on optics, even though optics is real: https://www.lesswrong.com/posts/Js34Ez9nrDeJCTYQL/politics-is-way-too-meta
It also comes off as quite manipulative and dishonest, which puts people off. There are many people who’ll respect you if you disagree with them but state your opinion plainly and clearly, without trying to hide the weird or objectionable parts of your view. There are relatively few who will respect you if they find out you tried to manipulate their opinion of you, prioritizing optics over substance.
And this seems especially harmful for EA, whose central selling point is “we’re the people who try to actually do the most good, not just signal goodness or go through the motions”. Most public conversations about EA optics are extremely naive on this point, treating it as a free action for EAs to spend half their time publicly hand-wringing about their reputations.
What sort of message do you think that sends to people who come to the EA Forum for the first time, interested in EA, and find the conversations dominated by reputation obsession, panicky glances at the news cycle, complicated strategies to toss first-order utility out the window for the sake of massaging outsiders’ views of EA, etc.? Is that the best possible public face you could pick for EA?
In fact, I don’t think that we should adopt the stance “be so terrified of PR risks that you refuse to talk about PR”. I think EA should blurt far more than it currently does, and this will inevitably mean talking at least a little about people’s emotional fears re looking weird to others, being embarrassed to do something, etc.
But recognizing the deep PR costs of EA’s long-standing public obsession with reputation management is at least a first step in the direction of unraveling the obsession for some people, I’d hope.
Yeah I totally agree. I’d agree with the statement “it’s helpful to take optics into account, but not let it dominate our decision making process”. My original comment was in response to the idea that ‘actually doing good is more important than looking like doing good’ which I would argue is an oversimplification of the real world and not a good principle. I don’t think that it’s helpful to care entirely about optics or never care about optics. It’s more nuanced.
I also think it could help to break down the term “optics” a bit. I think the purchase is bad for first impressions, which is one particular type of optics.
Anyways this whole discussion about optics is kind of a red herring. People will be shocked by the purchase because it was made by a charity and was pretty exorbitant, and in fact it was (by that one guy’s admission… I’m on a phone and don’t want to look up his name in the comment above) purchased to make conference participants feel inspired and was not made as a cost-saving mechanism. Appearance (not being frugal) reflects reality in this case, at least based on that comment I read by that one guy (and if I’m wrong just let me be wrong at this point, I have work to do and don’t care to debate this further).
But yeah I agree about let’s not wholly concentrate on optics. Of course.
Let’s say we had one charitable person who has a reputation for being charitable, and another charitable person who has a reputation for hurting others. Someone needing charity would avoid the latter, even though the latter would also be beneficial to them.
There’s a big difference between trying to represent yourself in an accurate or an inaccurate way. In either case you’re caring about what people think about you, but if we assume the perceiver is acting in their self interest, then the accurate representation will benefit them, and the inaccurate representation may harm them.
I’m not disagreeing with what you wrote. I’m adding to it that “caring about optics” can actually be more honest. It’s possible to care about optics so that you’re represented honestly, too.
SBF caused damage not because he virtue signaled with his cheap car and lack of style, but because he was misrepresenting himself and being a dick.
It makes sense for people to talk about not wanting to be misrepresented, and if I were a new visitor to the forum and I saw people upset about being misrepresented, I’d probably be sympathetic to them. I also might think they were cry babies and too obsessed with their image, which is what you’re saying could be the case, and I agree with that.
Also just by the way, I guess the ideal would be to care what other people think but be strong enough to do what one thinks is right. I think there’s a psychological element to all this. I’ve lived in some places where I was looked down on, even though I was working hard for their benefit, and it did suck psychologically. It would’ve been better for everyone if people had known how much I cared about them, but yeah it can be important to not worry too much about what other people think, as you wrote.
Some related stuff I’ve said:
And:
The main problem with lavishness, IMHO, is not optics per se, but rather that it’s extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.
(To be clear, this is not to say that we should never follow such reasoning. It’s just a serious pitfall. This is also not original—others have certainly brought this up.)
Also, I imagine having communicated the reasoning behind the purchase publicly before the criticisms would have gone some way in reducing the bad optics, especially for onlookers who were inclined to spend a little bit of time to understand both perspectives. So thinking more about the optics doesn’t necessarily lead you to not do the thing.
“I did feel a little nervous about the optical effects”
Was there no less-extravagant-looking conference space for sale?
Or at least a cheaper one? With better access to public transport?
This seems over budget, and public transport is not only better for the environment, it’s also more egalitarian. It would allow people from more impoverished backgrounds to more easily join our community, which, given our demographics, might be something we want to encourage.
EDIT: Yes, I’m aware that you could reach the estate via public transport; the connection is just very bad (on the weekend you have to do a 26-minute walk). That’s why I said “better access”, not “at all accessible”.
This is not a comment on the cheapness point, but in case this feels relevant: private vehicles are not necessary to access this venue. From the Oxford rail station you can catch public buses that drop you off about a 2-minute walk from the venue. It’s a 20-minute bus ride, and the buses don’t come super often (every 60 minutes, I think?), but I just wanted to be clear that you can access this space via public transport.
Presumably it would be easy to arrange a conference minibus to shuttle attendees to and from the station. This seems like the least of the project’s problems.
(However, it is very difficult to hire taxis to get there and back; getting one often takes 30 minutes.) Edit: people can wait up to an hour and a half for a taxi from Wytham, which isn’t super practical.
I would be surprised if public transport links were important for accessibility to lower-income demographics in this specific context. Covering transport costs is common for events, and the last time I went there, a train ticket from London to Oxford was pricier than a taxi ride from the station to Wytham.
My understanding is the retreats will be mostly for academics working in the relevant fields, not the EA community, so I’m not sure this applies.
A typical researcher might make £100,000 a year. £15,000,000 is roughly £1,000,000 a year if invested in the stock market. So, even allowing generously for overheads on top of salary, you could hire 5 researchers to work full-time, in perpetuity.
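(To make that back-of-envelope explicit, here’s a minimal sketch of the arithmetic; the return rate, salary, and overhead multiplier are illustrative assumptions on my part, not figures anyone in this thread has endorsed.)

```python
# Back-of-envelope sketch only; every figure below is an illustrative assumption.
endowment = 15_000_000        # purchase price, GBP
annual_return = 0.067         # assumed long-run stock-market return (~6.7%)
salary = 100_000              # assumed researcher salary, GBP per year
overhead_multiplier = 2.0     # assumed total employment cost relative to salary

annual_income = endowment * annual_return            # ~£1,000,000 per year
cost_per_researcher = salary * overhead_multiplier   # ~£200,000 per year
researchers_funded = annual_income / cost_per_researcher

print(f"Annual income: £{annual_income:,.0f}")
print(f"Researchers funded in perpetuity: {researchers_funded:.1f}")  # ~5
```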
Conferences are cool, but do you really think they generate as much research as 5 full time researchers would? As a researcher, I can tell you flat-out the answer is no. I could do much more with 5 people working for me than I could by going to even a thousand conferences.
You can’t always turn more money into more researchers. You need people who can mentor and direct them, you need to find people who are good fits for the position, and most of the people who are most interesting to you are also interesting to other employers. In general, I don’t think funding salaries for such people was the bottleneck.
Investing money into the stock market and investing money into real estate are similar. In both cases, the value of your capital can rise or fall over time.
The value of both can rise or fall, but real estate is only an investment when rented out; otherwise, it’s a durable consumption good. In particular, the EMH* implies the expected return from buying real estate and renting it out must be equal to the expected return on stocks. Otherwise, people would sell stocks (driving their price down, and therefore the rate of return up) and buy real estate to lease it out.
*While it’s entirely plausible the EMH doesn’t hold, no analysis arguing this is presented, and I don’t think that placing bets on certain sectors of the economy is a particularly good idea for a charity. Notably, arguments against the EMH almost all fall on the side of suggesting the housing market is currently overvalued, because of structural deficiencies (like the inability to short housing) and subsidies that make buying cheaper for individual homeowners (but not charities).
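(For concreteness, one rough way to write down that no-arbitrage intuition; this is my sketch of the standard argument, ignoring risk adjustment, taxes, maintenance and transaction costs, all of which matter in practice.)

```latex
% Rough sketch of the no-arbitrage intuition, not a claim from the thread.
% Risk adjustment, taxes, maintenance and transaction costs are ignored.
\[
  \underbrace{\frac{\text{net rental income}}{\text{purchase price}}}_{\text{rental yield}}
  \;+\; \mathbb{E}[\text{price appreciation}]
  \;\approx\; \mathbb{E}[r_{\text{stocks}}]
\]
% If the left-hand side were persistently larger, investors would sell stocks
% and buy property to let, bidding property prices up (and stock prices down)
% until the two expected returns were roughly equal again.
```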
There’s plenty of real estate investment that does not depend on the real estate being rented out. That’s why laws get passed that require some real estate to be rented out.
One of the attributes of real estate is that it’s a lot less liquid than stocks and economic theory suggests that market participants should pay a premium for liquidity.
Finally, it’s wrong to say that anything with a lower expected return than stocks is not an investment. People invest money in treasury bonds all the time, even though those have lower expected returns.
What do you think about MSRI (https://www.msri.org/web/cms) and Simons Institute (https://simons.berkeley.edu/), btw?
Not sure, I don’t know all that much about them, unfortunately.
Thanks, this is indeed helpful. I would also like to know though, what made this property “the most appropriate” out of the three in a bit greater detail if possible. How did its cost compare to the others? Its amenities? I think many people in this thread agree that it might have been worth it to buy some center like this, but still question whether this particular property was the most cost effective one.
I’ve edited my reply to add a bit more detail on this point.
Thanks, I appreciate the added information! I’m not sure I’m convinced that this was worthwhile, but I feel like I now have a much better understanding of the case for it.
Thanks for explaining!
I like this point.
I think this was a terrible idea
I think you’ve overestimated the value of a dedicated conference centre. The important ideas in EA so far haven’t come from conversations over tea and scones at conference centres but are either common sense (“do the most good”, “the future matters”) or have come from dedicated field trials and RCTs.
I also think you’ve underestimated the damage this will do to the EA brand. The hummus and baguettes signal earnestness. An abbey signals scam.
I’m confident that this will be remembered as one of CEA’s worst decisions.
It’s sad you’re getting downvoted. A manor and 25 acres of nothingness adds nearly nothing to EA when some other space, for instance the hall of a large parish or church (even an abandoned one), could have been rented or purchased on an as-needed basis, for a fraction of the cost, whenever conferences or workshops are needed.
Imagine the extent of scrutiny the manor’s purchase would have faced in early EA. It wouldn’t be pretty.
I think it’s plausible that this purchase saves money, but I strongly disagree with your view of optics.
“think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good”
What looks good has important effects on EA community building, the diffusion of EA ideas and on the ability to promote EA ideas in politics, especially over the longer term.
Whether a decision looks good, i.e. its indirect, long-term effects on EA’s reputation, is a very important factor in determining whether the decision is good, i.e. whether it approximately maximises expected value.
I’m disappointed that someone at CEA / EV thinks it makes sense to put optics aside and focus entirely on the short-term, direct effects of a decision when calculating expected value; that also seems weirdly at odds with longtermist thinking!
How much are electricity, maintenance and property taxes for this venue? Historic buildings may require expensive restoration and are subject to complex regulation.
I’m not qualified to comment on the calculations, but did you hire a real estate consultant and a venue manager to advise?