Does he still endorse the retraction? It’s just idle curiosity on my part, but it wasn’t clear from the comments.
A few thoughts this post raised for me (not directed at OP specifically):
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it’s been vetted by AI risk organizations, it seems like that would go a long way toward resolving this issue.
2. Does “ea organisations are unwilling to even endorse the hotel” refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?
3. I would be curious what the marginal cost of adding a new resident is: if it’s high, that would be a good reason to leave rooms unoccupied rather than fund “tragic” projects.
4. Strongly agreed: the EV post seemed like an overly complex toy model that was unlikely to predict real-world outcomes. I think high-level heuristics for evaluating impact would be much more useful/convincing (e.g. the framework laid out here)
5. In general, donors who take a “hits-based giving” approach to funding speculative projects in their personal network are likely to become associated with failed projects regardless of personal competence, so I don’t think this is evidence against the case the EA hotel makes. My relatively uninformed inside view is that the founder of the Kernel project should be associated with its failure, rather than Greg, and I think the outside view agrees.
6. I wonder how different the fundraising situation would be if it had started during the burst of initial enthusiasm/publicity surrounding the hotel?
re signal boost: any particular reason why?
Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn’t see any mention while skimming it) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example
Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I’m missing)? I’m having a hard time understanding the mechanism through which this occurs.
The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested “topics to avoid” are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don’t see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.
I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don’t see how an “EA certification” effectively accomplishes this goal.
I do think there would be a place for small EA-run businesses in fields with:
- a lot of EAs
- low barriers to entry
- sharply diminishing returns to scale
Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually by virtue of their competitive success in the marketplace (i.e. without relying on EA branding or an EA customer base). By allowing EAs to work together for a common cause, it would also reduce value drift and improve morale.
More speculatively, it might improve recruitment of new EAs and reduce hiring costs for EA organizations by making it easier to find and evaluate committed candidates. If the business collectively decided how to donate its profits, it could also efficiently fulfill a function similar to donor lotteries, freeing up more money for medium-size grants. Lastly, by focusing solely on maximizing profit, “profiting-to-give” would avoid the pitfalls of social benefit companies Peter_Hurford mentions while providing fulfilling work to EtG EAs.
That’s a good point, but I don’t think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probability of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That’s not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a “brittle argument”.
On the other hand, I think it’s fair to say that e.g. “climate change was for sure the primary cause of the Syrian civil war” is a brittle argument.
I’d previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1 This paper claims the opposite, though: https://www.sciencedirect.com/science/article/pii/S0962629816301822.
“The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change.”
I’ll have to investigate more since I was highly confident of such a ‘threat multiplier’ view.
On your other two points, I expect the idea of anthropogenic global warming to continue to be associated with the elite; direct evidence of the climate changing is likely to convince people that climate change is real, but not necessarily that humans caused it. Concern over AGW is currently tied to various beliefs (including openness to immigration) and cultural markers predominantly shared by a subsection of the educated and affluent. I expect increasing inequality to calcify tribal barriers, which would make it very difficult to create widespread support for commonly proposed solutions to AGW.
PS: how do I create hyperlinks?
I don’t think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe --> famine --> war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will the danger it poses to efforts against other x-risks.
In particular, a plausible path is: climate change immiserates the poor/working class + elite attempts to stop climate change hurt the working class (e.g. the war on coal) --> even higher inequality --> broad-based resentment against elite initiatives. X-risk reduction is likely to be one of those elite initiatives simply because most x-risks are unintuitive and require time/energy/specialized knowledge to evaluate, which few non-elites have.
1. A system that will imprison a black person but not an otherwise-identical white person can be accurately described as “a racist system”
2. One example of such a system is employing an ML algorithm that uses race as a predictive factor to determine bond amounts and sentencing (the toy sketch below illustrates this)
3. White people will tend to be biased towards more positive evaluations of a racist system because they have not experienced racism, so their evaluations should be given lower weight
4. Non-white people tend to evaluate racist systems very negatively, even when those systems improve predictive accuracy
To me, the rational conclusion is to not support racist systems, such as the use of this predictive algorithm.
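To make #1 and #2 concrete, here’s a minimal toy sketch of the mechanism (every feature name and coefficient is invented for illustration; none of this is taken from any real risk-assessment tool):

```python
# Toy illustration only: a risk model that includes race as a predictor
# can assign different outcomes to two otherwise-identical defendants.
# All features and weights here are made up.

def risk_score(prior_arrests: int, age: int, is_black: bool) -> float:
    score = 0.3 * prior_arrests - 0.02 * age  # hypothetical coefficients
    if is_black:
        score += 0.5  # race used as a predictive feature
    return score

a = risk_score(prior_arrests=1, age=30, is_black=False)
b = risk_score(prior_arrests=1, age=30, is_black=True)

print(a, b)   # identical inputs except for race...
print(b > a)  # ...yet the second defendant gets a higher score, hence a higher bond
```

Even if adding the race term improves overall predictive accuracy, it is exactly this “otherwise-identical people treated differently” property from #1 that the objection is about.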
It seems like many EAs disagree, which is why I’ve tried to break down my thinking to identify specific points of disagreement. Maybe people believe that #4 is false? I’m not sure where to find hard data to prove it (custom Google survey maybe?). I’m ~90% sure it’s true, and would be willing to bet money on it, but if others’ credences are lower that might explain the disagreement.
Edit: Maybe an implicit difference is epistemic modesty regarding moral theories—you could frame my argument in terms of “white people misestimating the negative utility of racial discrimination”, but I think it’s also possible for demographic characteristics to bias one’s beliefs about morality. There’s no a priori reason to expect your demographic group to have more moral insight than others; one obvious example is the correlation between gender and support for utilitarianism. I don’t see any reason why men would have more moral insight, so as a man I might want to reduce my credence in utilitarianism to correct for this bias.
Similarly, I expect the disagreement between a white EA who likes race-based sentencing and a random black person who doesn’t to be a combination of disagreement about facts (e.g. the level of harm caused by racism) and moral beliefs (e.g. importance of fairness). However, *both* disagreements could stem from bias on the EA’s part, and so I think the EA ought not discount the random guy’s point of view by assigning 0 probability to the chance that fairness is morally important.
How did Dylan Matthews become associated with EA? This is a serious question—based on the articles of his I’ve read, he doesn’t seem to particularly care about some core EA values, such as epistemic rationality and respect for “odd-sounding” opinions.
I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically ‘hits-based giving’, I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the fb comment vollmer linked)
Following on vollmer’s point, it might be reasonable to have a blanket rule against policy/PR/political/etc work—anything that is irreversible and difficult to evaluate. “Not being able to get funding from other sources” is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.
On the other hand, I really can’t imagine what harm research projects could do; I guess the worst-case scenario is someone so persuasive they can convince lots of EAs of their ideas but so bad at research that their ideas are all wrong, which doesn’t seem very likely. (Why not “malicious & persuasive people”? The community can probably identify those more easily by the subjects they write about.)
Furthermore, guests’ ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn’t help very much). I think the largest danger to be alert for is reputation risk, especially from bad popularizations of EA, since this is easier to do remotely (one example is Intentional Insights, the only negative-EV EA project I know of)
From my perspective, the manager should:
- Not (necessarily) be an EA
- Be paid more (even if this trades off against capacity, etc)
- Not also be a community mentor
One of the biggest possible failure modes for this project seems to be hiring a not-excellent manager; even a small increase in competence could make a big difference between the project failing and succeeding. Thus, the #1 consideration ought to be “how to maximize the manager’s expected skill”. Unfortunately, the combination of undesirable location, only hiring EAs, and the low salary seem to restrict the talent pool enormously. My (perhaps totally wrong) impression is that some of these decisions are made on the basis of a vague idea of how things ought to be, rather than a conscious attempt to maximize success.
Brief arguments/responses:
Not only are EAs disproportionately unlikely to have operations skills (as 80K points out), but I suspect that the particular role of hotel manager requires even fewer of the skills we tend to have (such as a flair for optimization) and even more of the skills we tend not to have (consistency, hotel-related metis). I’m unsure of this, but it’s an important question to evaluate.
The manager will only be at the ground floor of a new organization if it doesn’t fail. I think failure is more likely than expansion, but it’s reasonable to be risk averse considering this is the first project of its kind in EA (diminishing marginal benefit). Consequently, optimizing for initial success seems more important than optimizing for future expansion.
The best feasible EA candidate is likely to have less external validation of managerial capability than a similarly qualified external candidate, who might be a hotel manager already! Thus, it’ll be harder to actually identify the strong EA candidates, even if they exist.
The manager will get free room/board and live in low-CoL Blackpool, but I think this is outweighed by the necessity of moving to an undesirable location and not being able to choose where you stay/eat. On net, I expect you’d need to offer a higher salary to attract the same level of talent as in, say, Oxford (though with more variance depending on how people perceive Blackpool).
You might be able to hire an existing hotel manager in Blackpool, which would reduce the risk of turnover and guarantee a reasonable level of competence. This would obviously require separating the hotel manager and the community mentor, but I’m almost certain that doing so would maximize the chances of success either way (division of labor!). I’m also not sure what exactly the cost is: the community mentor could just be an extroverted guest working on a particularly flexible project.
Presumably many committed and outgoing EAs (i.e. the people you’d want as managers) are already able to live with/near other EAs; moving to Blackpool would just take away their ability to choose who to live with.
Of course, there could already be exceptional candidates expressing interest, but I don’t understand why the default isn’t hiring a non-EA with direct experience.
It’s pretty simple: just get EAs to move in and don’t advertise vacancies the rest of the time. That might sound sketchy, but I think it’s essentially what the old owners did: they let friends/long-time guests stay but didn’t rent out the rest of the rooms. It might not fly in, like, Tahiti, but Blackpool has an enormous glut of accommodation. The impression I got from Greg is that lots of hotel owners there are already restricting occupancy to friends/family; a de-facto restriction to EAs shouldn’t be a major problem, especially since (at least in the US) non-EAs are not a protected class.
Furthermore, if some random person really wants to stay there at inflated rates despite the complete lack of advertising, that would be a net benefit for the hotel, as Greg mentions in his post.
But would this view have predicted we’d only get 13% matched, well below the EA consensus prediction?
Hopefully... Since it’s a zero-sum game, though, I’m not necessarily convinced that we can improve efficiency and learn from our mistakes faster than other groups. In fact, I’d expect the % matched to go down next year, as the % of the matching funds directed by the EA community was far larger than the % of total annual donations made by EAs (and so we’re likely to revert to the mean).
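A rough way to see the reversion argument (every number below is made up purely for illustration; only the 13% figure comes from this thread):

```python
# Hypothetical numbers, just to illustrate the mean-reversion worry.
match_pool = 7_000_000          # assumed total matching funds available
total_donations = 400_000_000   # assumed total Giving Tuesday donations
ea_matched_fraction = 0.13      # fraction of EA donations matched this year (from the thread)

# If other donors eventually become as fast/organized as EAs, the match gets
# claimed roughly in proportion to donation volume, so any group's expected
# matched fraction drifts toward pool / total:
baseline = match_pool / total_donations
print(f"proportional baseline: {baseline:.1%}")  # ~1.8% with these assumed numbers

# 13% is far above that baseline, so losing the speed advantage implies
# the matched fraction should fall, not rise.
print(ea_matched_fraction > baseline)
```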
Maybe JPAL-IPA field research qualifies in some sense?
Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can’t give any