Are there, in fact, any such trips organized by EA charities?
Liam_Donovan
Wouldn’t this be an issue with or without an explanation? It seems like an AI could reasonably infer, from other actions that humans in general (or Alexey in particular) take, that they are highly motivated to argue against being exterminated. IDK if I’m missing something obvious—I don’t know much about AI safety.
This doesn’t make sense either: for example, your questions could be selected in a biased manner to manipulate the AI, and you could be being disingenuous when dealmaking. Generally, it seems like good epistemic practice to discount arguments of any form, including questions, when the person making them is existentially biased towards one side of the discussion.
Maybe JPAL-IPA field research qualifies in some sense?
Hopefully... Since it’s a zero-sum game, though, I’m not necessarily convinced that we can improve efficiency and learn from our mistakes more than other groups can. In fact, I’d expect the % matched to go down next year, since the % of the matching funds directed by the EA community was far larger than the % of total annual donations made by EAs (and so we’re likely to revert to the mean).
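The reversion-to-the-mean claim can be sketched with a toy calculation. Every figure below is invented for illustration—these are not actual Giving Tuesday numbers:

```python
# Toy reversion-to-the-mean sketch; all figures are made up, NOT real data.
matching_pool = 2_000_000        # total matching funds available ($)
ea_match_directed = 500_000      # match dollars captured by EA donors ($)

ea_share_of_match = ea_match_directed / matching_pool   # 0.25

# Suppose EAs account for only a small fraction of all donations competing
# for the same pool; as other donors catch on, the EA share of the match
# should drift back toward that baseline.
ea_share_of_all_donations = 0.01

# Crude estimate: next year's share lies partway between this year's
# outperformance and the long-run baseline.
reversion_weight = 0.5
next_year_share = (reversion_weight * ea_share_of_all_donations
                   + (1 - reversion_weight) * ea_share_of_match)
# next_year_share comes out well below this year's 0.25
```

The point is only directional: if this year’s capture rate was far above the EA share of all donations, even partial reversion predicts a lower % matched next year.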
But would this view have predicted we’d only get 13% matched, well below the EA consensus prediction?
It’s pretty simple: just get EAs to move in and don’t advertise vacancies the rest of the time. That might sound sketchy, but I think it’s essentially what the old owners did—they let friends/long-time guests stay but didn’t rent out the rest of the rooms. It might not fly in, like, Tahiti, but Blackpool has an enormous glut of accommodation. The impression I got from Greg is that lots of hotel owners there are already restricting occupancy to friends/family; a de-facto restriction to EAs shouldn’t be a major problem, especially since (at least in the US) non-EAs are not a protected class.
Furthermore, if some random person really wants to stay there at inflated rates despite the complete lack of advertising, that would be a net benefit for the hotel, as Greg mentions in his post.
From my perspective, the manager should
Not (necessarily) be an EA
Be paid more (even if this trades off against capacity, etc)
Not also be a community mentor
One of the biggest possible failure modes for this project seems to be hiring a not-excellent manager; even a small increase in competence could make the difference between the project failing and succeeding. Thus, the #1 consideration ought to be “how to maximize the manager’s expected skill”. Unfortunately, the combination of an undesirable location, only hiring EAs, and the low salary seems to restrict the talent pool enormously. My (perhaps totally wrong) impression is that some of these decisions are being made on the basis of a vague idea of how things ought to be, rather than a conscious attempt to maximize the chances of success.
Brief arguments/responses:
Not only are EAs disproportionately unlikely to have operations skills (as 80K points out), but I suspect that the particular role of hotel manager requires even less of the skills we tend to have (such as a flair for optimization), and even more of the skills we tend not to have (consistency, hotel-related metis). I’m unsure of this but it’s an important question to evaluate.
The manager will only be at the ground floor of a new organization if it doesn’t fail. I think failure is more likely than expansion, but it’s reasonable to be risk averse considering this is the first project of its kind in EA (diminishing marginal benefit). Consequently, optimizing for initial success seems more important than optimizing for future expansion.
The best feasible EA candidate is likely to have less external validation of managerial capability than a similarly qualified external candidate, who might be a hotel manager already! Thus, it’ll be harder to actually identify the strong EA candidates, even if they exist.
The manager will get free room/board and live in low-CoL Blackpool, but I think this is outweighed by the necessity of moving to an undesirable location and not being able to choose where you stay/eat. On net, I expect you’d need to offer a higher salary to attract the same level of talent as in, say, Oxford (though with more variance depending on how people perceive Blackpool).
You might be able to hire an existing hotel manager in Blackpool, which would reduce the risk of turnover and guarantee a reasonable level of competence. This would obviously require separating the hotel-manager and community-mentor roles, but I’m almost certain that doing so would maximize the chances of success either way (division of labor!). I’m also not sure what exactly the cost is: the community mentor could just be an extroverted guest working on a particularly flexible project.
Presumably many committed and outgoing EAs (i.e. the people you’d want as managers) are already able to live with/near other EAs; moving to Blackpool would just take away their ability to choose who to live with.
Of course, there could already be exceptional candidates expressing interest, but I don’t understand why the default isn’t hiring a non-EA with direct experience.
Following on vollmer’s point, it might be reasonable to have a blanket rule against policy/PR/political/etc work—anything that is irreversible and difficult to evaluate. “Not being able to get funding from other sources” is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.
On the other hand, I really can’t imagine what harm research projects could do; I guess the worst-case scenario is someone so persuasive they can convince lots of EAs of their ideas but so bad at research that their ideas are all wrong, which doesn’t seem very likely. (Why not worry about malicious & persuasive people? The community can probably identify those more easily by the subjects they write about.)
Furthermore, guests’ ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn’t help very much). I think the largest danger to be alert for is reputational risk, especially from bad popularizations of EA, since this is easier to do remotely (one example is Intentional Insights, the only negative-EV EA project I know of).
I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically “hits-based giving”, I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the Facebook comment vollmer linked).
How did Dylan Matthews become associated with EA? This is a serious question—based on the articles of his I’ve read, he doesn’t seem to particularly care about some core EA values, such as epistemic rationality and respect for “odd-sounding” opinions.
1. A system that will imprison a black person but not an otherwise-identical white person can be accurately described as “a racist system”
2. One example of such a system is employing a ML algorithm that uses race as a predictive factor to determine bond amounts and sentencing
3. White people will tend to be biased towards more positive evaluations of a racist system because they have not experienced racism, so their evaluations should be given lower weight
4. Non-white people tend to evaluate racist systems very negatively, even when they improve predictive accuracy
To me, the rational conclusion is to not support racist systems, such as the use of this predictive algorithm.
It seems like many EAs disagree, which is why I’ve tried to break down my thinking to identify specific points of disagreement. Maybe people believe that #4 is false? I’m not sure where to find hard data to prove it (custom Google survey maybe?). I’m ~90% sure it’s true, and would be willing to bet money on it, but if others’ credences are lower that might explain the disagreement.
Edit: Maybe an implicit difference is epistemic modesty regarding moral theories—you could frame my argument in terms of “white people misestimating the negative utility of racial discrimination”, but I think it’s also possible for demographic characteristics to bias one’s beliefs about morality. There’s no a priori reason to expect your demographic group to have more moral insight than others; one obvious example is the correlation between gender and support for utilitarianism. I don’t see any reason why men would have more moral insight, so as a man I might want to reduce my credence in utilitarianism to correct for this bias.
Similarly, I expect the disagreement between a white EA who likes race-based sentencing and a random black person who doesn’t to be a combination of disagreement about facts (e.g. the level of harm caused by racism) and moral beliefs (e.g. importance of fairness). However, *both* disagreements could stem from bias on the EA’s part, and so I think the EA ought not discount the random guy’s point of view by assigning 0 probability to the chance that fairness is morally important.
I don’t think this is indirect and unlikely at all; in fact, I think we are seeing this effect already. In particular, some of the 2nd-order effects of climate change (such as natural catastrophe-->famine-->war/refugees) are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth). As the effects of climate change intensify, so will the dangers they pose to efforts against other x-risks.
In particular, a plausible path is: climate change immiserates the poor/working class + elite attempts to stop climate change hurt the working class (e.g. the war on coal) --> even higher inequality --> broad-based resentment against elite initiatives. X-risk reduction is likely to be one of those elite initiatives, simply because most x-risks are unintuitive and require time/energy/specialized knowledge to evaluate, which few non-elites have.
I’d previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1 This paper claims the opposite, though: https://www.sciencedirect.com/science/article/pii/S0962629816301822.
“The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise far greater caution when drawing such linkages or when securitising climate change.”
I’ll have to investigate more since I was highly confident of such a ‘threat multiplier’ view.
On your other two points, I expect the idea of anthropogenic global warming to continue to be associated with the elite; direct evidence of the climate changing is likely to convince people that climate change is real, but not necessarily that humans caused it. Concern over AGW is currently tied with various beliefs (including openness to immigration) and cultural markers predominantly shared by a subsection of the educated and affluent. I expect increasing inequality to calcify tribal barriers, which would make it very difficult to create widespread support for commonly proposed solutions to AGW.
PS: how do I create hyperlinks?
That’s a good point, but I don’t think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That’s not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion—which is my understanding of a “brittle argument”.
On the other hand, I think it’s fair to say that e.g. “Climate change was for sure the primary cause of the Syrian civil war” is a brittle argument
I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don’t see how an “EA certification” effectively accomplishes this goal.
I do think there would be a place for small EA-run businesses in fields with:
a lot of EAs
low barriers to entry
sharply diminishing returns to scale
Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually, by virtue of its competitive success in the marketplace (i.e. without relying on EA branding or an EA customer base). By allowing EAs to work together for a common cause, it would also reduce value drift and improve morale.
More speculatively, it might improve recruitment of new EAs and reduce hiring costs for EA organizations by making it easier to find and evaluate committed candidates. If the business collectively decided how to donate its profits, it could also efficiently fulfill a function similar to donor lotteries, freeing up more money for medium-size grants. Lastly, by focusing solely on maximizing profit, “profiting-to-give” would avoid the pitfalls of social-benefit companies Peter_Hurford mentions while providing fulfilling work to EtG EAs.
The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested “topics to avoid” are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage these topics as a necessary step towards achieving their goal. However, I don’t see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.
Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I’m missing)? I’m having a hard time understanding the mechanism through which this occurs.
Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn’t see any mention while skimming it) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example
Considering that most people would be unhappy to be told that they’re more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way. What strong evidence do you have that, in fact, minorities in EA are just fine with being told their race makes them more likely to be rapists? Seems like a very strange assumption.
Apart from Lila’s argument, this “non-white people are more likely to be rapists” claim is a terrible line of thinking because (IMO) it’s likely to build racist modes of thought: assigning negative characteristics to minorities based on dubious evidence seems very likely to strengthen bad cognitive patterns and weaken good judgement around related issues.
If the evidence were incontrovertible, this might be acceptable, but it’s nowhere near the required standard of proof to overcome the strong prior that humans are equally likely to commit crimes regardless of race (among other reasons, because race is largely a social construct). Additionally, the long history of using false statistics and “science” to bolster white supremacy should make one more skeptical of numbers like this.