There is an opportunity cost in not having a better backdrop.
It’s possible I’m wrong. But I find it unlikely that EA veganism wasn’t influenced by existing political arguments for veganism. I find it unlikely that the focus on institutional decision making wasn’t influenced by the existing political zeitgeist around the problems with democracy and capitalism. I find it unlikely that the global poverty focus wasn’t influenced by the existing political zeitgeist around inequality.
All this stuff is in the water supply; the arguments and positions have been refined by different political parties’ moral intuitions and battles with the opposition. This causes problems when there’s opposition to EA values, sure, but it also provides the backdrop from which EAs reason.
It may be that EAs have somehow thrown off all of the existing arguments, cultural milieu, and basic stances and assumptions that have been honed over the past few generations, but if true, that seems to me more a failure of EA than anything else.
I haven’t seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA.
Veganism is probably a good example here. Institutional decision making might be another. I don’t think that political antipathy is the right way to view this, though; rather, the general political climate shapes the thinking of EAs. Political antipathy is a consequence of the same general system that produces both positive effects on EA thought and political antipathy towards certain aspects of EA.
Internal debate within the EA community is far better at reaching truthful conclusions than whatever this sort of external pressure can accomplish. Empirically, it has not been the case that such external pressure has yielded benefits for EAs’ understanding of the world.
It can be the case that external pressure is helpful in shaping directions EVEN if EA has to reach conclusions internally. I would put forward that this pressure has been helpful to EA already in reaching conclusions and finding new cause areas, and will continue to be helpful to EA in the future.
Rethink Priorities seems to be the obvious organization focused on this.
An implicit problem with this sort of analysis is that it assumes the critiques are wrong, and that the current views of Effective Altruism are correct.
For instance, if systemic change towards anti-capitalist ideals actually is correct, or if taking refugees really does have long-run bad effects on culture, then the criticism of these views, and the pressure on the community from political groups to adopt them, is actually a good thing: it provides a net-positive benefit for EA in the long term by creating incentives to adopt the correct views.
I think there is something going on in this comment that I wouldn’t put in the category of “outside view”. Instead I would put it in the category of “perceiving something as intuitively weird, and reacting to it”.
I think there are two things going on here.
The first is that weirdness and outside view are often deeply correlated, although they’re not the same thing. In many ways the feeling of weirdness is a Schelling fence: it protects people from sociopaths, cults, and other things that are a bad idea even when they can’t quite articulate in words why.
I think you’re right that the best interventions will often be weird, so in this case it’s a Schelling fence you have to ignore if you want to make any progress from an inside view… but it’s still worth noting that the weirdness is there, and that it’s good data.
The second thing going on is that many EA institutions seem to have adopted the neoliberal strategy of gaining high status, infiltrating academia, and using that to advance EA values. From this perspective, it’s very important to avoid an aura of weirdness for the movement as a whole, even if any given individual weird intervention might have high impact. This is hard to talk about because being too loud about the strategy makes it less effective, which means that sometimes people have to say things like “outside view” when what they really mean is “you’re threatening our long-term strategy but we can’t talk about it.” Although obviously in this particular case the positive impact on this strategy outweighs the potential negative impact of the weirdness aura.
I feel comfortable stating this because it’s a random EA forum post and I’m not in a position of power at an EA org, but were I in that position, I’d feel much less comfortable posting this.
You can often get the timing to work late in the game by stalling the company that gave you the offer, and telling other companies that you already have an offer so you need an accelerated process.
It matters less if you time your offers so you have multiple at the same time.
Last I looked at the data for job negotiations, the rescission rate was actually much higher for jobs, around 10%.
There’s another good post on salary negotiation in an ETG context here: https://www.lesswrong.com/posts/Z6dmoLyfBdmo6HEss/maximizing-your-donations-via-a-job
Back when I was doing career coaching, I used to run a popular workshop on salary negotiation for the local job search meetup. It broke down salary negotiation into a set of 6 skills you could practice, such as timing your offers, deferring salary negotiation, overcoming objections, etc.
The great thing about this was that after the initial presentation, job seekers could practice the skills with each other, meaning I wasn’t the bottleneck. The presentation is here: https://www.evernote.com/shard/s8/sh/d45a92cc-a8d0-4b3b-adf5-53b53e1efbc6/a423ecc18c4d0a386b5fe1d2284680f7
I could see a similar idea of a practice group working for EAs.
I wasn’t asking for examples from EA, just the type of projects we’d expect from EAs.
Do you think Intentional Insights did a lot of damage? I’d say it was recognized by the community and handled pretty well, while doing almost no damage.
Do we have examples of this? I mean, there are obviously wrong examples like socialist countries, but I’m more interested in examples of the types of EA projects we would expect to see causing harm. I tend to think the risk of this type of harm is given too much weight.
My main point is that, even if the EA Hotel is the best way of supporting/incentivizing productive EAs at the beginning of their careers, it doesn’t solve the problem of selecting the best projects.
What do you think about the argument of using the processes in the hotel to filter projects? I tend to think that one way to cross the chasm is “just try as many projects as possible, but have tight feedback loops so you don’t waste too many resources.”
Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don’t get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I’m willing or able to evaluate, but it’s not obvious that they’re being less selective than I am about which ones they fund.
I think the EA Hotel is trying to do something different from Y Combinator—Y Combinator is much more like EA Grants, and the EA Hotel is doing something different. Y Combinator basically plays the game of gaining status and connections, increasing deal flow, and then choosing from the cream of the crop.
It’s useful to have something like that, but a game of “use tight feedback loops to find diamonds in the rough” seems to be useful as well. Using both strategies is more effective than just one.
I agree with Robin that this is a criminally neglected cause area. Especially for people who put strong probability on AGI, bioweapons, and other technological risks, more research into institutions that can make better decisions and outcompete our current institutions seems important.
Has anyone been asked to leave the EA Hotel because they weren’t making enough progress, or because their project didn’t turn out very well?
Not yet (I don’t think so; maybe Toon or Greg can chime in here), but the hotel has noticed this and is working on procedures to create better feedback loops.
If not, do you think the people responsible for making that decision have some idea of when doing so would be correct?
As I understand it, the trustees are currently working to develop standards for this.
but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle did he get around to claims that are relevant to whether the hotel is better than a random group of EAs.
The post is organized by dependency, not by strength of argument. First people have to be convinced that funding projects makes sense at all (given that there’s so much grant money already in EA) before we can talk about the way in which to fund them.
I think my crux is something like “this is a question to be dissolved, rather than answered.”
To me, trying to figure out whether a goal is egoistic or altruistic is like trying to figure out whether a whale is a mammal or a fish—it depends heavily on my framing and why I’m asking the question, and points to two different useful maps that are both correct in different situations, rather than something in the territory.
Another useful map might be something like “is this eudaimonic or hedonic egoism?”, which I think can get less squirrelly answers than the “egoistic or altruistic” frame. Another useful one might be the “Rational Compassion” frame of “Am I working to rationally optimize the intuitions that my feelings give me?”
Sure, but if one has the value of actually helping other people, that distinction disappears, yes?
As an example of a famous egoist, I think someone like Ayn Rand would say that fooling yourself about your values is doing it wrong.