This is a great answer. I would have said something like “leadership”, in that EA has leaders but few of them are people you would march into battle and die for. I feel like there’s almost no one in EA proper, and only a couple of people on the edges (mostly because their cause area was taken up by EA rather than because they came from within it), who have demonstrated something like the 10x skill of leadership and motivation.
Put more colloquially, EA needs a Steve Jobs, an FDR, a Winston Churchill, an Oda Nobunaga.
I’m always of mixed opinion about organ donation. Yes, it seems straightforwardly beneficial, but it’s also at odds with surprising things. For example, I’m signed up for cryonics, and this means it’s very important that I not be an organ donor. My organs would be unusable after perfusion, and even if I were willing to accept a lower-quality preservation (possibly not having my regular circulatory system in place to help with cooling), it would still be a bad deal, because doctors would hold on to my body for an unspecified amount of time, in not necessarily ideal preservation conditions for my brain, before maybe releasing me to the cryonics team hours or days later.
This would effectively mean pitting organ donation and life extension, at least in part, against each other within EA. Not necessarily a blocker if people think more organ donation among people who don’t sign up for cryonics is worth it in expectation over, say, getting more people signed up for cryonics, but it’s worth factoring into the calculation.
I really like this. We can be effective, but we can’t do that if we’re all sad and depressed because we tie our sense of self worth to something unattainable. I also enjoyed the fun stylistic choices!
I always find these sentiments strange, because what I love about email, and dislike about other forms of online communication, is that email is strongly asynchronous and gives me a lot of control over how I choose to interact with it. Slack, IRC, and other more synchronous forms of communication (even when they are nominally asynchronous, they are often designed and used with synchronous use in mind) are much harder for me to stay in control of, because there are stronger cues to use them in interrupt-driven ways. Email can, of course, degenerate this way, and it seems that’s what happens in some cultures (offices, etc.), but then the problem is the culture, not the tool.
If you dislike a particular email (or Slack or in-person) culture, change the culture, not the tools. If you don’t, you’ll just end up unhappy on a different tool.
I think achieving specialization is hard enough that you are better off ignoring coordination concerns here and choosing based on personal inclination. It’s hard to put in all the time it takes to become an expert in something, and it’s even harder when you don’t love that something for its own sake. My own suspicion is that without that love you will never reach the highest level of expertise, so it’s better to look for the confluence of what you most love and what is most useful than to worry about coordinating over usefulness. You and everyone else are not sufficiently interchangeable when it comes to developing the specialization needed to be helpful to EA causes.
I very much like this approach to dealing with credentialism; however, I’m also unsure how much of an impact credentials are having on current EA hiring. My impression is that current EA orgs hire based more on work experience than on credentials, and in fact EA orgs are unusually willing to consider candidates without traditional credentials (EA orgs within universities being an exception, since their hiring processes are tied to those of the host institution). This suggests your premise (that EA orgs aren’t hiring folks because they lack credentials) may not apply, but I think your solutions are useful regardless, because they also address the case where candidates lack experience rather than credentials.
This seems to miss the point of my question, because it already seems to be the case that the people who could do something don’t much engage in these discussions. Rather, it’s primarily the folks who cause the feelings of alienation, and who do not themselves feel alienated, who start and engage in those discussions. Presuming they do so either because they don’t consider their actions contrary to the purpose of inclusiveness or because they don’t value inclusiveness, what actions can those who are alienated, or who value inclusiveness, take to address this? That is, if you feel there are things being said and done that cause alienation, how do you get that to stop, other than hoping that other people decide on their own not to do it anymore?
Identifying this is a start, but it remains unclear to me that this post is likely to result in any action that will change anything (I realize some people may disagree that this is the experience of some people in the community, or that their experience of alienation matters). But supposing you agree that this post describes a real problem and that the problem deserves solving, what are things we might do as a community to be more inclusive?
I’m thinking here of asking for specific, actionable ideas, not just generic stuff like “spread awareness”. Additionally, these need to be actions that will be carried out by the people who care about this to make the community different than it is today, not demands that people in the alienating group change, because that’s unlikely to be an effective strategy. I imagine most actions that would work well would be of the form “I want EA to be more inclusive, and to make it that way I’m going to do X”. What is X?
Hmm, I wonder why there were some downvotes. This seems to me like a rather creative way to create for-profit endeavors that could soak up excess talent and generate additional revenue for EA projects (not to mention that some of these EA-corps might directly do work that benefits people; Wave comes to mind as a possible example of such an existing organization).
Having some experience with hiring, it might be some consolation that you did actually provide value to the EA orgs you applied to by giving them:
- more practice hiring
- more exposure to candidates to figure out who they want
- a better intuitive grasp of the talent landscape
It’s unfortunate that this has such a large opportunity cost and that you bore so much of it, but the unfortunate reality on the hiring side for any org is that we often need to interview candidates we won’t hire, because if we don’t, we won’t know enough to trust that the people we do hire are the right people. Of course, we don’t know in advance which candidates we will and won’t hire (if we already knew, it would be extremely unfair to everyone to do the interviews at all), but at least in this case your interview time helped EA-focused organizations gain that knowledge, rather than other orgs whose values you may be less aligned with and whose institutional learning from interviews that don’t result in hires you would therefore value less.
This project sounds pretty exciting! My time is occupied by a lot of other things right now, but if you would like I’d be happy to talk about operations things (especially as they relate to organizational development and culture) at the camp. Depending on timing and cost I might not be able to show up in person, but happy to do something over Skype. This seems like a great opportunity to share what I’ve learned about this stuff so that it can help others as they contribute to effective causes.
I think much of the difficulty is that tech work usually can’t be done with minimal context the way, say, volunteer work can when your main qualification is being a human rather than a professional. For example, it’s pretty easy to piece together volunteers to do things like fill a receptionist role, assist with construction, or perform other labor that requires minimal training. There’s not much easily identifiable tech work that could be knocked out in an hour or two such that the person assisting can then just forget about it and the org needing it can easily take advantage of the work done. This means tech volunteering is going to require a sustained commitment from someone, and that’s much harder to arrange.
An important question is going to be what features you want from your organization. For example, do you want it to make tax-exempt giving possible (many countries let you deduct donations to recognized charitable organizations against your income tax, up to some limit)? Do you want to avoid lots of bureaucracy (generally not compatible with being a charitable organization)? Do you want a low or no tax burden? And, of course, as you note, can you create the organization on your own, or do you need a local sponsor? I think trying to answer those questions will help you explore the space and narrow down the options.
It seems I don’t know enough about options for this to give me a useful additional way to think about the moral weight of future patients relative to present ones, but I expect that if I did, this would feel like a useful model for employing intuitions about options to explain an issue in ethics, similar to the way preference theory often helps make sense of some questions related to values.
This seems connected to a perennial question in EA: should organizations be means-focused or ends-focused? By that I mean, should an EA-aligned org focus primarily on methods or primarily on outcomes? For example, when it comes to community building, an ends-focused approach would suggest we should grow as large as possible and get as many people as possible to give effectively, even if we have to lie to them to do it. A means-focused approach to community building would look more like what we have now, with a heavy focus on keeping EA true to its values, even if that forgoes convincing some people to give money effectively who could be won over by methods that go against EA values like careful epistemics.
So far, it seems EA orgs have decided to be primarily means-focused and to give up some of the gains possible via an ends-focus, since pursuing them would risk diluting EA values and missions, and folks in the community have been pretty vocal when they feel orgs list too close to being ends-focused by compromising too much on EA values. I don’t know if that will continue in the future or if everyone in EA is on board with such a choice, but it’s at least what I’ve observed happening. Given that many EAs are consequentialists, I expect we’ll see some version of this conversation happening so long as EA exists.
Right. For comparison, software engineers (of all kinds, including ML engineers) at early-stage startups generally add between $500k and $1mm to the company’s valuation, i.e. investors believe these employees make the company worth buying/selling for that much additional money. There’s a lot that goes into where that number comes from, but it does at least suggest that O($1mm) is reasonable.
I believe they are set at 300 words per minute, so a 1000-word post would show as either a 3- or 4-minute read (depending on how it rounds). There was some discussion of this feature on LW recently if you want to join the conversation there about the feature as it’s implemented upstream.
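For illustration, here’s a minimal sketch of where the 3-vs-4 ambiguity comes from; this is not the forum’s actual code, just a hypothetical showing how the rounding rule changes the displayed number, assuming a 300 words-per-minute rate:

```typescript
// Hypothetical read-time estimate, assuming 300 words per minute.
// Illustrates why a 1000-word post could show as either 3 or 4 minutes.
const WORDS_PER_MINUTE = 300;

function readTimeMinutes(wordCount: number, round: "nearest" | "up" = "nearest"): number {
  const minutes = wordCount / WORDS_PER_MINUTE;
  return round === "up" ? Math.ceil(minutes) : Math.round(minutes);
}

console.log(readTimeMinutes(1000));        // 3 (1000 / 300 ≈ 3.33, rounded to nearest)
console.log(readTimeMinutes(1000, "up"));  // 4 (rounded up)
```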
Context: I asked a more narrow version of this question on LW about the connection between EA and the rationality movement.
So I’ve not written extensively on this (and in some cases not at all), but I’m a virtue ethicist and I care about animal welfare and x-risks (or just future-folks more generally) as an expression of compassion. That is, satisfying the virtue of compassion that I aspire to (now drawn from Buddhist notions of compassion, but originally from a more folky notion of it that I learned to adopt as a virtue from my upbringing in secular Protestant America) encourages me to give consideration to the welfare of animals and future folks, and this results in my choice to eat almost-only plants and to work on addressing AI-related x-risks.
This is not exactly something you can cite, nor a full-fledged argument, but I figured you might find it worthwhile to hear more from someone in EA who has one of these non-consequentialist moral views. I expect there are a number of crypto-Kantians around (crypto only in the sense that they don’t bring it up much, because it’s not part of normal EA conversation to reason via deontology) and a decent number of contractualists, given that position’s affinity with libertarian ethics and the number of libertarian EAs drawn from the rationalist community.