This project sounds pretty exciting! My time is occupied by a lot of other things right now, but if you would like I’d be happy to talk about operations things (especially as they relate to organizational development and culture) at the camp. Depending on timing and cost I might not be able to show up in person, but happy to do something over Skype. This seems like a great opportunity to share what I’ve learned about this stuff so that it can help others as they contribute to effective causes.
I think much of the difficulty is that tech work usually can’t be done with minimal context the way general volunteer work can, where your main skill is being a human rather than a professional. For example, it’s pretty easy to slot volunteers into things like filling a receptionist role, assisting with construction, or performing some other labor that requires minimal training. There’s not much easily identifiable tech work that could be knocked out in an hour or two such that the person assisting can then just forget about it and the org needing it can easily take advantage of the work done. This means tech volunteering is going to require a sustained commitment from someone, and that’s much harder to arrange.
An important question is going to be what features you want your organization to have. For example, do you want it to make tax-deductible giving possible (many countries let you deduct donations to recognized charitable organizations from your income tax burden up to some limit)? Do you want to avoid lots of bureaucracy (generally not compatible with being a charitable organization)? Do you want a low or no tax burden? And, of course, as you note, can you create the organization on your own or do you need a local sponsor? I think trying to answer those questions will help you explore the space and narrow down the options.
It seems I don’t know enough about options for this to give me a useful additional way to think about the moral weight of future patients relative to present ones, but I expect that if I did know enough about them, this would feel like a useful model that employs intuitions about options to explain an issue in ethics, similar to the way preference theory is often useful for making sense of some questions related to values.
This seems connected to a perennial question in EA: should organizations be means-focused or ends-focused? By that I mean, should an EA-aligned org focus primarily on methods or primarily on outcomes? For example, when it comes to community building, an ends-focused approach would suggest we should grow as large as possible and get as many people as possible to give effectively, even if we have to lie to them to do it. A means-focused approach to community building would look more like what we have now, where there is a heavy focus on keeping EA true to its values, even if that comes at the cost of forgoing some people who could be convinced to give money effectively by methods that go against EA values like careful epistemics.
So far it seems EA orgs have decided to be primarily means-focused and accept giving up some of the gains possible via an ends-focus, since that focus would risk diluting EA values and missions, and folks in the community have been pretty vocal when they feel orgs list too close to becoming ends-focused by compromising too much on holding to EA values. I don’t know if that will continue in the future or if everyone in EA is on board with such a choice, but it’s at least what I’ve observed happening. Given that many EAs are consequentialists, I expect we’ll always see some version of this conversation happening so long as EA exists.
Right. For comparison, software engineers (of all kinds, including ML engineers) at early-stage startups generally add between $500k and $1mm to the company’s valuation, i.e. investors believe these employees make the company worth buying/selling for that much additional money. There’s a lot that goes into where that number comes from, but it does at least suggest that O($1mm) is reasonable.
I believe they are set at 300 words per minute, so a 1000 word post would show as either a 3 or 4 minute read (depending on how it rounds). There was some discussion of this feature on LW recently if you want to join the conversation there about the feature as it’s implemented upstream.
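To make the arithmetic concrete, here’s a minimal sketch of that estimate, assuming the 300 words-per-minute rate; the function name and the rounding choices are hypothetical stand-ins, since I don’t know exactly how the upstream implementation rounds:

```python
import math

WORDS_PER_MINUTE = 300  # assumed rate from the comment above

def read_time_minutes(word_count: int, round_up: bool = False) -> int:
    """Estimate a post's read time in whole minutes."""
    minutes = word_count / WORDS_PER_MINUTE
    # 1000 words -> ~3.33 minutes; the displayed value depends on rounding.
    return math.ceil(minutes) if round_up else max(1, round(minutes))

print(read_time_minutes(1000))                 # 3 (rounds to nearest)
print(read_time_minutes(1000, round_up=True))  # 4 (always rounds up)
```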
Context: I asked a narrower version of this question on LW about the connection between EA and the rationality movement.
So I’ve not written extensively on this (and in some cases not at all), but I’m a virtue ethicist, and I care about animal welfare and x-risks (or just future-folks more generally) as an expression of compassion. That is, satisfying the virtue of compassion that I aspire to (now drawn from Buddhist notions of compassion, but originally from a folkier notion of it that I learned to adopt as a virtue from my upbringing in secular Protestant America) encourages me to give consideration to the welfare of animals and future folks, and this results in my choice to eat an almost entirely plant-based diet and to work on addressing AI-related x-risks.
This is not exactly something you can cite, nor a full-fledged argument, but I figured you might find it worthwhile to hear more from someone in EA who holds one of these non-consequentialist moral views. I expect there are a number of crypto-Kantians around (crypto only in the sense that they don’t bring it up much because reasoning via deontology isn’t part of normal EA conversation) and a decent number of contractualists, given that position’s affinity with libertarian ethics and the number of libertarian EAs drawn from the rationalist community.
I would just ask them questions, although to be transparent, I care only a little about their answers and a lot about how they answer, since I believe that’s where most of the information I use to make the assessment comes from. I say this because I want to make clear I don’t know how to assess this in a scalable, repeatable way I can teach others. That might be possible, but I suspect it’s not, short of teaching you to be substantially more like me in several dimensions.
I expect the disposition to take responsibility can be developed, since at least for myself I didn’t always do it and now I do, but I only learned to do it after some significant psychological development (what I would call making the 3-4 transition in Kegan/CDT terminology), although I’m not sure how tied it is to that (I haven’t spent much time thinking about what enables the disposition to total responsibility). I’m not sure how to test for it, but I’m fairly confident I could suss out whether someone has the disposition in an interview, though I’m not sure with what level of precision, especially since I’m unsure how many false negatives I would generate in my assessment.
Context: I work in an operational capacity for a startup and have for several years.
To me this misses what I consider the most important thing for success in operational roles: total responsibility. Just about anyone can learn to do stuff, and a lot of the time operations is treated as the function of the organization that does the stuff no one else wants to do, but to me this isn’t exactly right. It’s more about being responsible, probably heroically so, and being willing to do whatever you have to do to take care of the things you care about. Another person I know describes it as “holding parental mind” for something, which I think helps point at the breadth and depth of what operations is really all about.
This is not to say all these other things are unimportant; you can probably get along okay with someone who can just do stuff, and no amount of responsibility can overcome every other skill deficiency, but to my mind great operational competence only arises when a person takes radical, total responsibility for the thing they are charged to protect.
Nice, this matches my intuition that most people will give more if you put the reasons to give at the near construal level rather than the far one. I do wonder how much this generalizes, though: I would expect the effect to be much smaller if, say, you exposed people to the two stories and then asked them to make the giving decision a week later.
My guess is that philosophy is good for convincing the sort of people who are persuaded by general arguments, and narratives are great for specific asks, like during a fundraising event. But I’m pretty sure most people who do fundraising for non-profits already know this even if they didn’t have proof; now they have a little more.
Since I’m doing direct work part-time, I view saving as a kind of donating, since saving directly translates into increased flexibility for me to devote more time to direct work, and specifically to the work I think has the highest differential impact based on my own assessment of the information (to abuse terms, we might call this my “alpha”). For example, if I save enough I could stop working a full-time job while I look for funding, and in the meantime it means I can spend more effort on AI risk work without worrying too much that the impact on my day job will result in something I can’t weather. I’m not sure I would make the same assessment if I weren’t doing direct work, though, so I’ve not thought as much about advising saving as a general strategy, although I generally prefer having more runway myself, so it seems reasonable to suggest others might also like having it.
I agree that Pinker’s chapter on existential risk is an extremely bad interlude in an otherwise excellent book.
Having not read the book, I’m curious why you think this chapter is exceptional rather than revealing a problem with the other chapters that you didn’t notice, perhaps because you’re not close enough to those topics to see the ways Pinker makes mistakes?
Although not everyone will agree, I think your best bet is to focus on things that are effective (likely to have a large positive impact on the world as you assess it), personally motivating, and a good fit for your talents/comparative advantage. Starting late doesn’t seem like a major concern; if anything, EA is sometimes hurt by the fact that so many people in it are young and inexperienced and so waste time on things or make mistakes that a wiser, more seasoned person would not. Having your life cut short is tragic, but losing a decade at the end of life seems not particularly relevant to EA; whatever good you can do in the time you have will help the world.
Given your health concerns, you might be particularly interested in anti-aging research or cryonics, which I think of as effective good-doing since in expectation they have the ability to dramatically increase our capacity for positively-interpreted subjective experiences (although only in combination with other things). But I think whatever you choose to work on, you have the potential to have a valuable impact.
Maybe an alternative way to look at this is: why is rationality not more a part of EA community building? Rationality as a project likely can’t stand on its own because it’s not trying to do anything; it’s just a bunch of like-minded folks with a similar interest in improving their ability to apply epistemology. The cases where the rationality “project” has done well, like building up resources to address AI risk, were more like cases where some other project needed rationality for an instrumental purpose and built up LW and CFAR in its service. Perhaps EA can more strongly include rationality in that role, as part of what it considers essential for training/recruiting in EA and building a strong community that is able to do the things it wants to do. This wouldn’t really mean rationality is a cause area, more an aspect of effective EA community building.
I’m curious why this got downvoted. Although I personally suspect there’s very little impact to be had by addressing Crohn’s disease, this post seems an okay starting point for discussing the issue. I don’t see anything here that makes me think this is low-quality content, just maybe discussing an issue that is below the threshold of concern for many EAs? I’m really not sure.
My heuristic here is to first check whether I actually need to do the activity (maybe I’ve only thought I had to do it because of some incorrect, unexamined assumption) and whether I want to do it. If I’m not excited about doing it, I check how much it would cost to get someone else to do it, and if that costs less than the value I place on the time I would have to spend on the task, I pay for it instead; otherwise I become happy to do it, since I’m generating more value with that time than I would have otherwise.
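Put as code, the heuristic looks something like the sketch below; the function, its inputs, and the dollar figures are hypothetical stand-ins, just to make the decision procedure concrete:

```python
def decide(task_needed: bool, want_to_do: bool,
           outsourcing_cost: float, value_of_my_time: float) -> str:
    """Return what to do with a task under the heuristic above."""
    if not task_needed:
        return "drop it"           # the obligation was an unexamined assumption
    if want_to_do:
        return "do it myself"      # no reason to outsource something I enjoy
    if outsourcing_cost < value_of_my_time:
        return "pay someone else"  # my freed-up time is worth more than the fee
    return "do it myself, happily" # doing it beats any available alternative

# e.g. a chore I don't enjoy, $40 to outsource, freed time worth $60 to me:
print(decide(task_needed=True, want_to_do=False,
             outsourcing_cost=40, value_of_my_time=60))  # pay someone else
```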