Some quick thoughts on AI consciousness work; I may write up something more rigorous later.
Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.
One of the aspects of EA that makes me most sad is that there seem to be a few extremely important issues on an impartial welfarist view that don’t get much attention at all, despite having been identified at some point by some EAs. I do think that EA has done a decent job of pointing at the most important issues relative to basically every other social movement that I’m aware of, but I’m going to complain about one of its shortcomings anyway.
It looks to me like we could build advanced AI systems in the next few years, and in most worlds we’ll have little idea of what’s actually going on inside them. The systems may tell us they are conscious, or say that they don’t like the tasks we give them, but right now we can’t really trust their self-reports. There’ll be a clear economic incentive to ignore self-reports that would create a moral obligation to use the systems in less useful/efficient ways. I expect the number of deployed systems to be very large, and it seems plausible that we lock in the suffering of these systems in a way similar to factory farming. I think there are stronger arguments for the topic’s importance that I won’t dive into right now, but the simplest case is just that the “big if true-ness” of this area seems very high.
My impression is that our wider society and community are not orienting in a sane way to this topic. I don’t remember ever coming across a junior EA seriously considering directing their career to work in this area. 80k has a podcast with Rob Long and a very brief problem profile (that seems kind of reasonable), AI consciousness (iirc) doesn’t feature in EA Virtual Programs or any intro fellowship that I’m aware of, and there haven’t been many (or any?) talks about it at EAG in the last year. I do think that most organisations could turn around and ask “well, what concrete action do you actually want our audience to take?” and my answers are kind of vague and unsatisfying right now—I think we were at a similar point with alignment a few years ago, and my impression is that it had to be on the community’s mind for a while before we were able to pour substantial resources into it (though the field of alignment feels pretty sub-optimal to me and I’m interested in working out how to do a better job this time round).
I get that there aren’t shovel-ready directions to push people to work on right now, but insofar as our community and its organisations brand themselves substantially as the groups identifying and prioritising the world’s most pressing problems, it sure does feel to me like more people should have this topic on their minds.
There are some people I know of dedicating some of their resources to making progress in this area, and I am pretty optimistic about the people involved—the ones that I know of seem especially smart and thoughtful.
I don’t want all of EA to jump into this right now, and I’m optimistic about having a research agenda in this space that I’m excited about, and maybe even a vague plan about what one might do about all this, by the end of this year—after which I think we’ll be better positioned to do field building. I am excited about people who feel especially well placed moving into this area—in particular people with some familiarity with both mainstream theories of consciousness and ML research (particularly designing and running empirical experiments). Feel free to reach out to me or apply for funding at the LTFF.
(quick thoughts, may be missing something obvious)
Relative to the scale of the long-term future, the number of AIs deployed in the near term is very small, so to me it seems like there’s pretty limited upside to improving that. In the long term, it seems like we’ll have AIs to figure out the nature of consciousness for us.
Maybe I’m missing the case that lock-in is plausible; it currently seems pretty unlikely to me because the singularity seems like it will transform the ways AIs are run. So in my mind it mostly matters what happens after the singularity.
I’m also not sure about the tractability, but the scale is my major crux.
I do think understanding AI consciousness might be valuable for alignment; I’m just arguing against work on near-term AI suffering.
I agree with your “no lock-in” view in the case of alignment going well: in that world, we’d surely use the aligned superintelligence to help us with things like understanding AI sentience and making sure that sentient AIs aren’t suffering.
In the case of misalignment and humanity losing control of the future, I don’t think I understand the view that there wouldn’t be lock-in. I may well be missing something, but I can’t see why there wouldn’t be lock-in of things related to suffering risk—for example, whether or not the ASI creates sentient subroutines which help it achieve its goals but which incidentally suffer. That could in theory be steered away from even if we fail at alignment, given that the ASI’s future actions (even if they’re very hard to predict exactly) are decided by how we build it, and we could likely steer away from it more effectively if we better understood AI sentience (because then we’d know more about things like what kinds of subroutines can suffer).
Edit: I have a lot of sympathy for the take above but I tried to write up my response around why I think lock-ins are pretty plausible.
I’m not sure right now whether the majority of the downside comes from lock-in, but I think that’s what I’m most immediately concerned about.
I assume by singularity you mean an intelligence explosion or extremely rapid economic growth. I think my default story for how this happens in the current paradigm involves people using AIs in existing institutions (or institutions that look pretty similar to today’s ones) in markets that look pretty similar to current markets, which (on my view) are unlikely to care about the moral patienthood of AIs, in ways pretty similar to current market failures.
On the “markets still exist and we do things kind of like we do now” view—I agree that in principle we’d be better positioned to make progress on problems generally if we had something like PASTA, but I feel like you need to tell a reasonable story for one of:
how governance works post-TAI such that you can easily enact improvements like eliminating AI suffering
why current markets do allow for things like factory farming and slavery but wouldn’t allow for violation of AI preferences
I’m guessing your view is that progress will be highly discontinuous and society will look extremely different post-singularity to how it does now (kind of like going from the pre-agricultural-revolution world to now, whereas my view is more like the pre-industrial-revolution world to now).
I’m not really sure where the cruxes are on this view or how to reason about it well, but my high-level argument is that the “god-like AGI which has significant responsibility but still checks in with its operators” will still need to make some trade-offs across various factors, and unless it’s doing some CEV-type thing, outcomes will be fairly dependent on the goals that you give it. It’s not clear to me that the median world leader or CEO gives the AGI goals that concern the AI’s wellbeing (or its subsystems’ wellbeing)—even if it’s relatively cheap to evaluate. I am more optimistic about AGI controlled by a person sampled from a culture that has already set up norms around how to orient to the moral patienthood of AI systems than one that needs to figure it out on the fly. I do feel much better about worlds where some kind of reflection process is overdetermined.
My views here are pretty fuzzy and are often influenced substantially by thought experiments like “If a random tech CEO could effectively control all the world’s scientists, have them run at 10x speed, and had 100 trillion dollars, would factory farming still exist?”, which isn’t a very high epistemic bar to beat. (I also don’t think I’ve articulated my models very well and I may take another stab at this later on.)
I have some tractability concerns, but my understanding is that few people are actually trying to solve the problem right now, and when few people are trying it’s pretty hard for me to get a sense of how tractable a thing is, so my priors on similarly shaped problems are doing most of the work (which leaves me feeling quite confused).
I’m really glad you wrote this; I’ve been worried about the same thing. I’m particularly worried at how few people are working on it given the potential scale and urgency of the problem. It also seems like an area where the EA ecosystem has a strong comparative advantage — it deals with issues many in this field are familiar with, requires a blend of technical and philosophical skills, and is still too weird and nascent for the wider world to touch (for now). I’d be very excited to see more research and work done here, ideally quite soon.
Very strong +1 to all this. I honestly think it’s the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.
I hadn’t seen this until now, but it’s good to see that you’ve come to the same conclusion I have. I’ve just started my DPhil in Philosophy and plan on working on AI mental states and welfare.
Update 2024-Jul-5 as this seems to be getting some attention again: I am not sure whether I endorse the take below anymore—I think 80k made some UI changes that largely address my concerns.
The 80k job board has too much variance.
(Quickly written, will probably edit at some point in future)
Jobs on the main 80k job board can range from (in my estimation) negligible value to among the best opportunities I’m aware of. I have also seen a few jobs that I think are probably actively harmful (e.g., token alignment orgs trying to build AGI where the founders haven’t thought carefully about alignment—based on my conversations with them).
I think a helpful orientation towards jobs on the job board is “at least one person with EA values who happens to work at 80k thinks this is worth signal-boosting”, and NOT “EA/80k endorses all of these jobs without a lot more thought from potential applicants”.
Jobs are also on the board for a few different reasons, e.g., building career capital vs. direct impact vs. …, and there isn’t much info about why the job is there in the first place.
I think 80k does try to give more of this vibe than people get. I don’t mean to imply that they are falling short in an obvious way.
I also think that the jobs board is more influential than 80k thinks. Explicit endorsements of organisations from core EA orgs are pretty rare, and I think they’d be surprised how many young EAs overupdate on their suggestions (but only medium confidence about it being pretty influential).
My concrete improvement would be to separate jobs into a few different boards according to the degree to which they endorse the organisation.
One thing I find slightly frustrating is that the response I have heard from 80k staff to this is that the main reason they don’t do it is about managing relationships with the organisations (which could be valid). Idk if it’s the right call, but I think it’s a little sus—I think people are too quick to jump to the nice thing that doesn’t make them feel uncomfortable over the impact-maximising thing (pin to write more about this in future).
One error that I think I’m making is criticising an org for doing a thing that is probably much better than not doing the thing, even if I think it’s leaving some value on the table. I think that this is kind of unhealthy and incentivises inaction. I’m not sure what to do about this other than flag that I think 80k is great, as is most of the stuff they do, and I’d rather orgs had a policy of occasionally producing things that I feel moderately about if this helps them do a bunch of cool stuff, than underperform and not get much done (pin to write more about this in future).
My best idea for solving this is making an alternative view for 80k’s job board that has some reasons to obviously prefer it, and to add features to it like “here’s a link to the org’s AMA post”, where I hope the community can comment on things like “this org is trying to build an AGI with little concern for safety”, and lots of people can upvote it. No political problems for 80k. Lots of good high quality discussions. Hopefully.
Regarding some jobs being there just for building career capital—I only learned about this a few days ago and it kind of worries me. I don’t have good ideas on how to solve it.
>it kind of worries me
Is that because you think the job board shouldn’t list career capital roles, because it wasn’t obvious that the roles were career-capital-related, or something else?
In case it’s helpful, the first thing below the title on the job board says:
>Some of these roles directly address some of the world’s most pressing problems, while others may help you build the career capital you need to have a big impact later.
I’d be interested in any ideas you had for communicating more clearly that a bunch of the roles are there for a mix of career capital and impact reasons. Giving our guess of the extent to which each role is being listed for career capital vs impact reasons isn’t feasible for various reasons unfortunately.
You have that line there, but I didn’t notice it in years, and I recently talked to other people who didn’t notice it and were also very surprised. The only person I think I talked to who maybe knew about it is Caleb, who wrote this shortform.
Everyone (I talked to) thinks 80k is the place to find an impactful job.
Maybe the people I talk to are a very biased sample somehow—it could be—but they do include many people who are trying to have a high impact with their career right now.
Oh this is a cool idea! I endorse this on the current margin and think it’s cool that you are trying this out.
I think that ideally a high context person/org could do the curation and split this into a bunch of different categories based on their view (ideally this is pretty opinionated/inside viewy).
I think linking to organisations’ AMAs on the EA Forum is a neat idea! Thanks for sharing. I’ve added it to our list of feature ideas we might build in the future.
I admit I’m a bit worried when I hear “might build in the future” about a feature that seems very small to me (I could add it to my own version), and a part of me is telling me this is your way of saying you actually never want to build it. I’m not sure how to phrase my question exactly… maybe “if someone else would do the dev work, would you be happy just putting it in, or is there another bottleneck?”
Also, excuse me for my difficulty understanding subtext; I am trying.
Oh, may I please try to convince you not to create your own voting system?
Initial reasons, as an invitation to push back:
Commenting is more important than voting
If, for example, someone thinks a specific org is actively harmful, I think:
Good situation: Someone writes a comment with the main arguments, references, and so on.
Bad situation: Someone needs to get lots of people to downvote the position. (Or people don’t notice) (or the org gets lots of people to upvote) (or other similar situations)
Upvoting comments is better than both
And the double “upvote/downvote” + “agree/disagree” is even better, where the best comments float up.
See how conversations like that look on the Forum/LessWrong. This is unusually good for the internet, and definitely better than upvoting/downvoting alone.
Is this system perfect? No, but it’s better than anything I’ve seen, definitely better than upvotes alone.
[Reducing friction for people to voice their opinion] is key
+ For platforms like this, the number of active users matters; there’s an importance in having a critical mass.
So:
Adding a new platform is friction.
I vote for using an existing platform. Like the EA Forum.
Maybe a post without the “frontpage” tag
Maybe a comment on a post
These conversations already fit the EA Forum
It’s discussing the impact of the org.
(I wouldn’t be too surprised if there’s a good reason to use something else, but I doubt it would be a good idea to create a NEW platform)
I have tried to convince the Forum team of this, using the methods they asked to be convinced via. There has been some move to put jobs on the Forum, but not in a searchable way. I think a new site that pushes better norms would be better.
I largely agree with the object level points you make but I don’t see why you wouldn’t want a new org with better processes.
I feel a bit confused about how much I should be donating.
On the one hand there’s just a straightforward case that donating could help many sentient beings to a greater degree than it helps me. On the other hand, donating 10% for me feels like it’s coming from a place of fitting in with the EA consensus, gaining a certain kind of status, and feeling good, rather than believing it’s the best thing for me to do.
I’m also confused about whether I’m already donating a substantial fraction of my income.
I’m pretty confident that I’m taking at least a 10% pay cut in my current role. If nothing else, my salary right now is not adjusted for inflation, which was ~8% last year, so it feels like I’m underpaid by at least that amount (though it’s possible they were overpaying me before). Many of my friends earn more than twice as much as I do, and I think if I negotiated hard for a 100% salary increase the board would likely comply.
So how much of my lost salary should I consider to be a donation? I think numbers between 0% and 100% are plausible. −50% also isn’t insane to me, as my salary does funge with other people’s donations to charities.
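To make the ambiguity concrete, here’s a minimal sketch with made-up numbers (the salary figures and the 0%/50%/100% attribution choices are purely illustrative, not my actual figures) of how the attribution choice changes the implied donation rate:

```python
# Hypothetical numbers: how much of a voluntary pay cut "counts" as a donation
# depends entirely on the attribution fraction you choose.
market_salary = 100_000   # salary I could plausibly negotiate (made up)
actual_salary = 80_000    # salary I actually take (made up)
explicit_donations = 0    # cash given directly to charities

forgone = market_salary - actual_salary  # salary given up by taking the lower-paid role

for attribution in (0.0, 0.5, 1.0):  # fraction of forgone salary counted as a donation
    counted = explicit_donations + attribution * forgone
    print(f"attribution {attribution:.0%}: effective donation rate "
          f"{counted / market_salary:.1%} of market salary")
```

With these toy numbers, counting none, half, or all of the forgone salary gives effective donation rates of 0%, 10%, or 20% of the market salary—which is why the attribution question matters so much for whether I’m “already donating”.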
One solution is that I should just negotiate for my salary from a non-altruistic perspective, and then decide how much I want to donate back to my organisation after that. This seems a bit inefficient though and I think we should be able to do better.
One reason I don’t donate ~50% of my salary is that I genuinely believe it’s more cost-effective for me to build runway than donate right now. I quite like the idea of discussing this with someone who strongly disagrees with me and I admire and see if they come round to my position. It feels a bit too easy to find reasons not to give, and I’m very aware of my own selfishness in many parts of my life.
A couple of considerations I’ve thought about, at least for myself
(1) Fundamentally, giving helps save/improve lives, and that’s a very strong consideration that we need equally strong philosophical or practical reasons to overcome.
(2) I think value drift is a significant concern. For less engaged EAs, the risk is about becoming non-EA altogether; for more engaged EAs, it’s more about becoming someone less focused on doing good and more concerned with other considerations (e.g. status); this doesn’t have to be an explicit thing, but rather biases the way we reason and decide in a way that means we end up rationalizing choices that help ourselves over the greater good. Giving (e.g. at the standard 10%) helps anchor against that.
(3) From a grantmaking/donor advisory perspective, I think it’s hard to have moral credibility, which can be absolutely necessary (e.g. advising grantees to put up modest salaries in their project proposals, not just to increase project runway but also the chances that our donor partners approve the funding request). And it’s both psychologically and practically hard to do this if you’re not just earning more but earning far more and not giving to charity! Why would they listen to you? The LMIC grantees especially may be turned off—disillusioned—by the fact that they have to accept peanuts while those of us with power over them draw fat stacks of cash! The least we can do is donate! Relatedly, I think part of Charity Entrepreneurship’s success is absolutely down to Joey and co leading by example and taking low salaries.
(4) Runway is a legitimate consideration, especially since there are a lot of potentially impactful things one can do but which won’t be funded upfront (so you need to do it on savings, prove viability and then get it funded). However, I don’t think this is sufficient to outweigh points 1-3.
(5) In general, I think it’s not useful at all to compare with how much others are earning—that only leads to resentment, unhappiness, and less impactful choices. For myself, the vast majority of my friends are non-EAs; we have similar backgrounds (elite education, worked for the Singapore government as policy officers/scholars at one point or another) and yet since leaving government I’ve had a riskier career, earn far less, have fewer savings, and am forced to delay having a family/kids because of all those reasons. All of this is downstream of choices I’ve made as an EA, particularly in avoiding job offers that paid very well but which didn’t have impact (or in fact, had negative impact). Is the conclusion I’m supposed to draw that I’ve made a mistake with my life? I don’t think so, because statistically speaking, some random African kid out there is alive as a result of my donations (and hopefully, my work), and that’s good enough for me.
Thanks for these thoughts. It’s nice to get such detailed engagement. I’m going to try to respond point by point.
(2) - I’m not particularly worried about value drift, and I think there are more effective ways to guard against this than earning to give (e.g. living with people who share your values, talking about EA stuff regularly with people you care about). I think I have quite a lot of evidence in favour of me being pretty resilient to value drift (though I often change my mind about what is important intentionally).
(3) I think this is interesting, though I don’t think that I share this view re being taken seriously. I think that I, and many people I know, have taken actions that they found much harder than donating (e.g. I live in a different country than my partner and in a pretty suboptimal timezone because I think I can do my work better from my current location, I work a lot of hours, I spend a lot of time doing tasks that I find emotionally challenging, I’ve been in situations that I found extremely stressful for ~0 credit). To be clear, I don’t think that I am particularly worthy of praise—but I do think that I score reasonably well on “moral credibility”. Also, I have concerns about this kind of signalling and think it often leads to concerning dynamics—I don’t want EA Funds grantees to feel pressured into taking shoestring salaries. When I was at CE, I remember there being a lot of pressure to take extremely low salaries despite many successful charity founders thinking this was a bad idea. It also led to weird epistemic effects (though I hear things have improved substantially).
(4) I don’t think that runway and grants from EA funders are as fungible as you do. I can talk a bit more about this if that’s useful. I guess that this general point (3) is where we have substantive disagreement. It seems likely to me that I can have much more impact through my career than through my donations—and that having more runway could substantially increase the value of my career. If it doesn’t increase the value of my career and I am wrong, then I can donate later (which I don’t think incurs much in the way of losses from a NTist perspective, but it’s more confusing from a LTist one). To be clear, I think that I’d like to build up 12-24 months of runway, and right now, I have substantially less than that—I am not talking about being able to retire in 10 years or anything.
(5) I think for me, the comparison stuff doesn’t really lead to resentment/unhappiness. It wasn’t clear from my post, but one of the reasons that I made this comparison was because many of my friends do very altruistically valuable work and earn substantially more than I do. They are extremely talented and hard-working (and lucky), and whilst this doesn’t mean that I could get a highly-paying job that generated a lot of altruistic value, I think talking to them regularly has given me an understanding of the kind of work that they do and what it might take to enter a similar role, and it feels doable for me to enter similar roles in a relatively short amount of time (on my inside view). I also have friends that I think are similarly smart/hardworking etc., who earn a lot more money than me in purely for-profit roles. Again, I don’t resent any of these people, and the comparison seems pretty useful to me.
For what it’s worth, I think saving up runway is a no brainer.
During my one year as a tech consultant, I put aside half each month and donated another 10%. The runway I built made the decision for me to quit my job and pursue direct work much easier.
In the downtime between two career moves, it allowed me to spend my time pursuing whatever I wanted without worrying about how to pay the bills. This gave me time to research and write about snakebites, ultimately leading to Open Phil recommending a $500k investment into a company working on snakebite diagnostics.
I later came upon a great donation opportunity—a fish welfare charity—which I gave a large part of my runway to and wouldn’t have been able to support if I had given all my money away two years prior.
Had I given more away sooner I think it would be clearer to myself and others that I was in fact altruistically motivated. I also think my impact would have been lower. Impact over image.
EDIT: Actually it’s probably a some-brainer a lot of the time, seeing as I currently have little runway and am taking a shoestring salary. The reason I take a shoestring salary is to increase my organization’s runway, which is valuable for the same reasons that increasing one’s personal runway is. You don’t have to spend as much time worrying about how your org is going to pay the bills and you can instead focus on impact.
I’m probably misunderstanding you, but I’m confused by (3) and (5). They seem like they somewhat contradict each other. Remove the emotive language and (3) is saying that people in positions of power should donate and/or have lower salaries because donors or grantees might be upset in comparison, and (5) is saying that we shouldn’t compare our own earnings and donations to others.
These claims contradict each other in the following ways:
If we take (5) as a given, (3) no longer makes sense. If it truly is the case that comparing earnings is never useful, we should not expect (or want) grantees or donors to compare earnings.
Hypothetically, maybe your position might be more like “oh it’s clearly bad to compare earnings, but we live in a flawed world with flawed people.” But if that were the case, then acceding to people’s comparisons is essentially enabling a harmful activity, and maybe we should have a higher bar for enabling others’ negative proclivities.
If we take (3) as the primary constraint (Donors/grantees respect us less if we don’t visibly make sacrifices for the Good), then it seems like (5) is very relevant. Pointing out ways in which we sacrificed earnings to take on EA jobs just seems like a really good reply to concerns that we are being overpaid in absolute terms, or are only doing EA jobs for the money. At least in my case, I don’t recall any of our large donors complaining about my salary, but if I did, “I took a >>70% pay cut originally to do EA work” [1] seems like a reasonable response that I predict to mollify most donors.
Though I think it’s closer to ~40-50% now at my current salary, adjusting for inflation? On the other hand, if I stayed and/or switched jobs in tech I’d probably have had salary increases substantially above inflation as well, so it’s kind of confusing what my actual counterfactual is[2]. But I’m also not sure how much I should adjust for liking EA work and being much more motivated at it, which seems like substantial non-monetary compensation. But EA work is also more stressful and in some ways depressing, so hazard pay is reasonable, so...¯\_(ツ)_/¯.
In part because I think if I wasn’t doing EA work the most obvious alternative I’d be aiming for high-variance earning-to-give, which means high equity value in expectation but ~0 payout in the median case.
(I’m writing this in my personal capacity, though I work at GWWC)
On 1: While I think that giving 10% is a great norm for us to have in the community (and to inspire people worldwide who are able to do the same), I don’t think there should be pressure for people to take a pledge or donate who don’t feel inspired to do so—I’d like to see a community where people can engage in ways that make sense for them and feel welcomed regardless of their donation habits or career choices, as long as they are genuinely engaging with wanting to do good effectively.
On 3: I think it makes sense for people to build up some runway or sense of financial stability, and that they should generally factor this in when considering donating or taking a pledge. I personally only increased my donations to >10% after I felt I had enough financial stability to manage ongoing health issues.
I do think that people should consider how much runway or savings they really need, though, and whether small adjustments in lifestyle could increase their savings and allow for more funds to donate—after all, many of us are still in the top few % of global income earners even after taking jobs that pay less than we would get in the private sector.
It may be that building runway is, in fact, the best way to do good in the long term. And maybe certain levels of personal consumption make you more able to sustainably do good through your work.
But just engage seriously with the cost of that runway. With straightforward GiveWell charities, that might mean someone dies annually that you could have saved.
I guess there are two questions it might be helpful to separate.
1. What is the best thing to do with my money if I am purely optimising for the good?
2. How much of my money does the good demand?
Looking at the first question (1), I think engaging with the cost of giving (as opposed to the cost of building runway) wrt doing the most good is also helpful. It feels to me like donating $10K to AMF could make me much less able to transition my career to a more impactful path, costing me months, which could mean that several people die that I could have saved via donating to Givewell charities.
It feels like the “cost” applies symmetrically to the runway and donating cases and pushes towards “you should take this seriously” instead of having a high bar for spending money on runway/personal consumption.
Looking at (2) - Again I broadly agree with the overall point, but it doesn’t really push me towards a particular percentage to give.
For me, if the answer to #1 is in favor of saving for runway, that disposes of the question. Just need to be careful, as you are aware, of motivated reasoning.
For #2, for me, the good demands all of your money. Of course, you are not going to be the most effective agent if you keep yourself in poverty, so this probably doesn’t imply total penury. But insofar as other conscious beings today are capable of positive and negative experiences like you are, it isn’t clear why you should privilege your own over those of other conscious beings.
To share another perspective: As an independent alignment researcher, I also feel really conflicted. I could be making several multiples of my salary if my focus was to get a role on an alignment team at an AGI lab. My other option would be building startups trying to hit it big and providing more funding to what I think is needed.
Like, I could say, “well, I’m already working directly on something and taking a big pay-cut so I shouldn’t need to donate close to 10%”, but something about that doesn’t feel right… But then to counter-balance that, I’m constantly worried that I just won’t get funding anymore at some point and would be in need of money to pay for expenses during a transition.
Fwiw my personal take (and this is not in my capacity as a grantmaker) is that building up your runway seems really important, and I personally think that it should be a higher priority than donating 10%. My guess is that GWWC would suggest dropping your commitment to say 2% as a temporary measure while you build up your savings.
Many people see the commitment of the pledge to give 10% as one over their lifetime, so if you needed to drop back to build up runway for a while, with the intention of donating more in the following years once your finances were more secure, I personally think that would be an acceptable way to fulfil the pledge!
There’s no strict requirement that donations need to be made each year, but GWWC does encourage regular giving where possible.
I would be interested in seeing your takes about why building runway might be more cost-effective than donating.
Separately, if you decide not to go with 10% because you want to think about what is actually best for you, I suggest you give yourself a deadline. Like, suppose you currently think that donating 10% would be better than status quo. I suggest doing something like “if I have not figured out a better solution by Jan 1 2024, I will just do the community-endorsed default of 10%.”
I think this protects against some sort of indefinite procrastination. (Obviously less relevant if you never indefinitely procrastinate on things like this, but my sense is that most people do at least sometimes).
(to be clear, I do donate I just haven’t signed the pledge, and I’m confused about how much I am already donating)
I think the main things are:
whilst I think donating now > donating in the future + interest, the cost of waiting to donate is fairly low (if you’re not worried about value drift)
I can think of many situations in the past where an extra $10k would have been extremely useful to me to move to more impactful work.
I don’t think that it always makes sense for funders to give this kind of money to people in my position.
I now have friends who could probably do this for me, but it has some social cost.
I think it’s important for me to be able to walk away from my job without worrying about personal finances.
My job has a certain kind of responsibility that sometimes makes me feel uneasy, and being able to walk away without having another reason not to seems important.
I think I’ve seen several EAs make poor decisions from a place of poor personal finance and unusual financial security strategies. I think the epistemic effects of worrying about money are pretty real for me.
Also:
If I were trying to have the most impact with my money via donations, I think I would donate to various small things that I sometimes see that funders aren’t well positioned to fund. This would probably mean saving my money and not giving right now.
(I think that this kind of strategy is especially good for me as I have a good sense of what funders can and can’t fund—I think people tend to overestimate the set of things funders can’t fund)
I don’t see why the GWWC 10% number should generalise well to my situation. I don’t think it’s a bad number. I don’t weigh the community prior very strongly relative to my inside view here.
Concerning 2: I think from an organization’s perspective it might even be helpful to agree a salary with you that makes it easier to replace you, and so calculate a realistic runway for the organization. Then you can still voluntarily reduce the salary and claim it as a donation. This is what I’m doing, and it helps with my own budgeting and with that of my org. I was inspired by this post: https://forum.effectivealtruism.org/posts/GxRcKACcJuLBEJPmE/consider-earning-less
At least in the US I’m pretty sure this has very poor tax treatment. The company match portion would be taxable to the employee while also not qualifying for the charitable tax deduction. The idea is they can offer a consistent match as policy, but if they’re offering a higher match for a specific employee that’s taxable compensation. And the employee can only deduct contributions they make, which this isn’t quite.
Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
It seems like many more people are on board with the idea that transformative AI may come soon, let’s say within the next 10 years. This pretty clearly has ramifications for people working on longtermist cause areas, but I think it should probably affect some neartermist cause prioritisation as well.
If you think that AI will go pretty well by default (which I think many neartermists do) I think you should expect to see extremely rapid economic growth as more and more of industry is delegated to AI systems.
I’d guess that you should be much less excited about interventions like deworming or other programs that are aimed at improving people’s economic position over a number of decades. Even if you think the economic boosts from deworming and AI will stack, and that you won’t have sharply diminishing returns on well-being with wealth, I think you should be especially uncertain about your ability to predict the impact of actions in the crazy advanced-AI world (which would generally make me more pessimistic about how useful the thing I’m working on is).
I don’t have a great sense of what the neartermists who think AI will go well should do. I’m guessing some could work on accelerating capabilities, though I think that’s pretty uncooperative. It’s plausible that saving lives now is more valuable than before if you think they might be uploaded, but I’m not sure there is that much of a case for this being super exciting from a consequentialist worldview when you can easily duplicate people. I think working on ‘normy’ AI policy is pretty plausible, or trying to help governments orient to very rapid economic growth (maybe in a similar way to how various nonprofits helped governments orient to covid).
To significantly change strategy, I think one would need to not only believe “AI will go well” but specifically believe that AI will go well for people of low-to-middle socioeconomic status in developing countries. The economic gains from recent technological explosions (e.g., industrialization, the computing economy) have not lifted all boats equally. There’s no guarantee that gaining the technological ability to easily achieve certain humanitarian goals means that we will actually achieve them, and recent history makes me pretty skeptical that it will quickly happen this time.
I’m not an expert but I’d be fairly surprised if the Industrial Revolution didn’t do more to lift people in LMICs out of poverty than any known global health intervention even if you think it increased inequality. Would be open to taking bets on concrete claims here if we can operationalise one well.
I think the Industrial Revolution and other technological explosions very likely did (or will) have an overall anti-poverty impact… but I think that impact happened over a considerable amount of time and was not of the magnitude one might have hoped for. In a capitalist system, people who are far removed from the technological improvements often do benefit from them without anyone directing effort at that goal. However, in part because the benefits are indirect, they are often not quick.
So the question isn’t “when will transformational AI exist” but “when will transformational AI have enough of an impact on the wellbeing of economic-development-program beneficiaries that it significantly undermines the expected benefits of those programs?” Before updating too much on the next-few-decades impact of AI on these beneficiaries, I’d want to see concrete evidence of social/legal changes that gave me greater confidence that the benefits of an AI explosion would quickly and significantly reach them. And presumably the people involved in this work modeled a fairly high rate of baseline economic growth in the countries they are working in, so massive AI-caused economic improvement for those beneficiaries (say) 30+ years from now may have relatively modest impact in their models anyway.
>Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
I think so, see here or here for a bit more discussion on this
>If you think that AI will go pretty well by default (which I think many neartermists do)
My guess/impression is that this just hasn’t been discussed by neartermists very much (which I think is one sad side-effect from bucketing all AI stuff in a “longtermist” worldview)
I often see projects of the form [come up with some ideas] → [find people to execute on ideas] → [hand over the project].
I haven’t really seen this work very much in practice. I have two hypotheses for why.
The skills required to come up with great projects are pretty well correlated with the skills required to execute them. If someone wasn’t able to come up with the idea in the first place, it’s evidence against them having the skills to execute well on it.
Executing well looks less like firing a cannon and more like deploying a heat-seeking missile. In reality, most projects are a sequence of decisions that build on each other, and the executors need to have the underlying algorithm to keep the project on track. In general, when someone explains a project, they communicate roughly where the target is and the initial direction to aim in, but it’s much harder to hand off the algorithm that keeps the missile on track.
I’m not saying separating out ideas and execution is impossible, just that it’s really hard and good executors are rare and very valuable. Good ideas are cheap and easy to come by, but good execution is expensive.
A formula that I see work well more often is [a person has an idea] → [the person executes well on their own idea until they are doing something they understand very well “from the inside”, or the project is otherwise hand-over-able] → [the person hands over the project to a competent executor].
I agree with this and I appreciate you writing this up. I’ve also been mentioning this idea to folks after Michelle Hutchinson first mentioned it to me.
If you want there to be more great organisations, don’t lower the bar
I sometimes hear a take along the lines of “we need more founders who can start organisations so that we can utilise EA talent better”. People then propose projects that make it easier to start organisations.
I think this is a bit confused. I think the reason that we don’t have more founders is that we have few people who have deep models in some high-leverage area and a vision for a project. I don’t think many projects aimed at starting new organisations are really tackling this bottleneck at its core; instead they lower the bar by helping people access funding or appear better positioned than they actually are.
I think in general people that want to do ambitious things should focus on building deep domain knowledge, often by working directly with people with deep domain knowledge. The feedback loops are just too poor within most EA cause areas to be able to learn effectively by starting your own thing. This isn’t always true, but I think it’s more often than not true for most new projects that I see.
I don’t think the normal startup advice where running a startup will teach you a lot applies well here. Most startups are trying to build products that their investors can directly evaluate. They often have nice metrics like revenue and daily active users that track their goals reasonably well. Most EA projects lack credible proxies for success.
Some startups also lack credible success proxies, such as bio startups. I think bio startups are particularly difficult for investors to evaluate, and many experienced VCs avoid the sector entirely unless they have staff with bio PhDs—and even then it’s still pretty hard to evaluate the niche area the startup is working in. Anecdotally, moderately successful bio startups seem much more likely to have a BS product than the average tech startup at a similar level of funding/team size.
Of course, I do think there are founders that are above the bar, but I think starting a new project is actually often very hard and a poor learning environment and I would probably prefer the bar was a bit higher and there were fewer nudges for early career people towards starting new things.
A quickly written model of epistemic health in EA groups I sometimes refer to
I think that many well-intentioned EA groups do a bad job of cultivating good epistemics. By this I roughly mean the culture of the group does not differentially advantage truth-seeking discussions or other behaviours that help us figure out what is actually true, as opposed to what is convenient or feels nice.
I think that one of the main reasons for this is poor gatekeeping of EA spaces. I do think groups do more and more gatekeeping, but they are often not selecting on epistemics as hard as I think they should be. I’m going to say why I think this is and then gesture at some things that might improve the situation. I’m not going to talk at this time about why I think it’s important—but I do think it’s really, really important.
EA group leaders often exercise a decent amount of control over who gets to be part of their group (which I think is great). Unfortunately, it’s much easier to evaluate what conclusion a person has come to than how good their reasoning processes were. So “what a person says they think” becomes the filter for who gets to be in the group, as opposed to how they think. Intuitively, I expect a positive feedback loop where groups become worse and worse epistemically, as people are incentivised to reach a certain conclusion to be part of the group and future group leaders are drawn from a pool of people with bad epistemics and then reinforce this.
If my model is right there are a few practical takeaways:
• be really careful about who you make a group leader or get to start a group (you can easily miss a lot of upside that’s hard to undo later)
• make it very clear that your EA group is a place for truth seeking discussion potentially at the expense of being welcoming or inclusive
• make rationality/epistemics a more core part of what your group values, idk exactly how to do that—I think a lot of this is making it clear that this is what your group is in part about
I’m hoping to have some better takes on this later. I would strongly encourage the CEA groups team to think about this, along with EA group leaders. I don’t think many people are working in this area, though I’d also be sad if people filled up the space with low-quality content—so think really hard about it and try to be careful about what you post.
>I’m not going to talk at this time about why I think it’s important—but I do think it’s really, really important.
As someone both trying to start a group and to find someone else to run it so I can move to other places, I’m really curious about your perspective on this. In my model, a lot of the value of a group comes from helping anyone that’s vaguely interested in doing good effectively to better achieve their goals, and to introduce them to online resources, opportunities, and communities.
I would guess that even if the leader has poor epistemics, they can still do a good enough job of telling people: “EA exists, here are some resources/opportunities you might find useful, happy to answer your 101 questions”.
I have heard a similar take from someone on the CEA groups team, so I would really want to understand this better.
There seems to be anxiety and concern about EA funds right now. One thread is here.
Your profile says you are the head of EA funds.
Can you personally make a statement to acknowledge these concerns, say this is being looked into, or anything else substantive? I think this would be helpful.
Another model I regularly refer to when advising people on projects to pursue. Quickly written—may come back and try to clarify this later.
I think it’s generally really important for people to be inside view excited about their projects. By which I mean, they think the project is good based on their own model of how the project will interact with the world.
I think this is important for a few reasons. The first, obvious one is that it’s generally much more motivating to work on things you think are good.
The second, and more interesting, reason is that if you are not inside-view excited, I think (generally speaking) you don’t actually understand why your project will succeed, which makes it hard to execute well on your project. When people aren’t inside-view excited about their project, I get the sense that they either have the model but don’t actually believe the project is good, or they are just deferring to others on how good it is, which makes it hard to execute.
I often see public criticisms of EA orgs claiming poor execution on some object level activity or falling short on some aspect of the activity (e.g. my shortform about the 80k jobs board). I think this is often unproductive.
In general, I think we want to give feedback to change the organisation’s policy (decision-making algorithm), and maybe the EA movement’s policy. When you publicly criticise an org on some activity, you should be aware that you are disincentivising the org from generally doing stuff.
Imagine the case where the org strategically chose to run a project scrappily, to get data and some of the upside value, as opposed to planning carefully and failing to execute fully. I think in these cases you should react differently, and from the outside it is hard to know which situation the org was in.
If we also criticised orgs for not doing enough stuff I might feel differently, but this is an extremely hard criticism to make unless you are on the inside. I’d only trust a few people who didn’t have inside information to do this kind of analysis.
Maybe a good idea would be to describe the amount of resources that would have had to go into the project for you to see the outcome as reasonably successful? Idk, it seems hard to be well calibrated.
I expect some people to react negatively to this, and think that I am generally discouraging of criticism. I think that I feel moderately about most criticism, neither helpful nor particularly unhelpful. The few pieces of thoughtful criticism I see written up I think are very valuable, but thoughtful criticism in my view is hard to come by and requires substantial effort.
(crosspost of a comment on imposter syndrome that I sometimes refer to)
I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won’t be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful.
So one strategy is to just try and send lots of information that might help the community work out whether I can be useful, into the world (by doing my job, taking actions in the world, writing posts, talking to people …) and trust the EA community to be tracking some of the right things.
I find it helpful to sometimes be in a mindset of “helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, therefore I am winning (even if I am locally not winning)”.
My first impression of meeting rationalists was at an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren’t taking the project of doing a large amount of good seriously, weren’t reasoning carefully (as opposed to just parroting rationalist memes), and weren’t any better at winning than the standard EA types that I felt were more ‘my crowd’.
I now think that I just met the wrong rationalists early on. The rationalists that I most admire:
Care deeply about their values
Are careful reasoners, and actually want to work out what is true
Are able to disentangle their views from themselves, making meaningful conversations much more accessible
Are willing to seriously consider weird views that run against their current views
Calling yourself a rationalist or EA is a very cheap signal, and I made an error early on (insensitivity to small sample sizes, etc.) in dismissing their community. Whilst there is still some stuff that I would change, I think that the median EA could move several steps in a ‘rationalist’ direction.
Having a rationalist/scout mindset + caring a lot about impact are pretty correlated with me finding someone promising. It’s not essential to having a lot of impact, but I am starting to think that EA is doing the altruism (A) part of EA super well and the rationalists are doing the effective (E) part of EA super well.
My go to resources are probably:
The Scout Mindset—Julia Galef
The Codex—Scott Alexander
The Sequences Highlights—Eliezer Yudkowsky/LessWrong
I adjust upwards on EAs who haven’t come from excellent groups
I spend a substantial amount of my time interacting with community builders and doing things that look like community building.
It’s pretty hard to get a sense of someone’s values, epistemics, agency …. by looking at their CV. A lot of my impression of people that are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.
I worry that there are some people who were introduced to some set of good ideas first, and then people use this as a proxy for how good their reasoning skills are. On the other hand, it’s pretty easy to be in an EA group where people haven’t thought hard about different cause areas/interventions/… and come away with the mean take, which isn’t very good despite being relatively good reasoning-wise.
When I speak to EAs I haven’t met before I try extra hard to get a sense of why they think x and how reasonable a take that is, given their environment. This sometimes means I am underwhelmed by people who come from excellent EA groups, and impressed by people who come from mediocre ones.
You end up winning more Caleb points if your previous EA environment was ‘bad’ in some sense, all else equal.
(I don’t defend why I think a lot of the causal arrow points from the EA environment quality to the EA quality—I may write something on this, another time.)
‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one
I sometimes see criticisms around
EA is too elitist
EA is too focussed on exceptionally smart people
I do think that you can have a very outsized impact even if you’re not exceptionally smart, dedicated, driven etc. However I think that from some perspectives focussing on outliery talent seems to be the right move.
A few quick claims that push towards focusing on attracting outliers:
The main problems that we have are technical in nature (particularly AI safety)
Most progress on technical problems historically seems to be attributable to a surprisingly small set of the total people working on the problem
We currently don’t have a large fraction of the brightest minds working on what I see as the most important problems
If you are more interested in neartermist cause areas I think it’s reasonable to place less emphasis on finding exceptionally smart people. Whilst I do think that very outliery-trait people have a better shot at very outliery impact, I don’t think that there is as much of an advantage for exceptionally smart people over very smart people.
(So if you can get a lot of pretty smart people for the price of one exceptionally smart person then it seems more likely to be worth it.)
This seems mostly true to me by observation, but I have some intuition that motivates this claim.
AIS is a more novel problem than most neartermist causes; there’s a lot of work going into getting more surface area on the problem, as opposed to moving down a well-defined path.
Being more novel also makes the problem more first-mover-y, so it seems important to start with a high density of good people to push it onto good trajectories.
The resources for getting up to speed on the latest stuff seem less good than in more established fields.
Some quick thoughts on AI consciousness work, I may write up something more rigorous later.
Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.
I think aspects of EA that make me more sad is that there seems to be a few extremely important issues on an impartial welfarist view that don’t seem to get much attention at all, despite having been identified at some point by some EAs. I do think that ea has done a decent job of pointing at the most important issues relative to basically every other social movement that I’m aware of but I’m going to complain about one of it’s shortcomings anyway.
It looks to me like we could build advanced ai systems in the next few years and in most worlds we have little idea of what’s actually going on inside them. The systems may tell us they are conscious, or say that they don’t like the tasks we tell them to do but right now we can’t really trust their self reports. There’ll be a clear economic incentive to ignore self reports that create a moral obligation to using the systems in less useful/efficient ways. I expect the number of deployed systems to be very large and that it’ll be plausible that we lock in the suffering of these systems in a similar way to factory farming. I think there are stronger arguments for the topic’s importance that I won’t dive into right now but the simplest case is just the “big if true-ness” of this area seems very high.
My impression is that our wider society and community is not orienting in a sane way to this topic. I don’t remember ever coming across a junior EA seriously considering directing their career to work in this area. 80k has a podcast with Rob Long and a very brief problem profile (that seems kind of reasonable), ai consciousness (iirc) doesn’t feature in ea virtual programs or any intro fellowship that I’m aware of, there haven’t been many (or any?) talks about it at eag in the last year. I do think that most organisations could turn around and ask “well what concrete action do you actually want our audience to take” and my answers are kind of vague and unsatisfying right now—I think we were at a similar point with alignment a few years ago and my impression is that it had to be on the communities mind for a while before we were able to pour substantial resources into it (though the field of alignment feels pretty sub-optimal to me and I’m interested in working out how to do a better job this time round).
I get that there aren’t shovel ready directions to push people to work on right now, but in so far as our community and its organisations brand themselves substantially as the groups identifying and prioritising the worlds most pressing problems it sure does feel to me like more people should have this topic on their minds.
There are some people I know of dedicating some of their resources to making progress in this area, and I am pretty optimistic about the people involved—the ones that I know of seem especially smart and thoughtful.
I don’t want all of the EA to jump into this rn, and I’m optimistic about having a research agenda in this space that I’m excited about and maybe even a vague plan about what one might do about all this by the end of this year—after which I think we’ll be better positioned to do field building. I am excited about people who feel especially well placed moving into this area—in particular people with some familiarity with both mainstream theories of consciousness and ml research (particularly designing and running empirical experiments). Feel free to reach out to me or apply for funding at the ltff.
(quick thoughts, may be missing something obvious)
Relative to the scale of the long-term future, the number of AIs deployed in the near term is very small, so to me it seems like there’s pretty limited upside to improving their situation. In the long term, it seems like we can have AIs figure out the nature of consciousness for us.
Maybe I’m missing the case that lock-in is plausible; it currently seems pretty unlikely to me because the singularity seems like it will transform the ways the AIs are running. So in my mind it mostly matters what happens after the singularity.
I’m also not sure about the tractability, but the scale is my major crux.
I do think understanding AI consciousness might be valuable for alignment, I’m just arguing against work on nearterm AI suffering.
I agree with your “no lock-in” view in the case of alignment going well: in that world, we’d surely use the aligned superintelligence to help us with things like understanding AI sentience and making sure that sentient AIs aren’t suffering.
In the case of misalignment and humanity losing control of the future, I don’t think I understand the view that there wouldn’t be lock-in. I may well be missing something, but I can’t see why there wouldn’t be lock-in of things related to suffering risk—for example, whether or not the ASI creates sentient subroutines which help it achieve its goals but which incidentally suffer. In theory, that could be steered away from even if we fail at alignment, given that the ASI’s future actions (even if they’re very hard to predict exactly) are determined by how we build it, and we could likely steer away from such outcomes more effectively if we better understood AI sentience (because then we’d know more about things like what kinds of subroutines can suffer).
Edit: I have a lot of sympathy for the take above but I tried to write up my response around why I think lock-ins are pretty plausible.
I’m not sure rn whether the majority of downside comes from lock-in but I think that’s what I’m most immediately concerned about.
I assume by singularity you mean an intelligence explosion or extremely rapid economic growth. I think my default story for how this happens in the current paradigm involves people using AIs in existing institutions (or institutions that look pretty similar to today’s), in markets that look pretty similar to current markets and which (on my view) are unlikely to care about the moral patienthood of AIs, for reasons pretty similar to current market failures.
On the “markets still exist and we do things kind of like how we do them now” view—I agree that in principle we’d be better positioned to make progress on problems generally if we had something like PASTA, but I feel like you need to tell a reasonable story for one of:
how governance works post-TAI so that you can easily enact improvements like eliminating AI suffering
why current markets do allow for things like factory farming and slavery but wouldn’t allow for violations of AI preferences
I’m guessing your view is that progress will be highly discontinuous and society will look extremely different post-singularity to how it does now (kind of like going from pre-agricultural-revolution times to now, whereas my view is more like pre-industrial-revolution times to now).
I’m not really sure where the cruxes are on this view or how to reason about it well, but my high-level argument is that the “god-like AGI which has significant responsibility but still checks in with its operators” will still need to make some trade-offs across various factors, and unless it’s doing some CEV-type thing, outcomes will be fairly dependent on the goals that you give it. It’s not clear to me that the median world leader or CEO gives the AGI goals that concern the AI’s wellbeing (or its subsystems’ wellbeing), even if it’s relatively cheap to evaluate. I am more optimistic about AGI controlled by a person sampled from a culture that has already set up norms around how to orient to the moral patienthood of AI systems than one that needs to figure it out on the fly. I do feel much better about worlds where some kind of reflection process is overdetermined.
My views here are pretty fuzzy and are often influenced substantially by thought experiments like “If a random tech CEO could effectively control all the world’s scientists, have them run at 10x speed, and had 100 trillion dollars, does factory farming still exist?”, which isn’t a very high epistemic bar to beat. (I also don’t think I’ve articulated my models very well and I may take another stab at this later on.)
I have some tractability concerns, but my understanding is that few people are actually trying to solve the problem right now, and when few people are trying, it’s pretty hard for me to get a sense of how tractable a thing is, so my priors on similarly shaped problems are doing most of the work (which leaves me feeling quite confused).
I’m really glad you wrote this; I’ve been worried about the same thing. I’m particularly worried at how few people are working on it given the potential scale and urgency of the problem. It also seems like an area where the EA ecosystem has a strong comparative advantage — it deals with issues many in this field are familiar with, requires a blend of technical and philosophical skills, and is still too weird and nascent for the wider world to touch (for now). I’d be very excited to see more research and work done here, ideally quite soon.
Very strong +1 to all this. I honestly think it’s the most neglected area relative to its importance right now. It seems plausible that the vast majority of future beings will be digital, so it would be surprising if longtermism does not imply much more attention to the issue.
I hadn’t seen this until now, but it’s good to see that you’ve come to the same conclusion I have. I’ve just started my DPhil in Philosophy and plan on working on AI mental states and welfare.
Update 2024-Jul-5 as this seems to be getting some attention again: I am not sure whether I endorse the take below anymore—I think 80k made some UI changes that largely address my concerns.
The 80k job board has too much variance.
(Quickly written, will probably edit at some point in future)
Jobs on the main 80k job board can range from (in my estimation) negligible value to among the best opportunities I’m aware of. I have also seen a few jobs that I think are probably actively harmful (e.g., token alignment orgs trying to build AGI where the founders haven’t thought carefully about alignment—based on my conversations with them).
I think a helpful orientation towards jobs on the jobs board is “at least one person with EA values who happens to work at 80k thinks this is worth signal boosting”, and NOT “EA/80k endorses all of these jobs without a lot more thought from potential applicants”.
Jobs are also on the board for a few different reasons (e.g., building career capital vs. direct impact vs. …), and there isn’t much info about why a given job is there in the first place.
I think 80k does try to give more of this vibe than people get. I don’t mean to imply that they are falling short in an obvious way.
I also think that the jobs board is more influential than 80k thinks. Explicit endorsements of organisations from core EA orgs are pretty rare, and I think 80k would be surprised by how many young EAs over-update on their suggestions (though I’m only medium confidence in the board being that influential).
My concrete suggestion for improvement would be to separate jobs into a few different boards, according to how much 80k endorses the organisation.
One thing I find slightly frustrating is that the response I have heard from 80k staff to this is that the main reason they don’t do it is around managing relationships with the organisations (which could be valid). Idk if it’s the right call, but I think it’s a little sus—I think people are too quick to jump to the nice thing that doesn’t make them feel uncomfortable over the impact-maximising thing (pin to write more about this in future).
One error that I think I’m making is criticising an org for doing a thing that is probably much better than not doing the thing, even if I think it’s leaving some value on the table. I think that this is kind of unhealthy and incentivises inaction. I’m not sure what to do about this, other than flag that I think 80k is great, as is most of the stuff they do, and that I’d rather orgs had a policy of occasionally producing things that I feel moderately about, if this helps them do a bunch of cool stuff, than underperform and not get much done (pin to write more about this in future).
Agree!
My best idea for solving this is making an alternative view of 80k’s job board with some clear reasons to prefer it, and adding features to it like “here’s a link to the org’s AMA post”, where I hope the community can comment on things like “this org is trying to build an AGI with little concern for safety”, and lots of people can upvote it. No political problems for 80k. Lots of good, high-quality discussions. Hopefully.
What do you think?
Regarding some jobs being there just for building career capital—I only learned about this a few days ago and it kind of worries me. I don’t have good ideas on how to solve it.
>it kind of worries me
Is that because you think the job board shouldn’t list career capital roles, because it wasn’t obvious that the roles were career capital-related, or something else?
What worries me:
I think lots of people take (and took) a job from 80k’s board…
hoping to do something impactful,
in fact doing something neutral or perhaps (we could discuss this point) actively harmful,
and unaware that this is the situation.
What do you think? (does this seems true? does it seem worrying?)
In case it’s helpful, the first thing below the title on the job board says:
>Some of these roles directly address some of the world’s most pressing problems, while others may help you build the career capital you need to have a big impact later.
I’d be interested in any ideas you had for communicating more clearly that a bunch of the roles are there for a mix of career capital and impact reasons. Giving our guess of the extent to which each role is being listed for career capital vs impact reasons isn’t feasible for various reasons unfortunately.
TL;DR: I think this is very under communicated
You have that line there, but I didn’t notice it in years, and I recently talked to other people who didn’t notice it and were also very surprised. The only person I think I talked to who maybe knew about it is Caleb, who wrote this shortform.
Everyone (I talked to) thinks 80k is the place to find an impactful job.
Maybe the people I talk to are a very biased sample somehow, it could be, but they do include many people who are trying to have a high impact with their career right now
I checked if people know this by opening a poll for the EA Twitter community:
Could you say more on why it’s not feasible? Maybe it’s something we could solve?
Just saying, filtering the jobs by org does sound good to me (in almost all situations), in case that’s the bottleneck.
“This org—we think it’s impactful. That org—just career building”
Oh this is a cool idea! I endorse this on the current margin and think it’s cool that you are trying this out.
I think that ideally a high context person/org could do the curation and split this into a bunch of different categories based on their view (ideally this is pretty opinionated/inside viewy).
Next idea: Have a job board with open vetting, where anyone can comment or disagree with the impact analysis, including the company itself.
What do you think?
I think linking to organisations’ AMAs on the EA Forum is a neat idea! Thanks for sharing. I’ve added it to our list of feature ideas we might build in the future.
Thank you!
I admit I’m a bit worried when I hear “might build in the future” about a feature that seems very small to me (I could add it to my own version), and a part of me is telling me this is your way of saying you actually never want to build it. I’m not sure how to phrase my question exactly… maybe “if someone else did the dev work, would you be happy just putting it in, or is there another bottleneck?”
Also, excuse me for my difficulty understanding subtext; I am trying.
FYI there is a super-linear prize for an automated jobs board. https://www.super-linear.org/prize?recordId=recSFgbnu7VzAHCqY
Yeah, I have an automation to put the tweets in an Airtable, and something to export the past tweets; I just have to put them together.
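In case it helps, here’s a minimal sketch of what that glue code could look like, assuming the past tweets have already been exported to a JSON file and the board lives in an ordinary Airtable base reached through Airtable’s REST API. The base ID, table name, field names, and file path below are made-up placeholders, not details from this thread.

```python
import json
import os

import requests

# All of these identifiers are hypothetical placeholders.
AIRTABLE_TOKEN = os.environ["AIRTABLE_TOKEN"]  # personal access token
BASE_ID = "appXXXXXXXXXXXXXX"
TABLE_NAME = "Jobs"
EXPORTED_TWEETS = "exported_tweets.json"       # output of whatever export step you already have

API_URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}


def tweet_to_record(tweet: dict) -> dict:
    """Map one exported tweet to an Airtable record (field names are assumptions)."""
    return {
        "fields": {
            "Tweet text": tweet.get("text", ""),
            "Tweet URL": tweet.get("url", ""),
            "Posted at": tweet.get("created_at", ""),
        }
    }


def upload(tweets: list[dict]) -> None:
    # Airtable's create endpoint accepts at most 10 records per request, so batch the upload.
    for i in range(0, len(tweets), 10):
        batch = [tweet_to_record(t) for t in tweets[i : i + 10]]
        resp = requests.post(API_URL, headers=HEADERS, json={"records": batch})
        resp.raise_for_status()


if __name__ == "__main__":
    with open(EXPORTED_TWEETS) as f:
        upload(json.load(f))
```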
Do note that it doesn’t solve the problem of high variance
The next feature I want to get is voting, which will work on that problem.
Oh, may I please try to convince you not to create your own voting system?
Initial reasons, as an invitation to push back:
Commenting is more important than voting
If, for example, someone thinks a specific org is actively harmful, I think:
Good situation: Someone writes a comment with the main arguments, references, and so on.
Bad situation: Someone needs to get lots of people to downvote the position. (Or people don’t notice) (or the org gets lots of people to upvote) (or other similar situations)
Upvoting comments is better than both
And the double “upvote/downvote” + “agree/disagree” is even better, where the best comments float up.
See how conversations like that look on the Forum/LessWrong. This is unusually good for the internet, and definitely better than upvoting/downvoting alone.
Is this system perfect? No, but it’s better than anything I’ve seen, definitely better than upvotes alone.
[Reducing friction for people to voice their opinion] is key
+ For platforms like this, the number of active users matters; having a critical mass is important.
So:
Adding a new platform is friction.
I vote for using an existing platform. Like the EA Forum.
Maybe a post without the “frontpage” tag
Maybe a comment on a post
These conversations already fit the EA Forum
It’s discussing the impact of the org.
(I wouldn’t be too surprised if there’s a good reason to use something else, but I doubt it would be a good idea to create a NEW platform)
I have tried to convince the forum team of this, using the methods they asked to be convinced by. There has been some move to put jobs on the forum, but not in a searchable way. I think a new site that pushes better norms would be better.
I largely agree with the object level points you make but I don’t see why you wouldn’t want a new org with better processes.
https://forum.effectivealtruism.org/posts/uxfWrFNH7jSSGhkkS/unofficial-pr-faq-posting-more-jobs-to-the-forum-but-they
Any chance you’d share what you don’t like?
That they posted as if they have the job features even though they don’t?
(btw I don’t recommend using the forum’s FILTERING/SEARCHING, I’d only use their commenting and upvoting. And login)
It’s not searchable or filterable, and there’s no feed you can take from it.
I see.
So indeed I wouldn’t use the forum for that. I’d only link from [something filterable and so on] to forum comments.
What do you think?
I feel a bit confused about how much I should be donating.
On the one hand, there’s just a straightforward case that donating could help many sentient beings to a greater degree than it helps me. On the other hand, donating 10% for me feels like it’s coming from a place of fitting in with the EA consensus, gaining a certain kind of status, and feeling good, rather than believing it’s the best thing for me to do.
I’m also confused about whether I’m already donating a substantial fraction of my income.
I’m pretty confident that I’m taking at least a 10% pay cut in my current role. If nothing else, my salary right now is not adjusted for inflation, which was ~8% last year, so it feels like I’m underpaid by at least that amount (though it’s possible they were overpaying me before). Many of my friends earn more than twice as much as I do, and I think if I negotiated hard for a 100% salary increase the board would likely comply.
So how much of my lost salary should I consider to be a donation? I think numbers between 0% and 100% are plausible. −50% also isn’t insane to me, as my salary does funge with other people’s donations to charities.
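To make the fuzziness concrete, here’s a toy calculation. All the numbers and the credit_fraction parameter are made up for illustration (they aren’t my actual figures or a claim about the right model), but they show how much the implied donation swings depending on what you assume.

```python
# Toy illustration with made-up numbers: how the "implied donation" from taking
# a pay cut depends on two fuzzy assumptions.

actual_salary = 60_000            # hypothetical
counterfactual_salary = 120_000   # hypothetical: what I think I could earn if I negotiated / left
forgone = counterfactual_salary - actual_salary

# credit_fraction: what share of the forgone salary I "count" as a donation.
#   1.0  -> the full pay cut counts as a donation to my org
#   0.0  -> none of it counts (maybe the counterfactual salary is illusory)
#  -0.5  -> the cut is net negative as a donation, e.g. because my lower salary
#           mostly frees up other donors' money that would have been given anyway,
#           while I lose income I could have donated to something I think is better.
for credit_fraction in (1.0, 0.5, 0.0, -0.5):
    implied_donation = credit_fraction * forgone
    share_of_income = implied_donation / actual_salary
    print(f"credit {credit_fraction:+.1f}: implied donation ${implied_donation:,.0f} "
          f"({share_of_income:+.0%} of my actual salary)")
```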
One solution is that I should just negotiate for my salary from a non-altruistic perspective, and then decide how much I want to donate back to my organisation after that. This seems a bit inefficient though and I think we should be able to do better.
One reason I don’t donate ~50% of my salary is that I genuinely believe it’s more cost-effective for me to build runway than to donate right now. I quite like the idea of discussing this with someone I admire who strongly disagrees with me, and seeing whether they come round to my position. It feels a bit too easy to find reasons not to give, and I’m very aware of my own selfishness in many parts of my life.
A couple of considerations I’ve thought about, at least for myself
(1) Fundamentally, giving helps save/improve lives, and that’s a very strong consideration that we need equally strong philosophical or practical reasons to overcome.
(2) I think value drift is a significant concern. For less engaged EAs, the risk is about becoming non-EA altogether; for more engaged EAs, it’s more about becoming someone less focused on doing good and more concerned with other considerations (e.g. status). This doesn’t have to be an explicit thing, but rather biases the way we reason and decide such that we end up rationalizing choices that help ourselves over the greater good. Giving (e.g. at the standard 10%) helps anchor against that.
(3) From a grantmaking/donor advisory perspective, I think it’s hard to have moral credibility, which can be absolutely necessary (e.g. advising grantees to put up modest salaries in their project proposals, not just to increase project runway but also the chances that our donor partners approve the funding request). And it’s both psychologically and practically hard to do this if you’re not just earning more but earning far more and not giving to charity! Why would they listen to you? The LMIC grantees especially may be turned off, even disillusioned, by the fact that they have to accept peanuts while those of us with power over them draw fat stacks of cash! The least we can do is donate! Relatedly, I think part of Charity Entrepreneurship’s success is absolutely down to Joey and co leading by example and taking low salaries.
(4) Runway is a legitimate consideration, especially since there are a lot of potentially impactful things one can do but which won’t be funded upfront (so you need to do it on savings, prove viability and then get it funded). However, I don’t think this is sufficient to outweigh points 1-3.
(5) In general, I think it’s not useful at all to compare with how much others are earning—that only leads to resentment, unhappiness, and less impactful choices. For myself, the vast majority of my friends are non-EAs; we have similar backgrounds (elite education, worked for the Singapore government as policy officers/scholars at one point or another), and yet since leaving government I’ve had a riskier career, earn far less, have fewer savings, and have been forced to delay having a family/kids for all those reasons. All of this is downstream of choices I’ve made as an EA, particularly in avoiding job offers that paid very well but which didn’t have impact (or, in fact, had negative impact). Is the conclusion I’m supposed to draw that I’ve made a mistake with my life? I don’t think so, because statistically speaking, some random African kid out there is alive as a result of my donations (and hopefully my work), and that’s good enough for me.
Thanks for these thoughts. It’s nice to get such detailed engagement. I’m going to try to respond point by point.
(2) - I’m not particularly worried about value drift, and I think there are more effective ways to guard against this than earning to give (e.g. living with people who share your values, talking about EA stuff regularly with people you care about). I think I have quite a lot of evidence in favour of me being pretty resilient to value drift (though I often change my mind about what is important intentionally).
(3) I think this is interesting, though I don’t think I share this view re being taken seriously. I think that I, and many people I know, have taken actions that we found much harder than donating (e.g. I live in a different country than my partner and in a pretty suboptimal timezone because I think I can do my work better from my current location, I work a lot of hours, I spend a lot of time doing tasks that I find emotionally challenging, and I’ve been in situations that I found extremely stressful for ~0 credit). To be clear, I don’t think that I am particularly worthy of praise—but I do think that I score reasonably well on “moral credibility”. Also, I have concerns about this kind of signalling and think it often leads to concerning dynamics—I don’t want EA Funds grantees to feel pressured into taking shoestring salaries. When I was at CE, I remember there being a lot of pressure to take extremely low salaries despite many successful charity founders thinking this was a bad idea. It also led to weird epistemic effects (though I hear things have improved substantially).
(4) I don’t think that runway and grants from EA funders are as fungible as you do. I can talk a bit more about this if that’s useful. I guess that this general point (3) is where we have substantive disagreement. It seems likely to me that I can have much more impact through my career than through my donations—and that having more runway could substantially increase the value of my career. If it doesn’t increase the value of my career and I am wrong, then I can donate later (which I don’t think incurs much in the way of losses from a NTist perspective, but it’s more confusing from a LTist one). To be clear, I think that I’d like to build up 12-24 months of runway, and right now, I have substantially less than that—I am not talking about being able to retire in 10 years or anything.
(5) I think for me, the comparison stuff doesn’t really lead to resentment/unhappiness. It wasn’t clear from my post, but one of the reasons that I made this comparison was because many of my friends do very altruistically valuable work and earn substantially more than I do. They are extremely talented and hard-working (and lucky), and whilst this doesn’t mean that I could get a highly-paying job that generated a lot of altruistic value, I think talking to them regularly has given me an understanding of the kind of work that they do and what it might take to enter a similar role, and it feels doable for me to enter similar roles in a relatively short amount of time (on my inside view). I also have friends that I think are similarly smart/hardworking etc., who earn a lot more money than me in purely for-profit roles. Again, I don’t resent any of these people, and the comparison seems pretty useful to me.
For what it’s worth, I think saving up runway is a no brainer.
During my one year as a tech consultant, I put aside half each month and donated another 10%. The runway I built made the decision for me to quit my job and pursue direct work much easier.
In the downtime between two career moves, it allowed me to spend my time pursuing whatever I wanted without worrying about how to pay the bills. This gave me time to research and write about snakebites, ultimately leading to Open Phil recommending a $500k investment into a company working on snakebite diagnostics.
I later came upon a great donation opportunity at a fish welfare charity, which I gave a large part of my runway to and which I wouldn’t have been able to support if I had given all my money away two years prior.
Had I given more away sooner, I think it would have been clearer to myself and others that I was in fact altruistically motivated. I also think my impact would have been lower. Impact over image.
EDIT: Actually it’s probably a some-brainer a lot of the time, seeing as I currently have little runway and am taking a shoestring salary. The reason I take a shoestring salary is to increase my organization’s runway, which is valuable for the same reasons that increasing one’s personal runway is. You don’t have to spend as much time worrying about how your org is going to pay the bills and you can instead focus on impact.
(I work with Caleb. Opinions are my own.)
Thank you for your comment.
I’m probably misunderstanding you, but I’m confused by (3) and (5). They seem to somewhat contradict each other. Remove the emotive language, and (3) is saying that people in positions of power should donate and/or take lower salaries because donors or grantees might otherwise be upset by the comparison, while (5) is saying that we shouldn’t compare our own earnings and donations to others’.
These claims contradict each other in the following ways:
If we take (5) as a given, (3) no longer makes sense. If it truly is the case that comparing earnings is never useful, we should not expect (or want) grantees or donors to compare earnings.
Hypothetically, maybe your position might be more like “oh it’s clearly bad to compare earnings, but we live in a flawed world with flawed people.” But if that were the case, then acceding to people’s comparisons is essentially enabling a harmful activity, and maybe we should have a higher bar for enabling others’ negative proclivities.
If we take (3) as the primary constraint (Donors/grantees respect us less if we don’t visibly make sacrifices for the Good), then it seems like (5) is very relevant. Pointing out ways in which we sacrificed earnings to take on EA jobs just seems like a really good reply to concerns that we are being overpaid in absolute terms, or are only doing EA jobs for the money. At least in my case, I don’t recall any of our large donors complaining about my salary, but if I did, “I took a >>70% pay cut originally to do EA work” [1] seems like a reasonable response that I predict to mollify most donors.
Though I think it’s closer to ~40-50% now at my current salary, adjusting for inflation? On the other hand, if I had stayed and/or switched jobs in tech, I’d probably have had salary increases substantially above inflation as well, so it’s kind of confusing what my actual counterfactual is[2]. But I’m also not sure how much I should adjust for liking EA work and being much more motivated at it, which seems like substantial non-monetary compensation. But EA work is also more stressful and in some ways depressing, so hazard pay is reasonable, so...¯\_(ツ)_/¯.
In part because I think that if I wasn’t doing EA work, the most obvious alternative is that I’d be aiming for high-variance earning-to-give, which means high equity value in expectation but ~0 payout in the median case.
Hey Caleb!
(I’m writing this in my personal capacity, though I work at GWWC)
On 1: While I think that giving 10% is a great norm for us to have in the community (and to inspire people worldwide who are able to do the same), I don’t think there should be pressure for people to take a pledge or donate who don’t feel inspired to do so—I’d like to see a community where people can engage in ways that make sense for them and feel welcomed regardless of their donation habits or career choices, as long as they are genuinely engaging with wanting to do good effectively.
On 3: I think it makes sense for people to build up some runway or sense of financial stability, and that they should generally factor this in when considering donating or taking a pledge. I personally only increased my donations to >10% after I felt I had enough financial stability to manage ongoing health issues.
I do think that people should consider how much runway or savings they really need, though, and whether small adjustments in lifestyle could increase their savings and allow for more funds to donate—after all, many of us are still in the top few % of global income earners, even after taking jobs that pay less than we would get in the private sector.
It may be that building runway is, in fact, the best way to do good in the long term. And maybe certain levels of personal consumption make you more able to sustainably do good through your work.
But just engage seriously with the cost of that runway. With straightforward GiveWell charities, that might mean someone dies each year whom you could have saved.
I guess there are two questions it might be helpful to separate.
what is the best thing to do with my money if I am purely optimising for the good?
how much of my money does the good demand?
Looking at the first question (1), I think engaging with the cost of giving (as opposed to the cost of building runway) wrt doing the most good is also helpful. It feels to me like donating $10K to AMF could make me much less able to transition my career to a more impactful path, costing me months, which could mean that several people die whom I could have saved via donating to GiveWell charities.
It feels like the “cost” applies symmetrically to the runway and donating cases and pushes towards “you should take this seriously” instead of having a high bar for spending money on runway/personal consumption.
Looking at (2) - Again I broadly agree with the overall point, but it doesn’t really push me towards a particular percentage to give.
Yes that’s right.
For me, if the answer to #1 is in favor of saving for runway, that disposes of the question. Just need to be careful, as you are aware, of motivated reasoning.
For #2, for me, the good demands all of your money. Of course, you are not going to be the most effective agent if you keep yourself in poverty, so this probably doesn’t imply total penury. But insofar as other conscious beings today are capable of positive and negative experiences like you are, it isn’t clear why you should privilege your own over those of other conscious beings.
To share another perspective: As an independent alignment researcher, I also feel really conflicted. I could be making several multiples of my salary if my focus was to get a role on an alignment team at an AGI lab. My other option would be building startups trying to hit it big and providing more funding to what I think is needed.
Like, I could say, “well, I’m already working directly on something and taking a big pay cut, so I shouldn’t need to donate close to 10%”, but something about that doesn’t feel right… Then, to counterbalance that, I’m constantly worried that I just won’t get funding anymore at some point and will need money to pay for expenses during a transition.
Fwiw, my personal take (and this is not in my capacity as a grantmaker) is that building up your runway seems really important, and I personally think that it should be a higher priority than donating 10%. My guess is that GWWC would suggest dropping your commitment to, say, 2% as a temporary measure while you build up your savings.
Many people see the commitment of the pledge to give 10% as one over their lifetime, so if you needed to drop back to build up runway for a while, with the intention of donating more in the following years once your finances were more secure, I personally think that would be an acceptable way to fulfil the pledge!
There’s no strict requirement that donations need to be made each year, but GWWC does encourage regular giving where possible.
(FYI I work at GWWC)
I would be interested in seeing your takes about why building runway might be more cost-effective than donating.
Separately, if you decide not to go with 10% because you want to think about what is actually best for you, I suggest you give yourself a deadline. Like, suppose you currently think that donating 10% would be better than the status quo. I suggest doing something like “if I have not figured out a better solution by Jan 1 2024, I will just do the community-endorsed default of 10%.”
I think this protects against some sort of indefinite procrastination. (Obviously less relevant if you never indefinitely procrastinate on things like this, but my sense is that most people do at least sometimes).
(to be clear, I do donate; I just haven’t signed the pledge, and I’m confused about how much I am already donating)
I think the main things are:
whilst I think donating now > donating in the future + interest, the cost of waiting to donate is fairly low (if you’re not worried about value drift)
I can think of many situations in the past where an extra $10k would have been extremely useful to me to move to more impactful work.
I don’t think that it always makes sense for funders to give this kind of money to people in my position.
I now have friends who could probably do this for me, but it has some social cost.
I think it’s important for me to be able to walk away from my job without worrying about personal finances.
My job has a certain kind of responsibility that sometimes makes me feel uneasy, and being able to walk away without having another reason not to seems important.
I think I’ve seen several EAs make poor decisions from a place of poor personal finance and unusual financial security strategies. I think the epistemic effects of worrying about money are pretty real for me.
Also:
If I were trying to have the most impact with my money via donations, I think I would donate to various small things that I sometimes see that funders aren’t well positioned to fund. This would probably mean saving my money and not giving right now.
(I think that this kind of strategy is especially good for me as I have a good sense of what funders can and can’t fund—I think people tend to overestimate the set of things funders can’t fund)
I don’t see why the GWWC 10% number should generalise well to my situation. I don’t think it’s a bad number. I don’t weigh the community prior very strongly relative to my inside view here.
Concerning (2): I think from an organization’s perspective it might even be helpful to agree on a salary that makes it easier to replace you, so the organization can calculate a realistic runway. Then you can still voluntarily reduce the salary and claim the difference as a donation. This is what I’m doing, and it helps with my own budgeting and with that of my org. I was inspired by this post: https://forum.effectivealtruism.org/posts/GxRcKACcJuLBEJPmE/consider-earning-less
Companies often hesitate to grant individual raises due to potential ripple effects:
The true cost of a raise may exceed the individual amount if it:
Necessitates adjustments to other salaries via organizational policies
Sets precedents for other employees
Sparks salary discussions among colleagues
An alternative for altruistic employees: Negotiate for charitable donation matches
Generally less desirable for most employees
Similarly attractive to altruists (Assumption: Current donations to eligible charities exceed the proposed raise)
This approach allows altruists to increase their impact without triggering company-wide salary adjustments.
(About 85% confident, not a tax professional)
At least in the US I’m pretty sure this has very poor tax treatment. The company match portion would be taxable to the employee while also not qualifying for the charitable tax deduction. The idea is they can offer a consistent match as policy, but if they’re offering a higher match for a specific employee that’s taxable compensation. And the employee can only deduct contributions they make, which this isn’t quite.
Very half baked
Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
It seems like many more people are on board with the idea that transformative AI may come soon, let’s say within the next 10 years. This pretty clearly has ramifications for people working on longtermist cause areas, but I think it should probably affect some neartermist cause prioritisation as well.
If you think that AI will go pretty well by default (which I think many neartermists do) I think you should expect to see extremely rapid economic growth as more and more of industry is delegated to AI systems.
I’d guess that you should be much less excited about interventions like deworming or other programs that are aimed at improving people’s economic position over a number of decades. Even if you think the economic boosts from deworming and AI will stack, and that you won’t have sharply diminishing returns of well-being to wealth, I think you should be especially uncertain about your ability to predict the impact of actions in the crazy advanced-AI world (which would generally make me more pessimistic about how useful the thing I’m working on is).
I don’t have a great sense of what neartermists who think AI will go well should do. I’m guessing some could work on accelerating capabilities, though I think that’s pretty uncooperative. It’s plausible that saving lives now is more valuable than before if you think those people might be uploaded, but I’m not sure there is that much of a case for this being super exciting from a consequentialist worldview when you can easily duplicate people. I think working on ‘normy’ AI policy is pretty plausible, or trying to help governments orient to very rapid economic growth (maybe in a similar way to how various nonprofits helped governments orient to COVID).
To significantly change strategy, I think one would need to not only believe “AI will go well” but specifically believe that AI will go well for people of low-to-middle socioeconomic status in developing countries. The economic gains from recent technological explosions (e.g., industrialization, the computing economy) have not lifted all boats equally. There’s no guarantee that gaining the technological ability to easily achieve certain humanitarian goals means that we will actually achieve them, and recent history makes me pretty skeptical that it will quickly happen this time.
I’m not an expert, but I’d be fairly surprised if the Industrial Revolution didn’t do more to lift people in LMICs out of poverty than any known global health intervention, even if you think it increased inequality. I’d be open to taking bets on concrete claims here if we can operationalise one well.
I think the Industrial Revolution and other technological explosions very likely did (or will) have an overall anti-poverty impact… but I think that impact happened over a considerable amount of time and was not of the magnitude one might have hoped for. In a capitalist system, people who are far removed from the technological improvements often do benefit from them without anyone directing effort at that goal. However, in part because the benefits are indirect, they are often not quick.
So the question isn’t “when will transformational AI exist” but “when will transformational AI have enough of an impact on the wellbeing of economic-development-program beneficiaries that it significantly undermines the expected benefits of those programs?” Before updating too much on the next-few-decades impact of AI on these beneficiaries, I’d want to see concrete evidence of social/legal changes that gave me greater confidence that the benefits of an AI explosion would quickly and significantly reach them. And presumably the people involved in this work modeled a fairly high rate of baseline economic growth in the countries they are working in, so massive AI-caused economic improvement for those beneficiaries (say) 30+ years from now may have relatively modest impact in their models anyway.
I think so, see here or here for a bit more discussion on this
My guess/impression is that this just hasn’t been discussed by neartermists very much (which I think is one sad side effect of bucketing all AI stuff into a “longtermist” worldview)
Why handing over vision is hard.
I often see projects of the form [come up with some ideas] → [find people to execute on ideas] → [hand over the project].
I haven’t really seen this work very well in practice. I have two hypotheses for why.
The skills required to come up with great projects are pretty well correlated with the skills required to execute them. If someone wasn’t able to come up with the idea in the first place, it’s evidence against them having the skills to execute well on it.
Executing well looks less like firing a cannon and more like deploying a heat-seeking missile. In reality, most projects are a sequence of decisions that build on each other, and the executors need to have the underlying algorithm to keep the project on track. In general, when someone explains a project, they communicate roughly where the target is and the initial direction to aim in, but it’s much harder to hand off the algorithm that keeps the missile on track.
I’m not saying separating out ideas and execution is impossible, just that it’s really hard and good executors are rare and very valuable. Good ideas are cheap and easy to come by, but good execution is expensive.
A formula that I more often see work well is [a person has an idea] → [the person executes well on their own idea until they are doing something they understand very well “from the inside”, or that is otherwise handover-able] → [the person hands over the project to a competent executor].
I agree with this and I appreciate you writing this up. I’ve also been mentioning this idea to folks after Michelle Hutchinson first mentioned it to me.
If you want there to be more great organisations, don’t lower the bar
I sometimes hear a take along the lines of “we need more founders who can start organisations so that we can utilise EA talent better”. People then propose projects that make it easier to start organisations.
I think this is a bit confused. I think the reason that we don’t have more founders is that we have few people who have deep models in some high-leverage area and a vision for a project. I don’t think many projects aimed at starting new organisations are really tackling this bottleneck at its core; instead they lower the bar by helping people access funding, or make people appear better positioned than they actually are.
I think in general people who want to do ambitious things should focus on building deep domain knowledge, often by working directly with people who have deep domain knowledge. The feedback loops are just too poor within most EA cause areas to learn effectively by starting your own thing. This isn’t always true, but I think it’s true more often than not for most new projects that I see.
I don’t think the normal startup advice where running a startup will teach you a lot applies well here. Most startups are trying to build products that their investors can directly evaluate. They often have nice metrics like revenue and daily active users that track their goals reasonably well. Most EA projects lack credible proxies for success.
Some startups, such as bio startups, also lack credible success proxies. I think bio startups are particularly difficult for investors to evaluate, and many experienced VCs avoid the sector entirely unless they have staff with bio PhDs, and even then it’s still pretty hard to evaluate the niche area the startup is working in. Anecdotally, moderately successful bio startups seem much more likely to have a BS product than the average tech startup at a similar level of funding/team size.
Of course, I do think there are founders who are above the bar, but I think starting a new project is often very hard and a poor learning environment, and I would probably prefer that the bar were a bit higher and that there were fewer nudges towards starting new things for early-career people.
A quickly written model of epistemic health in EA groups I sometimes refer to
I think that many well-intentioned EA groups do a bad job of cultivating good epistemics. By this I roughly mean that the culture of the group does not differentially advantage truth-seeking discussions or other behaviours that help us figure out what is actually true, as opposed to what is convenient or feels nice.
I think that one of the main reasons for this is poor gatekeeping of EA spaces. I do think groups do more and more gatekeeping, but they are often not selecting on epistemics as hard as I think they should be. I’m going to say why I think this is and then gesture at some things that might improve the situation. I’m not going to talk at this time about why I think it’s important—but I do think it’s really, really important.
EA group leaders often exercise a decent amount of control over who gets to be part of their group (which I think is great). Unfortunately, it’s much easier to evaluate what conclusion a person has come to than how good their reasoning process was. So “what a person says they think” becomes the filter for who gets to be in the group, as opposed to “how they think”. Intuitively, I expect a positive feedback loop where groups become worse and worse epistemically, as people are incentivised to reach a certain conclusion to be part of the group, and future group leaders are drawn from a pool of people with bad epistemics and then reinforce this.
If my model is right, there are a few practical takeaways:
• be really careful about who you make a group leader or let start a group (you can easily miss a lot of upside that’s hard to undo later)
• make it very clear that your EA group is a place for truth-seeking discussion, potentially at the expense of being welcoming or inclusive
• make rationality/epistemics a more core part of what your group values—idk exactly how to do that, but I think a lot of this is making it clear that this is part of what your group is about
I’m hoping to have some better takes on this later. I would strongly encourage the CEA groups team to think about this, along with EA group leaders. I don’t think many people are working in this area, though I’d also be sad if people filled up the space with low-quality content, so think really hard about it and try to be careful about what you post.
As someone both trying to start a group and to find someone else to run it so I can move to other places, I’m really curious about your perspective on this.
In my model, a lot of the value of a group comes from helping anyone that’s vaguely interested in doing good effectively to better achieve their goals, and to introduce them to online resources, opportunities, and communities.
I would guess that even if the leader has poor epistemics, they can still do a good enough job of telling people: “EA exists, here are some resources/opportunities you might find useful, happy to answer your 101 questions”.
I have heard a similar take from someone on the CEA groups team, so I would really want to understand this better.
There seems to be anxiety and concern about EA funds right now. One thread is here.
Your profile says you are the head of EA funds.
Can you personally make a statement to acknowledge these concerns, say this is being looked into, or anything else substantive? I think this would be helpful.
The importance of “inside view excitement”
Another model I regularly refer to when advising people on projects to pursue. Quickly written—may come back and try to clarify this later.
I think it’s generally really important for people to be inside view excited about their projects. By which I mean, they think the project is good based on their own model of how the project will interact with the world.
I think this is important for a few reasons. The first, obvious one is that it’s generally much more motivating to work on things you think are good.
The second, and more interesting, reason is that if you are not inside view excited, I think (generally speaking) you don’t actually understand why your project will succeed, which makes it hard to execute well on it. When people aren’t inside view excited about their project, I get the sense that they either have the model and don’t actually believe the project is good, or they are just deferring to others on how good it is, which makes it hard to execute.
One of my criticisms of criticisms
I often see public criticisms of EA orgs claiming poor execution on some object-level activity, or falling short on some aspect of the activity (e.g. my shortform about the 80k jobs board). I think this is often unproductive.
In general, I think we want to give feedback to change the organisation’s policy (decision-making algorithm), and maybe the EA movement’s policy. When you publicly criticise an org on some activity, you should be aware that you are disincentivising the org from doing stuff in general.
Imagine a case where the org was choosing between scrappily running a project to get data and some of the upside value, as opposed to carefully planning and then failing to fully execute. I think you should react differently in these two cases, but from the outside it is hard to know which situation the org was in.
If we also criticised orgs for not doing enough stuff I might feel differently, but this is an extremely hard criticism to make unless you are on the inside. I’d only trust a few people who didn’t have inside information to do this kind of analysis.
Maybe a good idea would be to describe the amount of resources that would have had to go into the project for you to see the outcome as reasonably successful? Idk, it seems hard to be well calibrated.
I expect some people to react negatively to this and think that I am generally discouraging of criticism. In fact, I feel moderately about most criticism: neither helpful nor particularly unhelpful. The few pieces of thoughtful criticism I see written up I think are very valuable, but thoughtful criticism, in my view, is hard to come by and requires substantial effort.
(crosspost of a comment on imposter syndrome that I sometimes refer to)
I have recently found it helpful to think about how important and difficult the problems I care about are, and to recognise that on priors I won’t be good enough to solve them. That said, the EV of trying seems very, very high, and people who can help solve them are probably incredibly useful.
So one strategy is to just try to send lots of information into the world that might help the community work out whether I can be useful (by doing my job, taking actions in the world, writing posts, talking to people…), and trust the EA community to be tracking some of the right things.
I find it helpful to sometimes be in a mindset of “helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, therefore I am winning (even if I am locally not winning)”.
More EAs should give rationalists a chance
My first impression of meeting rationalists was at an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren’t taking the project of doing a large amount of good seriously, weren’t reasoning carefully (as opposed to just parroting rationalist memes), and weren’t any better at winning than the standard EA types that I felt were more ‘my crowd’.
I now think that I just met the wrong rationalists early on. The rationalists that I most admire:
Care deeply about their values
Are careful reasoners, and actually want to work out what is true
Are able to disentangle their views from themselves, making meaningful conversations much more accessible
Are willing to seriously consider weird views that run against their current views
Calling yourself a rationalist or an EA is a very cheap signal, and I made an error early on (insensitivity to small sample sizes, etc.) in dismissing their community. Whilst there is still some stuff that I would change, I think that the median EA could move several steps in a ‘rationalist’ direction.
Having a rationalist/scout mindset and caring a lot about impact are pretty correlated with me finding someone promising. It’s not essential to having a lot of impact, but I am starting to think that EA is doing the altruism (A) part of EA super well and the rationalists are doing the effective (E) part super well.
My go-to resources are probably:
The Scout Mindset—Julia Galef
The Codex—Scott Alexander
The Sequences Highlights—Eliezer Yudkowsky/LessWrong
The LessWrong highlights
I adjust upwards on EAs who haven’t come from excellent groups
I spend a substantial amount of my time interacting with community builders and doing things that look like community building.
It’s pretty hard to get a sense of someone’s values, epistemics, agency, and so on by looking at their CV. A lot of my impression of people who are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.
I worry that there are some people who happened to be introduced to a good set of ideas first, and then people use this as a proxy for how good their reasoning skills are. On the other hand, it’s pretty easy to be in an EA group where people haven’t thought hard about different cause areas/interventions/… and come away with the mean take, which isn’t very good, despite being relatively good reasoning-wise.
When I speak to EAs I haven’t met before, I try extra hard to get a sense of why they think x, and how reasonable a take that is given their environment. This sometimes means I am underwhelmed by people who come from excellent EA groups, and impressed by people who come from mediocre ones.
You end up winning more Caleb points if your previous EA environment was ‘bad’ in some sense, all else equal.
(I don’t defend why I think a lot of the causal arrow points from the EA environment quality to the EA quality—I may write something on this, another time.)
It’s all about the Caleb points man
‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one
I sometimes see criticisms along the lines of:
EA is too elitist
EA is too focussed on exceptionally smart people
I do think that you can have a very outsized impact even if you’re not exceptionally smart, dedicated, driven, etc. However, I think that from some perspectives, focussing on outliery talent seems to be the right move.
A few quick claims that push towards focusing on attracting outliers:
The main problems that we have are technical in nature (particularly AI safety)
Most progress on technical problems historically seems to be attributable to a surprisingly small set of the total people working on the problem
We currently don’t have a large fraction of the brightest minds working on what I see as the most important problems
If you are more interested in neartermist cause areas, I think it’s reasonable to place less emphasis on finding exceptionally smart people. Whilst I do think that very outliery-trait people have a better shot at very outliery impact, I don’t think that there is as much of an advantage for exceptionally smart people over very smart people.
(So if you can get a lot of pretty smart people for the price of one exceptionally smart person then it seems more likely to be worth it.)
This seems mostly true to me by observation, but I have some intuition that motivates this claim.
AIS is a more novel problem than most neartermist causes; a lot of the work going into it is about getting more surface area on the problem, as opposed to moving down a well-defined path.
Being more novel also makes the problem more first-mover-y, so it seems important to start with a high density of good people to push it onto good trajectories.
The resources for getting up to speed on the latest stuff seem less good than in more established fields.