Talk to me about cost benefit analysis!
Charlie_Guthmann
If I want EA to become less decentralized and have some sort of internal political system, what can I do?
I have 0 power or status or ability to influence people outside of persuasive argumentation. On the other hand, MacAskill and co. have a huge ability to do so.
The idea that we can’t blame the high-status people in this community because they aren’t de jure leaders seems misguided, given that they are very likely the only people who could facilitate a system in which there are de jure leaders. I’m not especially interested in assigning blame, but when you ask who could make significant changes to the culture or structure of EA, I think the answer falls on the thought leaders, even if they don’t hold official positions.
I started Northwestern’s EA club with a close friend my sophomore year (2019). My friend graduated at the end of that year and our club was still nascent. There was an exec board of 6 or 7, but truly only a couple could be trusted to both get stuff done and actually understand EA.
Running the club during COVID, responding to all the emails, carrying all this responsibility somewhat alone (alone isn’t quite fair, but close), never meeting anyone in person, and explaining to strangers over and over what EA was stressed and tired me a decent bit (I was 19-20). Honestly, I started to see EA more negatively and didn’t want to engage with the community as much, even though I broadly agreed with it about everything.
I’m not sure I really feel externally higher status in any way because of it. I guess I might feel some internal status/confidence from founding the club, because it is a unique story I have, but I would be lying if I said more than 1 or 2 people hit me up over Swapcard during EAGx Boston (had a great time btw, met really cool people), while my friend who has never interacted with EA outside of NU friends and a fellowship, but has an interesting career, was DMed something like 45 times. And the 2 people who hit me up didn’t even do so because I founded, much less organized, the club. The club’s actual success in terms of current size, average commitment, and probable trajectory does not seem to be data anyone in the community would notice unless I tried to get them to notice. Don’t even get me started on whether they would know if I promoted/delegated (to) the right people. At any point in our club’s history I could have told you which people were committed and which weren’t, but no one ever asked. There are people who work with university groups, but it’s not like they truly knew the ins and outs of the club, and even if I told them how things were truly going, what would that really do for me? Maybe they would be more likely to hire or recommend people who are good at delegating, but anecdotally this doesn’t even seem true to me, and it’s still a far cry from doing impact estimates and funding me based on them. Plus, isn’t it possible that people who delegate less just inherently seem like a more important piece of a university’s “team”? Maybe there are other people waiting to take over and do an even better job, but in that case they are quite literally competition to their boss. Perhaps organizing increases my chance of getting jobs? I’m not sure, and even if it did, it wouldn’t be connected to any sort of impact score.
Founding the club has at best a moderate impact on its own. It is the combination of starting the club and giving it a big enough kick to keep going that I believe is where the value is created; otherwise the club may die and you have basically done nothing. A large part of this “kick” is of course ensuring the people after you are good. Currently, Northwestern’s Effective Altruism club is doing pretty well. We seem to be on pace to graduate 50+ fellows this year, and we have had 10-15 people attend conferences. TO BE CLEAR: I have done almost nothing this year. The organizers that (at risk of bragging) I convinced/told last year to do the organizing this year have done a fire job, much better than I could have. I like to think that if I had put very little effort in last year, or, potentially even worse, not given authority to other positive actors in the club, there would have been a non-trivial chance the club would have just collapsed, though I could be wrong. It does seem as though there is a ton of interest in effective altruism among the young people here, so it’s feasible that this wasn’t such a path-dependent story.
Still: if I had started the club, put almost no effort into creating any structure or giving anyone else a meaningful role during the COVID year other than running events with people I wanted to meet (and coordinating with them myself, which counterintuitively is easier than delegating), and then not stepped down and instead maintained control this year so I could keep doing so, no one would have criticized me, even though this would probably have already cost EA 15-30 committed Northwestern students, and potentially many more down the line. I mean, no one criticized me when I ghosted them last year (lol). If I had had a better sense of the possibility of actually getting paid, now or after school, for this stuff, I could see it increasing the chance I actually did something like the above. Moreover, if I had had a sense of the potential networking opportunities I might have had access to this year (I did almost all my organizing, except the very beginning, during heavy COVID), this would probably have increased my chances of doing something like the above even more than the money.
To be clear, I probably suck at organizing, and even if I hadn’t solely used the club as my own status machine, it would have been pretty terrible if I hadn’t stepped down and been replaced by the people who currently organize.
To summarize/organize:
There is a lack of real supervision from the top of what is happening at these clubs (maybe this has changed; like I said, I wasn’t super involved this year), and to the extent that you might receive status for success while you organize, it seems highly related to how willing you are to reach out to people at CEA and ask for more responsibility, to post updates online, or to generally socialize with other EAs about this stuff.
If you correctly step down so someone better can run the club, it’s not clear there is any sort of reward
I would be surprised if delegating correctly was noticed.
In general, being a good organizer doesn’t even seem to get you much clout in this community; see the other post today about this (I haven’t read it yet)
Thus, the real clout from organizing, especially if you don’t have an online presence, comes from the access organizing can give you
organizing provides opportunities to reach out to anyone in the community
BUT, these opportunities often come hand in hand with specific events that your club is participating in. The most “bonding” moments come from helping plan events with other members of EA from different places. There are a finite number of these and each one you delegate is a lost opportunity to talk to someone at CEA, another organizer, a possible speaker, etc.
It can feel as though you deserve these opportunities, because if you had spent the effort you put into organizing on networking or blogging in the first place, you would probably be more respected, since organizing itself doesn’t seem to get much status. Because there is no real oversight, you are definitely not at risk of getting shamed for using the club as a status machine.
So you start attending meetings that someone else in the club should have been at, or emailing people to ask them to speak at the club when you should have let a freshman or sophomore email them.
Or even giving an intro talk when you should have let a younger student give it, because it means all the other people from your school will see you more as one of the sole leaders of the club (which, tbf, is less related to the overarching concept of this post). I also want to give a nod to the discussion on balancing resilience vs. immediate impact, in the sense that you might give a better talk (or so you think), which will convince more people, which might make the club more resilient. But then I would say you should have coached the younger student better.
Seems like we might be promoting squeaky wheels. You get paid if you ask for money (I think?), you get status if you take it, etc. This could both provide bad incentives and be incredibly frustrating to the shyer folk.
No one has ever reached out to me for advice on starting a club, asked how my experience went, or asked me if I would be interested in meta work. I have never received a cent for any of my community building work. If I were actually getting paid what I believe my time is worth, which is probably still much, much less than the actual value of my time to EA while I was organizing, I would almost certainly be owed (tens of?) thousands of dollars. My sense that this was a community where you didn’t need to market yourself to get to the top was definitely not as true as I originally envisioned. At the same time, I don’t regret starting the club at all. It is probably one of the few things I have done in my life that I feel proud of.
What should we do? Can we federalize clubs? Should we have more data analysts and researchers and CEA people work on this? Would we actually audit a college club? Should we pay organizers more? ← but wouldn’t this increase “vulturism”?
The core realization should be that EA needs institutions that don’t currently exist. Without more complex institutions we are basically being culty and trusting each other on a variety of dimensions. I hope the trust remains, but why not build resiliency (unless, of course, you believe gatekeeping is the solution)?
I know I didn’t precisely answer your questions and more just rambled. Let me know if you have questions, and obviously disagree if I said stuff that sounds wrong. I feel like even though this post is long it’s lacking a lot of nuance I would like to include, but I felt it was best to post it like this.
The biggest reason I am / have been disillusioned is that ethics are subjective (in my view, though I feel very confident). I don’t understand how a movement like EA can even exist within this paradigm unless:
The movement only serves as a knowledge keeper of how to apply epistemics to the real world, with a specific focus on making things “better” (better being left undefined), but does not engage in object-level/non-research work outside of education. Which is almost just LW.
The movement splits into a series of submovements which each have an agreed upon ethical framework. Thus every cause area/treatment can be compared under a standardized cost benefit analysis, and legitimate epistemic progress can be made. Trades can be made between the movements to accomplish shared ethical goals. Wars can be waged when consensus is impossible (sad).
Clearly, neither of the above suggestions is what we are currently doing. It feels low-integrity to me. I’m not sure how we have coalesced around a few cause areas, admitted each is justified basically because of total utilitarianism, and then still act like we are ethically agnostic. That seems like a very clear example of mental gymnastics.
Once you get into this mindset, I feel like it immediately becomes clear that EA in fact doesn’t have particularly good epistemics. We are constantly doing CBAs (or, even worse, just vaguely implying things are good without clear evidence and analysis) with ill-defined goals. Many problems emerge out of this. We have no institutional system for deciding where our knowledge is at and checking decision-making powers (decentralization is good and bad, though). Billionaires have an outsized ability to imprint their notion of ethics on our movement. We hero worship. We pick our careers as much on what looks like it will be funded well by EA and on what other top EAs are doing as on what seems, in theory, to be the best thing to us. Did you get into AI safety because it was justified under your worldview, or did you adopt a worldview because people who seemed smart convinced you of AI safety before you even had a clearly defined worldview?
One reason I’ve never really made comments like this on the forum is that it feels sort of silly. I would get it if people feel like there isn’t a place for anti-realists here, since once you go down the rabbit hole literally everything is arbitrary. Still I find myself more aligned with EAs by far than anyone else in my thinking patterns, so I never leave.
“The average age of Members of the House at the beginning of the 117th Congress was 58.4 years; of Senators, 64.3 years.”
Two (barely) related thoughts that I’ve wanted to bring up. Sorry if it’s super off topic.
Rethink Priorities’ application for a role I applied to two years ago told applicants it was a timed application and not to take over two hours. However, there was no actual verification of this; it was simply a Google Form. In the first round I “cheated” and took about 4 hours. I made it to the second round. I felt really guilty about this, so I made sure not to go over on the second round. I didn’t finish all the questions and did not get to the next round. I was left with the unsavory feeling that they were incentivizing dishonest behavior, and it could easily have been solved by doing something similar to tech companies, where a timer starts when you open the task. I haven’t applied for other stuff since, so maybe they fixed this.
Charity Entrepreneurship made a post a couple months back extending their deadline for the incubator because they thought it was worth it to get good candidates. I decided to apply and made it a few rounds in. I would say I spent 10-ish hours doing the tasks. I might be misremembering, but at the time of extension I’m pretty sure they already had 2,000-4,000 applicants. Considering the time it took me, assuming other applicants were similar, and given the number of applicants they already had, I’m not sure extending the deadline was actually positive EV.
Neither of these things is really that big of a deal, but I thought I’d share.
It seems plausible that there are ≥100,000 researchers working on ML/AI in total. That’s a ratio of ~300:1, capabilities researchers:AGI safety researchers.
Barely anyone is going for the throat of solving the core difficulties of scalable alignment. Many of the people who are working on alignment are doing blue-sky theory, pretty disconnected from actual ML models.
One question I’m always left with is: what is the boundary between being an AGI safety researcher and a capabilities researcher?
For instance, my friend is getting his PhD in machine learning; he barely knows about EA or LW, and definitely wouldn’t call himself a safety researcher. However, when I talk to him, it seems like the vast majority of his work deals with figuring out how ML systems act when put in situations foreign to the training data.
I can’t claim to really understand what he is doing, but it sounds to me a lot like safety research, and it’s not clear to me that it is “blue-sky theory”. A lot of the work he does is high-level maths proofs, but he also does lots of interfacing with ML systems and testing stuff on them. Is it fair to call my friend a capabilities researcher?
This comment might seem somewhat tangential but my main point is that the problem you are trying to solve is unsolvable and we might be better off reframing the question/solution.
My views
(1) Anti-realism is true
Every train/stop is equally crazy/arbitrary.
(2) The EA community has a very nebulous relationship with meta-ethics
obligatory the community is not a monolith
I see lots of people who aren’t anti-realists, lots who are
Community has no political system, so logic/persuasion is often the only way to push many things forward (unless you control resources)
if anti-realism is true there is no logical way to pick a worldview
(if anti-realism is true) most of this discussion is just looping and/or trying to persuade other EAs. I say that as someone who likes to loop about this stuff.
most of the power is in the hands of the high-status early joiners (we constantly reference Karnofsky or MacAskill as if they have some special insight on meta-ethics) and rich people who join the community (give money to whatever supports their worldview).
(3) I think the community should explicitly renounce its relationship with utilitarianism or any other ethical worldview.
Let subgroups pop up that explicitly state their worldview, and the political system they will use to try to get there. e.g. utilitarianism + democracy, welfarism + dictatorship, etc.
You can reflect upon your moral views and divvy up your funds/time to the groups accordingly.
These groups will become the primary retro funders for impact markets, since their goals are clear, and they will have the institutions to decide how they measure impact.
(and also just the main funders, but I wanted to emphasize that this will have good synergy with higher-tech philanthropy).
I feel that this is much more transparent, honest, and clear. This is personally important to me.
We can stop arguing about this sort of stuff and let the money talk.
EA as a “public forum”, not an agent with power
Thanks for the post. I’ve also been surprised by how little this is discussed, even though the value of x-risk reduction is almost totally conditional on the answer to this question (the EV of the future conditional on human/progeny survival). Here are my big points on this issue, though some might be slight rephrasings of yours.
Interpersonal comparisons of utility canonically have two parts: a definition of utility, by which every sentient being is measured, and a set of (subjective) weights for each sentient being; to compare and sum utility, you scale each being’s utility by its weight and add everything up (w_1 u_1 + ... + w_n u_n). If we don’t agree on the weights, it’s possible that one person may think the future is in expectation positive while another thinks it will be negative, even with perfect information about what the future will look like. It could be even harder to agree on the weights of sentient beings when we don’t even know what agents are going to be alive. We have obvious candidates for general rules about how to weight utility (brain size, pain receptors, etc.), but who knows how our conceptions of these things will change.
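To make the sign-flip point concrete, here is a toy numerical sketch (all the utilities and weights below are invented purely for illustration):

```python
# Toy example: two observers agree on each being's expected utility of the future,
# but disagree on the (subjective) interpersonal weights.
utilities = [1.0, -3.0]    # per-being expected utilities (invented numbers)
weights_a = [0.8, 0.2]     # observer A weights being 1 heavily
weights_b = [0.2, 0.8]     # observer B weights being 2 heavily

total_a = sum(w * u for w, u in zip(weights_a, utilities))  # 0.8*1 + 0.2*(-3) = +0.2
total_b = sum(w * u for w, u in zip(weights_b, utilities))  # 0.2*1 + 0.8*(-3) = -2.2
print(round(total_a, 2), round(total_b, 2))  # same facts, opposite signs for the EV of the future
```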
Basically repeating your last point in the chart, but it’s really important so I’ll reiterate: like everything else normative, there is no objective “0” line, no non-arbitrary point at which a life is worth living. It is a decision we have to make. Moreover, I don’t see any agreement in this community on the specific point at which a life is worth living. It is pretty obvious that disagreement about this could flip the sign of the EV of the future.
“Alien counterfactuals”. I actually mentioned this in a comment on a previous post where someone said we should mostly just call longtermism x-risk (extremely wrong in my opinion). First, for simplicity, let’s just assume humans become grabby. If we become grabby, a question of specific interest to us should be: what characteristics do our society and species have relative to other grabby societies/species? Are we going to be better or worse gatekeepers of the future than the other gatekeepers of the future? I’m pretty sure we should take the prior that we display the mean characteristics of a grabby civilization (interested in hearing if others disagree). If this is the case, then, again assuming for simplicity that our lightcone will be populated by aliens whether or not we specifically become grabby, x-risk reduction could be argued to have exactly 0 expected value, as we have no reason to believe that we are going to do a better job with the future than aliens. Updating against this prior would probably take the form of arguments about why our specific evolutionary or economic history was a weird way to become grabby, which is not an easy task. Of course, even with all the simplifying assumptions I’ve made, it’s not so simple. Even if we have the mean characteristics of all the other grabby civilizations, adding more civilizations to the mix can change the game theory of space wars and governance; still, it’s not clear whether more or fewer players is better. I talked to a few people in EA about “alien counterfactuals”, and they all seemed to dismiss the argument, thinking that humans are better than “rolling the dice” on a new grabby civilization. No one provided arguments that were super convincing, though. The most convincing counterargument I heard was that it is very unlikely that grabby aliens will actually end up existing in our lightcone, subverting the whole argument. AI makes this argument significantly more confusing, but it’s not worth getting into without further ironing out of the initial arguments.
And then this is sort of the whole point of your post, but I will reiterate: predicting the future is extremely difficult. We should have very little confidence in what it will be like. Predicting whether the future will be good or bad (given that we have already ironed out the normative considerations, which we haven’t) is probably easier than predicting the future, but still seems really difficult. The burden of evidence is on us to prove the future will be good, not on other people to prove it will be bad; after all, we are pumping huge amounts of money into creating impact that is completely conditional on this information. I’ve found posts like this one to be the only type of thing that even feels tractable, and if that is the level of specificity we are at, it truly does feel like we have been Pascal’s-wagered on this issue. Posts like the one you mentioned ultimately don’t have nearly enough firepower to serve as anything more than an exploration of what a full argument would look like.
It’s easy to say that no one should do what SBF did. If the rumours were true, there are very few ethical systems that would justify the behavior. What’s harder and more action-relevant is to specify ahead of time, very clearly, the exact lines you find acceptable.
What is “fraud”, and how much “fraud” do we allow? You can argue good advertisements always toe the line; at some point, you seriously screw over business opportunities. Now, not gambling with customer money is again an obvious case far exceeding this line.
Lots of companies toe the line of honesty, but we would never expect huge backlash; there is a level that is accepted by society.
What about anti-competitive practices and monopolistic behavior? Should we ask our founders to not profit max? Should we ask them to not lobby the government for tax breaks and favorable licensing, etc. ?
If so, do we feel bad about taking money from Bill Gates? Microsoft has a history of anti-competitive behavior.
What about businesses that probably harm society overall?
Facebook has probably been net-bad for society, in my view (though it is hard to imagine what would exist if it were gone). Should we feel bad about taking money from Meta?
If I’m in a VC meeting trying to get funding for my startup, should I not significantly exaggerate my product? Isn’t this what everyone does while in a VC meeting?
What if we know for sure that we can lie to consumers, make a ton of money, and there won’t be any backlash for it (yes, this is probably not a real situation)? Are we not doing that? If so, we aren’t utilitarians, which is fine, but then why are we utilitarians in our cause prioritization? Seems arbitrary.
etc.
Commenting to say I strongly agree that epistemic and attention distortions are big problems. It already seems like the Future Fund has swayed the ideological center of this movement.
Would like to see an analysis of how the Future Fund changed the ideological mass distribution of this community. I think you could argue that most of the shift it caused was simply from changing incentives and not from new information.
e.g. As someone who has thought EA has underfunded political-type stuff for a while, it’s been concerning to see people get more interested in (EA) politics and spend so much attention on whether politics is worth it and/or how best to do politics just because someone in the community donated $12M (and because they have high status, which is because they are rich…). It’s not like SBF is a poli-sci expert or wrote a groundbreaking cost-benefit analysis to convince us (correct me if I’m wrong). He just went on the 80k podcast, said he thinks politics is a good bet, and then dumped the cash trucks. I understand that even if you disagree with the Flynn campaign you’re going to want to comment on how you disagree, but the implication here is that if an EA billionaire gives $12M to have people dig holes in the ground (ok, it would have to be something a bit more convoluted and/or justifiable), it’s going to at least cause a bunch of impactful people to spend time thinking about the value prop.
If EA people think a project is valuable, we would hope their focus would not be super conditional on the current funding streams.
I have gotten the general feeling that there is not nearly enough curiosity in this community about the ins and outs of politics vs. stuff like the research and tech world. Reports just aren’t very sexy. Specialization can be good, but there are topics that EAs engage with that are probably as specialized (hyper-specific notions of how an AI might kill us?) that see much more engagement, and I don’t think it is due to impact estimates.
I don’t read much on AI safety, so I could be way off, but it feels pretty important: the US government could snap its fingers and double the amount of funding going into AI safety. This seems very salient for predicting the impact of EA AI safety orgs. Either way, this has made me more interested in reading through https://forum.effectivealtruism.org/tag/ai-governance.
Fair point. First let me add another piece of info about Congress: “The dominant professions of Members are public service/politics, business, and law.”
Now on to your point.
How old are the leaders of the military? How many of them know what Python is? What was their major in college? Now ask yourself the same thing about the CIA/NSA/etc. This isn’t a rhetorical question; I assume each department will differ, though there may be a bit of smugness implicit in asking.
Conditional on such a cluster existing, how likely do you think it is that it would be declassified? I don’t find it that unlikely that the NSA or CIA could be running a program and not speaking about it, though it seems possible to figure this out simply by accounting for where every CS/AI graduate in the US works. I feel less strongly that the military would hide such a project. FWIW my epistemic confidence is very low for this entire claim; I am not someone who has obsessed over governmental classification and things like that.
How many CS PhDs are there in the US government in total? How many master’s? How many bachelor’s?
I think there is also more to say about the variety of reasons people feel more comfortable giving their input on economic, social, and foreign policy issues (even if they have no business doing so), which I think could leak into leaders just naturally trending towards dealing with those issues, but I think this is a much more delicate argument that I don’t feel comfortable fleshing out right now.
I think aogara’s point above is reasonable and mostly true, but I don’t think it goes as far as explaining the discrepancy. This is incredibly skewed because of who I associate with (not all of my friends are EAs, though), but anecdotally I think AGI is starting to gain some recognition as a very important issue among people my age (early 20s), specifically those in STEM fields. Not a lot, but certainly more than it is talked about in the mainstream. Let’s be real though: none of my friends will ever be in the military or run for office, nor do I believe they will work for the intelligence agencies. My point is, in addition to age, we have a serious problem with under-representation of STEM in high-up positions and over-representation of lawyers. It would be interesting to test the leaders of various government departments on their level of computer science competency/comprehension.
Hi Peter, thanks for the response. I am/was disappointed in myself also.
I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don’t have kids or anything like that, and I can’t really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants, and I respect that.
What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner’s dilemma. Moreover, it will cause them tons of stress and guilt, but they are way less likely to bring it up than someone who has problems from having to take the test in one sitting, because no one wants to out themselves as a cheater, or even as someone who thought about cheating.
I will say that in school there is something additionally frustrating, or tantalizing, about seeing math tests that usually have a 60% average come back in the 90%s and having that confirmation that everyone in your class is cheating; but given that the people applying are thoughtful and smart, they would probably assign this a high probability anyway.
If I had to bet, I would guess a decent chunk (>20%) of the current RP employees who took similar tests did go over the time limits, but of course this is pure speculation on my part. I just do think a significant portion of people will cheat in this situation (10-50%), and given a random split between the cheaters and non-cheaters, the people who cheat are going to have better essays, so you are more likely to select them.
(To be clear, I’m not saying that even if the above is true you should definitely time the tests; I could still understand it not being worth it.)
I’m confused why you and everyone else in this thread are so quick to dismiss the idea that hunter-gatherers have more happiness/life satisfaction/well-being.
This is not at all obvious to me.
I stopped being vegetarian almost 2 years ago. One of the biggest reasons I’m not a vegetarian is that I stay up late pretty much every day and don’t always feel like cooking or eating snacks, so I will go to whatever is open near me. During university, nothing really stayed open after 10 anyway because Evanston is a lame place, so I would often eat at or before 10, and if I was eating out there were still vegetarian options at that time (stir fry with tofu, Chipotle, etc.).
Now I live in a predominantly Eastern European and Mexican area of Chicago. There isn’t much vegetarian food in this neighborhood in general, although there is still some. However, the vegetarian restaurants here seem to serve a wealthier demographic than the non-vegetarian ones: they close earlier, they’re more expensive, etc. The cheap and late-night options are fast food and taquerias, which essentially have no quality vegetarian items. But since this stuff is open, it actually makes me lazier, and I’ll often eat at 11:00 PM because I can. Getting into this routine means I eat more meat.
I’m pretty sure that if there were a decent, cheap vegetarian restaurant that stayed open till 2:00 am, I would eat at least 1 less meat meal a week, probably 2-3.
Why aren’t there any vegetarian late-night options near me? Probably the normal reasons: no one around here wants to or can open one, or there isn’t enough demand.
In either case, it got me wondering: if there is enough demand to recoup, say, 95%-ish of the cost of a late-night falafel stand, would it be a cost-effective intervention (relative to whatever other things ACE recommends) to fund that last 5%? I might think more about this, unless it’s super obvious to someone that this is orders of magnitude worse than other options.
I started filling this out and then stopped because I’m confused about this CEV and cosmopolitan value stuff, and just generally about what OP means by value. It’s possible I’m confused because I missed something (I skimmed the post but read most of it). Questions that would help me answer the predictions above:
What definition of value are we supposed to be using (my current intuition is the average CEV of humans)?
Was I meant to just answer the above question with my own values (or my CEV)?
Do other people feel like the above questions are invariant to the definition of value / the specific values of the CEVs?
What is the definition of cosmopolitan value, and how is it action-relevant in all of this?
The stuff below is a bit rambly so apologies in advance.
I don’t really get the purpose of CEV for this stuff or why it solves any deep problems of defining value. I definitely think we should reflect on our moral values and update on new information as it feels right to us; this doesn’t mean we’ve solved ethics. It also raises the question of whose CEV we are using: CEV is agent-dependent, so we need to specify how we weight the CEVs of all the agents we are taking into consideration. In any case, my main complaint is that if the answers to the above questions are at least in part a function of what our CEV is (or what definition of value we use), then I feel like we are stacking two questions on top of each other and not necessarily leaving room to talk through the cruxes of either.
Let’s assume we are just taking the average CEV of humans alive today as our definition of value. Some values might be more difficult to pull off than others, as they may be further from what aliens want, or just harder to pull off given the amount of shards we have. Plus, I just assumed we are taking the average of humans’ CEVs, but we don’t know what political system we will have. Who’s to say that just because we have the ASI and an average CEV value, humans will agree to push towards this average CEV? In short, I feel like I’m guessing both the CEV and how achievable that CEV is.
I also don’t really follow the cosmopolitan stuff. I have cosmopolitan intuitions, but I’m unclear what the author is getting at with it. I have some vague sense that it is trying to address the fact that CEV gives special weight to agents that are alive now. Not really sure how to even express my confusion, if I’m being honest.
That being said, I loved this post. Lots of information from disparate places put together. A summary could be nice; maybe I’ll try to write one if no one else does.
I was reached out to by a regranter and got the vibe immediately that they were stressed about proposing grants that would be accepted, basically just optimizing for what they perceived to be the things most likely to get the OK from the team.
Now, again, I only talked to one person, but if regranters were just shooting ideas at the team to be processed similarly to how general apps are processed, then the regranter program serves more as a marketing tool to increase applicants and a slight filter against awful apps than as something that changes who has the power. I would be very interested to see the data on how many regrants were given vs. how many were suggested, compared to the normal funds.
Grant-making as we currently do it seems pretty analogous to a command economy.
I think you might have replied on the wrong subthread, but a few things:
This is the post I was referring to. At the time of extension, they claim they had ~3k applicants. They also imply that they had way fewer (in quantity or quality) applicants for the fish welfare and tobacco taxation projects, but I’m not sure exactly how to interpret their claim.
Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants?
Using some pretty crude math and assuming both applicant pools are the same, each additional applicant has a ~0.7% chance of being one of the 20 best applicants (I think they take 10 or 20), so roughly 150 extra applicants to get one replaced. If they had to internalize the costs to the candidates, and let’s be conservative and say $20 a candidate, that would be about $3k per extra candidate replaced.
And this doesn’t include the fact that the returns consistently diminish. They also have to spend more time reviewing candidates, and even if a candidate is actually better, this doesn’t guarantee they will correctly pick them. You can probably add another couple thousand dollars for these considerations, so maybe we go with ~$5k?
Then you get into issues of fit vs. quality: grabbing better-quality candidates might help CE’s counterfactual value but doesn’t help the EA movement much, since you’re pulling from the same talent pool. And lastly, it’s sort of unfair to the people who applied on time, but that’s hard to quantify.
And I think $20 per candidate is really, really conservative: I value my time closer to $50 an hour than $2, and I’d bet most people applying would say something above $15. So my very general and crude estimate is that they are implicitly saying they value replacing a candidate at $2k-100k, most likely somewhere between $5-50k. I wonder what they would have said if we had asked them, at the time they extended the deadline, how much they would pay to get one candidate replaced. The rough arithmetic is sketched below.
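For transparency, here is that back-of-envelope math written out as a quick sketch (the applicant count, slot count, and per-applicant cost are all just my guesses from above, not CE’s numbers):

```python
# Crude EV sketch for extending the deadline (all inputs are rough guesses).
existing_applicants = 3000   # ~3k applicants claimed at the time of extension
slots = 20                   # they take roughly 10-20; using 20
cost_per_applicant = 20      # $ of applicant time per application (very conservative)

p_replace = slots / (existing_applicants + 1)  # ~0.7% chance a marginal applicant cracks the top 20
applicants_per_swap = 1 / p_replace            # ~150 extra applicants per candidate replaced
cost_per_swap = applicants_per_swap * cost_per_applicant
print(f"{p_replace:.2%} per applicant, ~{applicants_per_swap:.0f} applicants, ~${cost_per_swap:,.0f} per replacement")
# Doesn't include diminishing returns, extra reviewing time, selection noise, or fairness to on-time applicants.
```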
If anyone thinks I missed super obvious considerations or made a mistake, lmk.
I feel like this community was never meant to scale. There is little to no internal structure, and, like others have said, so much of this community relies on trust. I don’t think this is just an issue of “vultures”; it will also be an issue of internal politics and nepotism.
To me the issue isn’t primarily about grantmaking. If you are a good grantmaker, you should see when people’s proposals aren’t super logical or aligned with EA reasoning. More people trying to get big grants is mostly a good thing, even if many are trying to trick us into giving free money. I think the much larger issue is about status/internal politics, where there is no specific moment at which you can decide how aligned someone is.
But first, to give some evidence of vultures: I have already seen multiple people in the periphery of my life submit apps to EAGs who literally don’t even plan on going to the conferences and are just using this as a chance to get a free vacation. I am sorry to say that they may have heard of EA because of me. More than that, I get the sense that a decent contingent of the people at EAGx Boston came primarily for networking purposes (and I don’t mean networking so they can be more effective altruists). At the scale we are at right now, this seems fine, but I seriously think this could blow up quicker than we realize.
Speaking to the internal politics, I believe we should randomly anonymize the names on the forum every few days and see if certain things are correlated with getting more upvotes (more followers on Twitter, a job at a prestigious org, etc.). My intuition has been that having a job at a top EA org means 100-500% more upvotes on your posts here, hell, even on the meme page. Is this what we want? The more people who join for networking purposes, the worse these effects potentially become. That could entail more bias.
I post (relatively) anonymously on Twitter, and the number of (IMO) valid comments I make that don’t get responded to makes me worry we are not as different from normal people as we claim, just having intellectual jousts where we want to seem smart among the other high-status people. To be fair, this is an amazing community, and I trust almost everyone here more than almost anyone outside it to try to be fair about these things.
I get the sense (probably because this is often in the back of my mind) that many people are in fact simply optimizing for status in this group, not positive impact as they define it themselves. Of course, status in this community is associated with positive impact, BUT as defined by the TOP people in the community. Could this be why the top causes haven’t changed much? I don’t feel strongly about this, but it’s worth considering.
As a former group organizer, there is a strong tension between doing what you think is best for the community vs. for yourself. Here is an example: to build resilience for your group, you should try to get the people who might run the group after you leave to run events/retreats and network with other group organizers, so they are more committed, have practice, and have a network built up. But you get more clout if you run the retreats and network with other group organizers yourself. It takes an extremely unselfish person not to just default to not delegating a ton of stuff, in no small part for the clout benefits. This tension exists now, so I’m not claiming it would only result from the influx of money, but now that organizers can get jobs after they graduate, expect this to become a bigger issue.
P.S. If the community isn’t meant to scale, then individual choices like vegetarianism are not justified within our own worldview.