Talk to me about cost-benefit analysis!
Charlie_Guthmann
Good post. I have been following the worm wars, the case against randomistas, etc. At the risk of being blunt (and as someone with personal ties to randomistas), it seems pretty certain to me that growth in almost any form is not what EAs should be focusing on in terms of actual research. So I disagree with the claim that the method of growth is a high-impact space to evaluate, especially when we haven't settled whether growth in general is high or even positive impact.
The long-term effects (and by this I don't mean whether people will be happy ten years after growth occurs) are highly uncertain and, to the best of my intuition, negative. Given the utter lack of any sort of unifying government on this planet, I think we have enough players as is. The topic is obviously a lot more nuanced than that, but suffice it to say that no one is about to come up with an airtight argument for how development will make the world better in 100 years. Like many others have pointed out, it continues to surprise me how much we focus on direct or semi-direct impacts in EA when most of us have accepted longtermism.
The best piece of evidence by far, IMO (and it's not even a very good one), is Pritchett's claim about the negative correlation between income and poverty. And honestly, unless someone can completely dismantle that claim, it's not difficult to see x-risk as a more effective anti-poverty measure, given the general upward trajectory of our planet. My epistemic status on that claim isn't super high, but still.
That being said, empirical poverty research is a very good recruiting tool for finding people who value the EA framework but haven't had their third eye opened. I wonder if this could bite us in the butt at some point, but I don't think EA has much of a choice unless it wants to be an even smaller, more idiosyncratic community than it currently is.
I thought your moderate-drinking point was very interesting and connected some dots in my head. It seems plausible that the vast majority of causal relations are mild. If that's the case, most causality could be 'occurring' through effects too small to register as significant. I guess that could seem pretty obvious, but it isn't something I ever heard discussed in my econometrics class or in my RA work.
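To make that concrete, here is a minimal simulation sketch (all numbers are hypothetical, not from any study): generate many small but real effects and count how many a standard t-test flags as significant. Most of the real-but-mild causality goes undetected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_effects, n_per_study = 1000, 200
# many mild true effects, drawn around zero with small magnitude
true_effects = rng.normal(0.0, 0.05, n_effects)

significant = 0
for effect in true_effects:
    treated = rng.normal(effect, 1.0, n_per_study)
    control = rng.normal(0.0, 1.0, n_per_study)
    _, p = stats.ttest_ind(treated, control)
    significant += p < 0.05

print(f"{significant}/{n_effects} real-but-mild effects detected at p < 0.05")
```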
What would be worthy of an up vs. down vote? I was thinking along similar lines, but my thought was to rank pages by didactic potential according to an SNT framework: if a concept is really important (S), not many people know about it (N), and people would be interested if they did find out (T), it's the highest-priority page (a toy sketch of this scoring is below).
Is this what you meant by best books, or were you just thinking of ranking them by how much you liked them?
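For what it's worth, a toy sketch of that SNT scoring; the page names, scores, and the multiplicative scoring rule are all my own illustrative assumptions, not an established formula:

```python
pages = {
    # page: (importance S, neglectedness N, interest T), each scored 1-10
    "counterfactual reasoning": (9, 7, 6),
    "expected value":           (9, 2, 8),
    "moral uncertainty":        (7, 6, 5),
}

def snt_score(s: int, n: int, t: int) -> int:
    """Higher product = higher-priority wiki page."""
    return s * n * t

for name, scores in sorted(pages.items(), key=lambda kv: -snt_score(*kv[1])):
    print(f"{snt_score(*scores):4d}  {name}")
```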
I can't find the exact location right now, but someone on LW made a web visualization of EA academic papers, with lines between papers representing citations. Something like this could be done for the forum in general using hyperlinks, but it might be cooler to do it with the wiki. The thought behind it, beyond just being a cool visualization, is that many thoughts come in clusters; being able to visualize the thought-space you're in might help you break through plateaus more easily and see how things connect within EA.
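A minimal sketch of that hyperlink-graph idea, assuming the post-to-outgoing-link data has already been scraped (the dict below is hypothetical); networkx plus matplotlib then give a basic cluster view:

```python
import matplotlib.pyplot as plt
import networkx as nx

# hypothetical scraped data: post -> posts it links to
links = {
    "post_a": ["post_b", "post_c"],
    "post_b": ["post_c"],
    "post_d": ["post_a", "post_c"],
}

G = nx.DiGraph()
for post, targets in links.items():
    for target in targets:
        G.add_edge(post, target)  # each edge = one hyperlink, like a citation

nx.draw_networkx(G, node_color="lightblue")  # clusters hint at the thought-space
plt.axis("off")
plt.show()
```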
More of an open question, but I think it's relevant to consider how atomic you make the pages, i.e., how much ideas are embedded/hyperlinked versus written out in full.
One of the other comments here says there might be some evidence of microdosing not doing much. One of my friends swears that a 'hero's journey' is orders of magnitude more impactful than simply taking a normal dose. 1. Is there research being done on heavy one-time usage? 2. If it turned out the most effective way to use psychedelics was a large amount at once, would that be politically feasible?
First, I'm not condoning Bill's behavior. My intuition is that it is good to be trustworthy, to not sexually harass anyone, etc. That being said, I didn't find any of the linked arguments particularly convincing.
“In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it.”
Sure, generally you shouldn't be a jerk, but generally being kind isn't mutually exclusive with achieving goals. Beyond that, what does 'overlay' mean? The statement is quite vague, and I'm actually sure there is some bar of family event he would skip. I'm sure 99%+ of his work with GiveWell is not time-sensitive the way a family event is, so the statement somewhat amounts to a perversion of opportunity cost. In fact, Holden even says in the blog post that nothing is absolute. It's potentially presentist, too, because I would love for people to treat me with respect and kindness, but I would probably prefer that past people had just built infrastructure.
And again, with Julia's statement, she's just saying, "Because we believe that trust, cooperation, and accurate information are essential to doing good." OK, that could be true, but isn't that the core of the question we're asking? When we talk about these types of situations, we are to some extent asking: is it possible that person or group X did more good by not being trustworthy, cooperative, etc.? Maybe this feels less relevant for EA research, but what about EAs running businesses? Microsoft got to the top with extremely scummy tactics, and now we think Bill Gates may be one of the greatest EAs ever. That isn't meant as a steel counterargument; I'm just pointing out that it's not that hard to spin a sentence that contradicts the point.

And to swing back to the original topic: it seems extremely unlikely that sexually harassing people is ever essential or even helpful to having more impact, so it seems fair to say "don't sexually harass people," but not on the grounds that "you should always default to standard generosity, only overlaying your biased agenda on top of the first-level generosity." However, what about having an affair? What if he was miserable and looking for love? If the affair made him 0.5% more productive, there is at least some sort of surface-level utilitarian argument in favor. The same goes for his money manager: if he thought Larson would make 0.5% higher returns than the next-best person, most of which goes to high-impact charity, you can once again spin a (potentially nuance-lacking) argument in favor. And what is the nuance here? It's about how not being standardly good affects your reputation, culture, institutions, people's feelings, etc.
*I also want to point out that Julia is making a utilitarian-backed claim (that trust etc. are instrumentally important), while Holden is backing some sort of moral pluralism (though maybe also endorsing the kindness/standard-goodness-as-instrumental hypothesis).
So while I agree with Holden and Julia on an intuitive level, I think it would be nice if someone actually presented a steelmanned argument (maybe someone has) for what types of unethical behavior could be condoned, or where the edges of these decisions lie. The EA brand may not want to be associated with that essay, though.
It feels a bit to me like EAs are often naturally not 'standardly kind,' or at least not utility-maximizing, because they are so awkward/bad at socializing (in part due to the standard complaints about dark-web, rationalist types), which has bad effects on our connections and careers as well as EA's general reputation. So Central EA is saying: let's push people toward having a reputation for being nice, rather than thinking critically about the edge cases, because it will move our group closer to the correct value of not being weirdos and not getting cancelled (plus there are potentially more important topics to explore, considering that being kind is a fairly safe bet).
I have gotten the general feeling that there is not nearly enough curiosity in this community about the ins and outs of politics versus, say, the research and tech worlds. Reports just aren't very sexy. Specialization can be good, but there are topics EAs engage with that are probably just as specialized (hyper-specific notions of how an AI might kill us?) yet see much more engagement, and I don't think that's due to impact estimates.
I don't read much on AI safety, so I could be way off, but this feels pretty important: the US government could snap its fingers and double the amount of funding going into AI safety. That seems very salient for predicting the impact of EA AI-safety orgs. Either way, this has made me more interested in reading through https://forum.effectivealtruism.org/tag/ai-governance.
I see no reason to pay attention to academic history until historians start flexing their predictive abilities. If historians have nothing to show for their studies other than knowing more history, then history professors are for the most part just individual Dewey Decimal Systems for their hyper-niche topics. Anything I can learn in class I can learn from YouTube/Wikipedia, unless the professor structures the course so that some thinking style, in tandem with the information, helps you see the world more clearly (something I didn't find to be true of the history classes I have taken, though that's obviously anecdotal).
To expand a little (all personal opinions): the world is extremely complicated, and history as an academic field provides basically no tools for understanding it compared to other fields. Furthermore, our information about the past is even more incomplete, and often extremely biased. Finally, the world changes incredibly fast these days; a priori, there doesn't seem to be much reason to expect the overall patterns of society to continue. To use history in any meaningful sense, we would need to find smaller patterns within larger systems that can be clearly defined and then carefully studied across many time periods, and we would only study specific patterns known to be highly influential under certain conditions; there's no reason to study noise. I guess this is possible. I don't think it's easy. If it were easy, we should probably expect to see some ungodly levels of wealth among top macroeconomists and historians. Obviously many don't care about money, but some do, so if they had huge predictive advantages you would expect huge gainzzz, or at the very least we would all look to historians to predict elections (this is sort of a straw man, but one I think turns out to be correct).
When people say understanding history helps us understand ourselves, you should pause. Would you say that learning the history of math helps you understand math? Why or why not?
That being said, I agree with taking easier classes for most people who don't have academic-related goals. However, I think communication/journalism/writing classes may be more useful.
To be clear, I think history is important. My point was that college history classes are not the best forum for learning history, or the aspects of history that matter for prediction. Also, to reiterate: if history as taught by academics is so important for prediction, shouldn't we expect academic historians to be the best forecasters? (To be fair, maybe they are; I'm not an expert on who the best forecasters are, but I kind of assume it's going to be CS people/rationalists.) This comment is currently sitting at −7, but no one has even contested that point or said why it doesn't make sense. Also, I condone taking macro classes.
I see your point, but my response is that I don't need historians to study history. Again, you keep saying that history is useful; I'm not contesting that (though it seems you may think it's more important than I do). I'm contesting that the way you are taught history in the classroom is specifically useful. I've personally found reading macro-history blogs and doing very general overviews on Wikipedia more useful than taking specific courses on a topic in school, in terms of understanding my place in the world and its trajectory. You say historians are not supposed to be predictive. That is literally my point. If historians are just a source of data, what makes a historian, or a history class, different from Wikipedia in any real sense, apart from the motivation grades provide to actually do the material? Why would I take a history class that adds no value over reading the sources when I could have a professional writer coach me on writing skills?
Again, how do you use historical data when attempting to predict things?
Take, for example, guessing which politician wins some election. You might use historical data on how previous elections went to make a prediction (hopefully your model isn't based purely on historical data with no account of how things have changed). However, taking academic history just doesn't seem to provide you with anything here. Maybe historians are the people who combed the primary sources so that the data is on the internet in the first place, but absent a monopoly on that data, I'd trust a CS/rationalist type more to use it in a useful way. Historians will probably offer some story about why something happened; IMO that is antithetical to what we are trying to do here, unless the story is more predictive.
Again, if some history professor at your school teaches in a really quantitative way, or teaches a class about large-scale historical trends, that seems like it could be useful, but that has not been my experience taking history classes.
“The average age of Members of the House at the beginning of the 117th Congress was 58.4 years; of Senators, 64.3 years.”
Fair point. First, let me add another piece of information about Congress: "The dominant professions of Members are public service/politics, business, and law."
Now, on to your point.
How old are the leaders of the military? How many of them know what Python is? What was their major in college? Now ask yourself the same things about the CIA, the NSA, etc. This isn't a rhetorical question; I assume each department will differ, though there may be a bit of smugness implicit in it.
Conditional on such a cluster existing: how likely do you think it is that it would be declassified? I don't find it that unlikely that the NSA or CIA could be running a program without speaking about it, and it seems possible to check this simply by accounting for where every CS/AI graduate in the US works. I feel less strongly that the military would hide such a project. FWIW, my epistemic confidence in this entire claim is very low; I am not someone who has obsessed over governmental classification and the like.
How many CS PhDs are there in the US government in total? How many master's? How many bachelor's?
I think there is also more to say about the variety of reasons people feel more comfortable giving their input on economic, social, and foreign-policy issues (even if they have no business doing so), which I think could leak into leaders just naturally trending toward dealing with those issues, but that is a much more delicate argument that I don't feel comfortable fleshing out right now.
I think aogara's point above is reasonable and mostly true, but I don't think it goes as far as explaining the discrepancy. This is incredibly skewed by who I associate with (not all of my friends are EAs, though), but anecdotally, AGI is starting to gain some recognition as a very important issue among people my age (early 20s), specifically those in STEM fields. Not a lot, but certainly more than it's talked about in the mainstream. Let's be real, though: none of my friends will ever be in the military or run for office, nor do I believe they will work for the intelligence agencies. My point is that, in addition to age, we have a serious problem with under-representation of STEM and over-representation of lawyers in high-up positions. It would be interesting to test the leaders of various government departments on their computer-science competency/comprehension.
https://www.ai.gov/ What do you make of this?
The link for the trustworthy-AI page wasn't broken for me: https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government
But unsurprisingly, it mostly seems like they are talking about bigoted algorithms, not the singularity.
However, it did link to this. Find their abridged 2021 report here:
https://reports.nscai.gov/final-report/table-of-contents/
https://reports.nscai.gov/final-report/chapter-7/
Personally, this looked more promising than anything else I had seen. There was a section titled "Adversarial AI," which I thought might be about AGI, but upon further reading it wasn't. So this also appears to be in the vein of what Ozzie is saying. However, it seems they hold events semi-frequently; I think someone from EA should really try to go if they are allowed. The second link is the chapter of the report closest to AGI stuff, if anyone wants to take a look; again, it's not that impressive.
I also found this: https://www.dod-coe4ai-ml.org/leadership-members
But I can't really tell if this is the DoD's org or Howard University's; it seems like they only hire Howard professors and students, so probably the latter.
The closest paper I could find from them to anything AGI-related: https://www.techrxiv.org/articles/preprint/Recent_Advances_in_Trustworthy_Explainable_Artificial_Intelligence_Status_Challenges_and_Perspectives/17054396/1
Right, I had similar thoughts.
The desert hitchhiker: my intuition here is that if you are completely rational, you realize that if you don't believe you will pay later, you won't get a ride now. In this sense the question feels similar to going to the store, where the clerk says you have to pay for that, and you say no I don't, and they say yes you do, and you say no, really, you can't make me, and they say yes I can. At that point, you pay if you are rational. The only difference is that in this case you don't actually have to pay; you just have to convince yourself that you are going to pay.
The same can be said for the firefighting example if you know they have a lie detector. Once you know you can't lie, this simplifies to a non-temporal problem, IMO, except that you don't actually have to change your brain-state to make yourself help; you just have to convince yourself that that is the brain-state you have.
For Kate the writer, it feels like she isn't actually being selfish, just not thinking long-term. Would she really quit writing, or just not write as much?
Schelling's answer to armed robbery: is bluffing irrational? Only when the costs outweigh the gains. If bluffing is rational but you are too scared to bluff, simply change your brain to be less scared :).
The alien virus:
I'm confused: the virus makes us do good things, but we don't enjoy doing them? So are we being mind-controlled? What does it feel like to have this alien virus?
It seems like the claim is more that being selfish = greater potential valence.
Humans are mostly unique in that we are able both to have utility and to profoundly influence others' utility; hence there is an equilibrium past which, as consequentialists, we would need to shift our worldview toward being selfish (but we are not close to this equilibrium, IMO, given that future humans plus animals probably have much more potential utility than we do).
If there were one human and one dog in the world, the dog doesn't get the virus, and we say the dog has up to 2 potential units of utility while the human has 0 when unselfish and 2 when selfish (and let's say they live forever), then the virus should regulate my behavior, switching between selfish and unselfish, to maximize the sum of utility. I guess you might run into problems of getting stuck in local equilibria here, though.
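A toy version of that switching policy (the payoff numbers come from the example above; the assumption that the human's selfishness zeroes out the dog's utility is my own):

```python
def total_utility(selfish: bool, dog_potential: float) -> float:
    human = 2.0 if selfish else 0.0          # human: 0 unselfish, 2 selfish
    dog = 0.0 if selfish else dog_potential  # dog's utility realized only when human is unselfish
    return human + dog

# the virus picks whichever mode maximizes the sum at each moment
for dog_potential in (3.0, 2.0, 1.0):
    selfish = total_utility(True, dog_potential) > total_utility(False, dog_potential)
    print(f"dog potential {dog_potential}: be {'selfish' if selfish else 'unselfish'}")
```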
Also, I enjoyed the post a lot; thought experiments are always fun.
I am interested in making documentaries and wanted to answer your questions, but found it hard to find a starting point. It feels useful to first compare documentaries to other types of media, and also to ask some probing questions; the range among the documentaries we are talking about is in some cases as large as the range between some documentaries and other media. I started outlining a belief sheet / comparison chart because it helped me organize my thoughts on the subject. It's in no way complete, but feel free to take a look or edit it if useful.
I feel like this community was never meant to scale. There is little to no internal structure, and, like others have said, so much of this community relies on trust. I don't think this is just an issue of "vultures"; it will also be an issue of internal politics and nepotism.
To me the issue isn't primarily about grantmaking. If you are a good grantmaker, you should be able to see when people's proposals aren't logical or aligned with EA reasoning. More people trying to get big grants is mostly a good thing, even if many are trying to trick us into giving them free money. I think the much larger issue is status/internal politics, where there is no specific moment at which you can assess how aligned someone is.
But first, to give some evidence of vultures: I have already seen multiple people on the periphery of my life submit applications to EAGs who don't even plan on attending the conferences and are just using them as a chance to get a free vacation. I'm sorry to say they may have heard of EA because of me. More than that, I get the sense that a decent contingent of the people at EAGx Boston came primarily for networking (and I don't mean networking so they can be more effective altruists). At our current scale this seems fine, but I seriously think it could blow up quicker than we realize.
Speaking to the internal politics: I believe we should randomly anonymize the names on the forum every few days and see whether certain things correlate with getting more upvotes (more followers on Twitter, a job at a prestigious org, etc.). My intuition has been that having a job at a top EA org means 100-500% more upvotes on your posts here (hell, even on the meme page). Is this what we want? The more people who join for networking purposes, the worse these effects potentially become, and that could mean more bias. A rough sketch of what that test could look like is below.
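Here is a minimal sketch of that anonymization test on hypothetical data: correlate a status marker (a top-org job) with upvotes in named vs. anonymized periods. If status drives votes, the correlation should shrink when names are hidden. All numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

# (works_at_top_org, upvotes) pairs; hypothetical data
named = [(1, 40), (1, 35), (0, 12), (0, 15), (1, 50), (0, 9)]
anon = [(1, 18), (1, 22), (0, 16), (0, 19), (1, 20), (0, 14)]

for label, rows in (("named", named), ("anonymized", anon)):
    x, y = map(np.array, zip(*rows))
    r, p = stats.pearsonr(x, y)
    print(f"{label}: corr(top-org job, upvotes) = {r:.2f} (p = {p:.2f})")
```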
I post (relatively) anonymously on Twitter, and the number of (IMO) valid comments I make that get no response makes me worry we are not as different from normal people as we claim, just having intellectual jousts where we want to seem smart among the other high-status people. To be fair, this is an amazing community, and I trust almost everyone here more than almost anyone outside it to try to be fair about these things.
I get the sense (probably because this is often in the back of my mind) that many people are in fact simply optimizing for status in this group, not for positive impact as they define it themselves. Of course, status in this community is associated with positive impact, BUT as defined by the TOP people in the community. Could this be why the top causes haven't changed much? I don't feel strongly about this, but it's worth considering.
As a former group organizer, I can say there is a strong tension between doing what you think is best for the community and what's best for yourself. Here is an example: to build resilience for your group, you should get the people who might run the group after you leave to run events and retreats and to network with other group organizers, so they are more committed, get practice, and build up a network. But you get more clout if you run the retreats and do the networking yourself. It takes an extremely unselfish person not to just default to not delegating, in no small part for the clout benefits. This tension exists now, so I'm not claiming it would only result from the influx of money, but now that organizers can get jobs after they graduate, expect this to become a bigger issue.
P.S. If the community isn’t meant to scale, then individual choices like vegetarianism are not justified within our own worldview.
Longtermism ≠ existential risk, though it seems the community has more or less decided they mean similar things (at least at our current point in history).
Here is an argument to the contrary, "the civilization dice roll": current human society becoming grabby will be worse for the future of our lightcone than the counterfactual society that will (or might) exist and end up becoming grabby if we die out or our civilization collapses.
Now, to directly answer your point on x-risk vs. longtermism: yes, you are correct. Fear-mongering will always trump empathy-mongering in terms of getting people to care. We might worry, though, that in a society already full of fear-mongering, we actually need to push people to build their thoughtful-empathy muscles, not their thoughtful-fear muscles. That is to say, we want people to care about x-risk because they care about other people, not because they care about themselves.
So, turning back to the dice-roll argument: we may prefer to survive because we became more empathetic and expanded our moral circle, and as a result cared about x-risk, rather than because we just really, really didn't want to die in the short term. Once (if) we pass the hinge of history, or at least the peak of existential risk, we still have to decide what the fate of our ecosystem will be. Personally, I would prefer we decide with maximal moral circles.
Some potential gaps in my argument: (1) there might be reasons to believe that our lightcone will be better off with current human society becoming grabby, in which case we really should just be optimizing almost exclusively for reducing x-risk (probably); (2) focusing on fear-mongering x-risk rather than empathy-mongering x-risk may not decrease the likelihood of people expanding their moral circles, and maybe it would even increase moral-circle expansion, because it would actually get people to grapple with the possibility of these issues; (3) moral-circle expansion won't actually make the future go better; (4) AI will be uncorrelated with human culture, making this whole argument sort of irrelevant if the AI does the grabbing.
I started Northwestern's EA club with a close friend during my sophomore year (2019). My friend graduated at the end of that year, and our club was still nascent. There was an exec board of six or seven, but truly only a couple could be trusted both to get things done and to actually understand EA.
Running the club during COVID, responding to all these emails, carrying all this responsibility somewhat alone (alone isn't quite fair, but close), never meeting anyone in person, and having to explain to strangers over and over again what EA was, stressed and tired me out a decent bit (I was 19-20). Honestly, I just started to see EA more negatively and didn't want to engage with the community as much, even though I broadly agreed with it about everything.
I'm not sure I really feel externally higher-status in any way because of it. I might feel some internal status/confidence from founding the club, because it's a unique story I have, but I would be lying if I said more than one or two people hit me up over Swapcard during EAGx Boston (had a great time, by the way; met really cool people), while my friend, who has never interacted with EA outside of NU friends and the fellowship but has an interesting career, was DMed something like 45 times. And the two people who hit me up didn't even do so because I founded, much less organized, the club.

The actual success of the club, in terms of current size, average commitment, and probable trajectory, is not data anyone in the community would notice unless I tried to get them to notice it. Don't even get me started on whether they would know if I promoted or delegated to the right people. At any point in our club's history I could have told you which people were committed and which weren't, but no one ever asked. There are people who work with university groups, but it's not like they truly knew the ins and outs of the club, and even if I told them how things were truly going, what would that really do for me? Maybe they would be more likely to hire or recommend people who are better at delegating, but anecdotally this doesn't even seem true, and it would still be a far cry from doing impact estimates and funding me based on them. Plus, isn't it possible that people who delegate less just inherently seem like a more important piece of a university's "team"? Maybe there are other people waiting to take over and do an even better job, but they are quite literally competition to their boss in that case. Perhaps it increases my chance of getting jobs? I'm not sure, and even if it did, it wouldn't be connected to any sort of impact score.
Founding the club has at best a moderate impact on its own. It is the combination of starting the club and giving it a big enough kick to keep going where I believe the value is created; otherwise the club may die and you basically did nothing. A large part of this "kick" is, of course, ensuring the people after you are good. Currently, Northwestern's EA club is doing pretty well: we seem to be on pace to graduate 50+ fellows this year, and 10-15 people have attended conferences. TO BE CLEAR: I have done almost nothing this year. The organizers whom (at the risk of bragging) I convinced last year to do the organizing this year have done a fire job, much better than I could have. I like to think that if I had put in very little effort last year, or, potentially worse, not given authority to other positive actors in the club, there would have been a non-tiny chance the club would have just collapsed, though I could be wrong. It does seem like there is a ton of interest in effective altruism among the young people here, so it's feasible this wasn't such a path-dependent story.
Still: if I had started the club, put almost no effort into creating structure or giving anyone else a meaningful role during the COVID year, other than running events with people I wanted to meet (and coordinating with them myself, which, counterintuitively, is easier than delegating), and then not stepped down but maintained control this year so I could continue doing so, no one would have criticized me, even though that would probably have cost EA 15-30 committed Northwestern students already, and potentially many more down the line. I mean, no one criticized me when I ghosted them last year (lol). If I had had a better sense of the possibility of actually getting paid, now or after school, for this stuff, I could see it having increased the chance I did something like the above. Moreover, if I had had a sense of the networking opportunities I might have had access to this year (I did almost all my organizing, except the very beginning, during heavy COVID), that would probably have increased my chances of doing something like the above even more than the money.
To be clear, I probably suck at organizing; even if I hadn't used the club solely as my own status machine, it would have been pretty terrible if I hadn't stepped down and been replaced by the people who currently organize.
To summarize/organize:
There is a lack of real supervision from the top of what is happening at these clubs (maybe this has changed; like I said, I wasn't super involved this year), and to the extent that you might receive status for success while you organize, it seems highly related to how willing you are to reach out to people at CEA and ask for more responsibility, to post updates online, or to generally socialize with other EAs.
If you correctly step down so someone better can run the club, it's not clear there is any sort of reward.
I would be surprised if delegating correctly was noticed.
In general, being a good organizer isn't even something that seems to get you much clout in this community; see the other post today about this (I haven't read it yet).
Thus, the real clout from organizing, especially if you don't have an online presence, comes from the access organizing can give you:
Organizing provides opportunities to reach out to anyone in the community.
BUT these opportunities often come hand in hand with specific events that your club is participating in. The most "bonding" moments come from helping plan events with other EAs from different places. There are a finite number of these, and each one you delegate is a lost opportunity to talk to someone at CEA, another organizer, a possible speaker, etc.
It can feel as though you deserve these opportunities, because if you had just spent the work you put into organizing on networking or blogging in the first place, you would probably be more respected, since organizing doesn't get much status to begin with. And because there is no real oversight, you are definitely not at risk of getting shamed for using the club as a status machine.
So you start attending meetings that someone else in the club should have been at, or emailing people to ask them to speak at the club when you should have let a freshman or sophomore email them.
Or even giving an intro talk when you should have let a younger student give it, because it means all the other people from your school will see you more as one of the sole leaders of the club (which, to be fair, is less related to the overarching concept of this post). I also want to give a nod to the discussion on balancing resilience vs. immediate impact, in the sense that you might give a better talk (or so you think), which will convince more people, which might make the club more resilient. But then I would say you should have coached the younger student better.
It seems like we might be promoting squeaky wheels: you get paid if you ask for money (I think?), you get status if you take it, etc. This could both create bad incentives and be incredibly frustrating for shyer folks.
No one has ever reached out to me for advice on starting a club, or asked how my experience went, or asked whether I would be interested in meta work. I have never received a cent for any of my community-building work. If I were actually paid what I believe my time is worth (which is probably still much, much less than the actual value of my time to EA while I was organizing), I would almost certainly be owed (tens of?) thousands of dollars. I definitely feel that my sense of this as a community where you didn't need to market yourself to get to the top was not as accurate as I originally envisioned. At the same time, I don't regret starting the club at all. It is probably one of the few things I have done in my life that I feel proud of.
What should we do? Can we federalize clubs? Should we have more data analysts, researchers, and CEA people work on this? Would we actually audit a college club? Should we pay organizers more? ← but wouldn't this increase "vulturism"?
The core realization should be that EA needs an institution (or institutions) that doesn't exist yet. Without more complex institutions, we are basically being culty and trusting each other on a variety of dimensions. I hope the trust remains, but why not build resiliency (unless, of course, you believe gatekeeping is the solution)?
I know I didn't precisely answer your questions and mostly just rambled. Let me know if you have questions, and obviously, if I said stuff that sounds wrong, disagree. Even though this post is long, it's lacking a lot of nuance I would like to include, but I felt it was best to post it like this.
Thanks for the post. It's good to see some investment in the risk-loving side of things. However, I am a little disappointed that none of these charities are longtermist or meta. I'm not super hardline, but there is a soft consensus in the EA community that these things are important. Does anyone know why Charity Entrepreneurship doesn't prioritize them? I could see the argument that it is hard to run a longtermist charity, though I haven't thought much about it. Is there another incubator that focuses on these areas? Otherwise it seems like a really promising area to push for.
As a side note, I agree with Misha that decentralized mental-health interventions could be cause-y, and I would love to see more done in this area. Anyone looking in this direction might want to check out https://www.stickk.com/, which tries to apply behavioral research to help people "stick" to their goals.