This doesn’t address the elephant in the room, which is the “quality” of talent. EA has a funding overhang with respect to some implicit “quality line” at which people will be hired. Getting more people who can demonstrate talent above that line (where the placement of each specific line depends heavily on context) lowers the funding overhang, but getting more people who sit below the line doesn’t change anything.
No no, I still believe it’s a great idea. It just needs people to want to do it, and I was just sharing my observation that there don’t seem to be many people who want it enough to offset other things in their lives (everyone is always busy).
Your comment about “selecting for people who don’t find it boring” is a good re-framing, I like it.
I’ve had quite a few people ask me “What’s altruism?” when running university clubs fair stalls for EA Wellington.
I’ve been very keen to run “deep dives” where we do independent research on some topic, with the aim that the group as a whole ends up with significantly more expertise than at the start.
I’ve proposed doing this with my group, but people are disappointingly unreceptive to it, mainly because of the time commitment and “boringness”.
For an overview of most of the current efforts into “epistemic infrastructure”, see the comments on my recent post here https://forum.effectivealtruism.org/posts/qFPQYM4dfRnE8Cwfx/project-a-web-platform-for-crowdsourcing-impact-estimates-of
After a civilizational collapse, though, you might not be able to pay that cost.
Buying coal mines to secure energy production post-global-catastrophe is a much more interesting question.
Seems to me that buying coal, rather than mines, is a better idea in that case.
I’m really hoping we can get some better data on resource allocation and estimated effectiveness to make it clearer when funders or individuals should return to focusing on global poverty etc.
There are a few projects in the works for “EA epistemic infrastructure”.
Ok—this is a good critique of my comment.
I was kind of off-topic and responding to something a bit more general. Since writing my comment I have found someone on the forum summarizing my perspective better.
And, relatedly, regarding funding:
Strong messaging to the effect of “we need talent” gives the impression that there are enough jobs that if you are reasonably skilled, you can get a job.
Strong messaging to the effect of “we need founders”, or “just apply for funding” gives the impression that you will get funding.
In both cases, people can be repeatedly rejected and get extremely disheartened.
Some things that can be done:
Communicate (with real examples?) the level of competence required for success in a job / funding application. Unfortunately “apply but don’t get sad at rejection” is an unrealistic message to send. Go the other way, and try to make people’s self-screening more accurate.
Provide better feedback for rejected applicants.
Provide more opportunities for up-skilling.
Try really, really hard not to filter based on unchangeable parts of people’s background, such as their education (especially the fanciness of their school) and location (and, of course, ethnicity, gender, etc.).
I’ve been meaning to write a post but it’s a big ball of thoughts and I don’t have the right structure for it.
Thanks for this post. I have some disagreements but I want to say that this part in particular is pretty common and is a big problem I have with the EA “big wigs” culture.
I’ve applied for a few jobs in EA over the years. I didn’t get them. This was painful. In one case I was doing 25+ hours a week of freelance work for an org for several months, and it went really well; they put up a job posting with precisely my current duties and invited me to apply, then hired someone else. This was very painful, and strongly discouraged me from applying to full-time EA roles in the future.
This seems to be really common and it’s totally understandable to feel hurt. One of the things that drives us to EA is the desire to contribute, and naturally our self-worth can get very tied up in that. I really hope that orgs can do a lot better on this, because I think this and similar things are pretty harmful.
I disagree and agree with various parts of your post.
But if you’re worrying about alienating people in the periphery, and you’re in the center, it’s worth considering that people in the periphery probably just aren’t paying much attention to how much status you are assigning them.
I agree, this summarizes a vibe I’ve felt before. I think their concern shouldn’t be worrying about “alienating” the edge of EA, but more on positively framing it—making it easy for us to contribute as much as possible.
However, I think some of how you are phrasing this is a bit odd:
I will maintain my pledge with a similar level of pride and joy no matter what the official recommendation to current Yale students in EA student groups is (not to undermine them—they are very important, just approximately irrelevant to my life). I am in Florida. Most of my friends work at restaurants or for the state government. Sure, it feels nice when people across the country want to include me or rank me well, [...]
Recommendations for action in EA are about making an impact, not about status! Why will you maintain your pledge no matter what? Are you certain you are having the biggest impact you can?
Now, if you don’t want to change your life very much, that’s totally fine, but I just think it isn’t about status.
There is plenty of status though. Obviously EAs do get caught up in pursuing status, consciously or not, and also we do tend to think of highly impactful people as high status. However, that’s a bad thing, not a good thing. Status assignment and signalling can only make our reasoning worse, not better.
Anyway I think your post is pretty good for making people think, so thanks.
Yeah, that’s satisfying to me. I think it’s honest and clear. I thought it was worth asking though, in case the framing wasn’t deliberate, or you hadn’t thought about it.
I can see you put a lot of effort into this reply—thanks!
I find the focus on longtermism a bit strange.
Because of short timelines and strong constraints on human capital, people in AI alignment / governance / etc. are of course the priority for this kind of spending.
However, other than that, I don’t see a reason to distinguish much between cause areas. “The Berlin Hub will be a co-living and event space hosting people who are interested in contributing to humanity’s long-term future” Why say “humanity’s long-term future” instead of “improving the world as effectively as possible”? It excludes some people (non-longtermist charity entrepreneurship comes to mind, maybe also various policy causes), and I don’t see the reason for it. “Too many people / not enough capacity” doesn’t hold up; that’s just about choosing admissions, not the framing of the whole facility.
Anyway, do you have a reason for framing this as a long-termist hub in particular? If it’s “that’s who we want to hang out with” I personally think that’s a totally fine reason. If that’s not the reason, then given recent discussion around community norms, I think we should default to something like “including all EAs, then disagreeing on the details/beliefs.” Here that means encouraging everyone to apply, but being honest about why you are accepting people, and being open to accepting some non-longtermist people.
Also, it could be that I’m way off here so please tell me.
My background is as a software developer, with professional web-dev experience. I’m currently doing a research master’s in ML (transformers), and last year I did a project surveying the field of probabilistic programming languages.
Because of my Master’s, I don’t have capacity to work on this right at the moment, but come September this year it’s absolutely a candidate for things I would work on. I do have the skills and will have capacity to work on this on my own if I think that it’s my best option for impact. From September onward, I have two bottlenecks: 1) Funding 2) Finding the best use of my time among many options.
Agree
I’ve been thinking about which sub-parts to tackle, but I think that the project just isn’t very valuable until it has all three of:
A prediction / estimation aggregation tool (a rough sketch of the aggregation step follows this list)
Up-to-date causal models (using a simplified probabilistic programming language)
Very good UX, needed for adoption.
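To make the first item a bit more concrete, here is a minimal Python sketch of one possible aggregation step, assuming each user submits an 80% credible interval for a quantity, each interval is treated as a lognormal, and the estimates are pooled as an equal-weight mixture. All the names and numbers here are hypothetical placeholders, not a design.

```python
import math
import random
import statistics

Z_90 = 1.2816  # z-score of the 90th percentile of a standard normal


def lognormal_from_interval(low, high):
    """Return (mu, sigma) of a lognormal whose 10th/90th percentiles are low/high."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * Z_90)
    return mu, sigma


# Three users' 80% credible intervals for the same quantity (placeholder numbers).
user_intervals = [(5, 40), (10, 100), (2, 30)]


def sample_aggregate():
    # Equal-weight mixture: pick a user at random, then sample from their lognormal.
    low, high = random.choice(user_intervals)
    mu, sigma = lognormal_from_interval(low, high)
    return random.lognormvariate(mu, sigma)


samples = sorted(sample_aggregate() for _ in range(10_000))
print("aggregate median:", statistics.median(samples))
print("aggregate 10th-90th percentile:", samples[1_000], "-", samples[9_000])
```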
It’s a lot of work, yes, but that doesn’t mean it can’t happen. I’m not sure there’s a better way to split it up and still have it be valuable. I think the MVP for this project is a pretty high bar.
Ways to split it up:
Do the probabilistic programming language first. This isn’t really valuable on its own; it’s a research project that no one will use.
Do the prediction aggregation part first. This is Metaculus.
Do the knowledge graph part first. This is maybe a good start—it’s a wiki with better UX? I’m sure someone is scoping this out / doing it.
These things empower each other.
It’s hard, but I’d nevertheless estimate no more than 3 person-years of effort for the following things:
A snappy, good-looking prediction/estimation (web) interface.
A causal model editor with a graph view.
A backend that can update the distributions with Monte Carlo simulations (see the sketch after this list).
Rich-text comments and posts attached to models, bets and “markets” (still need a better name than “markets”)
Iframes for people to embed the UI elsewhere.
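As a rough illustration of the Monte Carlo backend item, here is a Python sketch that forward-samples a small causal model, assuming a model is just an ordered mapping from node names to sampler functions over already-sampled parents. Everything here (the names, the distributions, the example model) is hypothetical and only meant to show the core loop, not a proposed design.

```python
import random
import statistics


def run_monte_carlo(model, n_samples=10_000):
    """Propagate uncertainty through a causal model by forward sampling.

    `model` maps node name -> function taking the values sampled so far and
    returning one sample for that node (parents must be listed before children).
    Returns a per-node summary of the simulated distribution.
    """
    draws = {name: [] for name in model}
    for _ in range(n_samples):
        values = {}
        for name, sampler in model.items():
            values[name] = sampler(values)
            draws[name].append(values[name])
    return {
        name: {
            "median": statistics.median(samples),
            "p10": statistics.quantiles(samples, n=10)[0],
            "p90": statistics.quantiles(samples, n=10)[-1],
        }
        for name, samples in draws.items()
    }


# Tiny example model: cost_effectiveness = impact / cost.
model = {
    "cost": lambda v: random.lognormvariate(10, 0.5),
    "impact": lambda v: random.lognormvariate(12, 1.0),
    "cost_effectiveness": lambda v: v["impact"] / v["cost"],
}

print(run_monte_carlo(model))
```

In a real system the samplers for the input nodes would presumably come from the aggregated user estimates, and the results would be cached and re-run whenever an upstream estimate changes.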
What do you estimate?
I’d love to make an aggregate estimate of how much work this project would take.
Here’s a list of links and people I have found on this topic:
Paal Kvarberg has scoped this idea and got feedback in this document, but I think didn’t pursue the idea.
Ozzie Gooen got funding from the LTFF in 2019 and is now involved in foretold.io, an open-source prediction platform. This looks similar to, but less ambitious than, what I’m trying to do. The notebooks part in particular doesn’t look like it has adoption. He also started Guesstimate, a probabilistic spreadsheet app which can do things like Drake’s equation with probability distributions (a sketch of that kind of calculation follows this list). He also has a LessWrong collection of posts on “Prediction-driven collaborative reasoning systems”.
The people behind Manifold Markets are building “charity prediction markets”, which allow the use of real money in a prediction market if the money will be donated to charity.
The Quantified Uncertainty Research Institute (QURI, pronounced “query”), which has a variety of projects (which you can see on their public Airtable), including foretold.io (mentioned above) and a probabilistic programming language called Squiggle.
QURI’s current active project is Metaforecast, a site that collects and links to predictions and estimates from other platforms such as Metaculus.
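As an aside on the Guesstimate-style workflow mentioned above, here is a minimal Python sketch of what “Drake’s equation with probability distributions” amounts to: each factor becomes a sampler instead of a point estimate, samples are multiplied together, and you read off percentiles. The specific distributions and parameters below are placeholders, not real estimates.

```python
import random
import statistics

# Drake-style product of uncertain factors, each a sampler rather than a
# point estimate. All parameter choices are illustrative placeholders.
factors = {
    "star_formation_rate": lambda: random.lognormvariate(0.7, 0.5),  # stars/year
    "fraction_with_planets": lambda: random.uniform(0.2, 1.0),
    "habitable_per_system": lambda: random.uniform(0.1, 3.0),
    "fraction_with_life": lambda: random.betavariate(1, 3),
    "fraction_intelligent": lambda: random.betavariate(1, 5),
    "fraction_communicating": lambda: random.betavariate(1, 5),
    "civilisation_lifetime": lambda: random.lognormvariate(7, 2),  # years
}


def sample_n_civilisations():
    value = 1.0
    for sampler in factors.values():
        value *= sampler()
    return value


samples = sorted(sample_n_civilisations() for _ in range(10_000))
print("median:", statistics.median(samples))
print("10th-90th percentile:", samples[1_000], "-", samples[9_000])
```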
Agree with all that, yep, and perhaps I should have phrased my comment better.