Interesting idea. I think this could be useful in cases where people know that they don’t have the credibility to receive a direct grant.
Even without a singularity, assuming no unexpected power upsets seems a bit implausible.
Yeah, it seems to require quite distinct skills. That said, they seem to be encouraging collaboration.
Yeah, it seems strange to be forced to adopt a scenario where the development of AGI doesn’t create some kind of surprising upset in terms of power.
I suppose a contest that included a singularity might seem too far out for most people, and maybe this is the best we can do in terms of persuading people to engage with these ideas. (There’s definitely a risk that people over-update on these kinds of scenarios, but it’s not obvious that this will be a huge problem.)
I guess that makes sense.
I suppose organising such a bootcamp is probably one of the most useful things that national-level organisers could be doing.
Intro fellowship facilitator → run a social → committee member
Seems like you could let someone run a social essentially straight off, as it’s pretty hard to mess up a social.
That said, I agree with your core point that it’s important to provide people with exciting opportunities when they’re most enthusiastic:
This takes time and there’s dropout at every stage. The observation is that organisers are usually the most motivated after a retreat/conference/...
That said, your ideas for sessions all sound really useful:
Some ideas for sessions: how to do 1-1s, facilitation training, mental health, pitches for EA (short and long), people management & project delegation, personal productivity, effective planning, movement-building strategy and strategic prioritisation for groups, creating positive epistemic norms, “agenticness” (as explained in my post), how to trade money for time
I guess my main skepticism is the following:
This seems doubly useful since other organisers seldom have time to skill up new organisers
Running a retreat seems like a lot of effort and would likely involve multiple people, so I don’t see you coming out ahead here. That said, I expect you’d end up with more highly trained organisers at the end, both because of the increased training time for each organiser and because of the peer-to-peer exchange of ideas.
“I personally am seriously thinking about running a “bootcamp” for new organisers, fellowship facilitators, etc. as a direct result of the retreat. I’ve spoken to Jessica McCurdy from CEA about this and there’s a ~50% chance I’ll actually do it”
I’d be curious to hear more about this idea. What’s the plan?
I think that EAs generally haven’t pursued media outreach due to considerations such as those covered in this post: What to know before talking with journalists about EA. Their worries seem to mostly relate to journalists misunderstanding or misrepresenting what was said, unfavourable quotes, or stories being fitted into a narrative.
I suppose Op-Eds manage to avoid most of these problems and add a lot of credibility to the field. I guess the main potential downside I can see is that we wouldn’t want existential risk to become a buzzword that people start adding to all kinds of proposals that have nothing to do with x-risk. However, it seems unlikely that just a couple of articles would have this kind of effect. So overall, I think having at least a small amount of this kind of work is important as it does improve the credibility of the field.
I find that surprising. Any thoughts on why that might be? Do you think that groups don’t know that they can apply or that most groups aren’t really doing much in the way of activities that would benefit from funding?
I think the forum prize should have focused on EAs not at orgs, because EAs at orgs are already sufficiently incentivised to do good work, and when the prizes are dominated by people already at orgs, this dilutes the ability of the forum prizes to highlight and encourage new talent.
Have you seen this post?
Emerson Spartz recently ran a similar contest where he was paying people $1000 to come up with bounties.
Here are a few ideas I shared in this thread.
I really like the second idea. The first one isn’t bad either, but it’s a bit of a letdown if you don’t actually have the $20,000.
I guess if you start setting the standards that high, maybe that would lead to far too many jobs becoming a priority path.
Tbh, I don’t have a huge amount of desire to produce more content on this topic beyond this post.
I’d be curious to know the kind of salary those orgs were offering, as, if it were significantly below market rate, that might explain the discrepancy. Alternatively, maybe well-known and long-established orgs are flooded with applications, while newer ones have slimmer pickings?
So I think it’s likely that EA efforts with cost-effectiveness comparable to or higher than GiveWell top charities will continue to be funded going forwards, rather than “have the rug pulled out from underneath them.”
Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it’s easy to make the mistake of saying “We’ll never get to point X” and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we’ll end up?
So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (with relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community-building, vetting, etc. capacity on longtermism projects.
This is an excellent point and now that you’ve explained this line of reasoning, I agree.
I guess it’s not immediately clear to me to what extent my proposals would shift limited community-building and vetting capacity away from long-termist projects. If, for example, Giving What We Can had additional money, it’s possible, but not obvious to me, that they would hire someone who would otherwise go to work at a long-termist organisation.
I guess it just seems to me that even though there are real human capital and vetting bottlenecks, you can work around them to a certain extent if you’re willing to throw money at the issue. There has to be something that’s the equivalent of GiveDirectly for long-termism.
For example, Clearer Thinking runs its own studies for almost every article it writes
Wouldn’t that be extremely expensive?
I’m very keen to see how this project goes.
I think it’s certainly going to be something of a challenge given how many high-quality resources are out there.
• If this project starts becoming influential within EA, then it would be worth paying experts to comment on and review the articles
• It might be worth running a survey to see which alternative resources EAs are most likely to use instead, and then use that as the bar you need to exceed
Hey Buck, I’m curious: you linked to the EAIF form at the bottom, but the latest payout report didn’t include any payouts to LessWrong or ASX groups. Perhaps you could clarify?
Oh, here’s one thing that I missed:
Funds are available to fund non-EA-branded groups