Currently doing local AI safety movement building in Australia and NZ, and assisting with the Alignment 201 Beta.
I guess this is why I asked what you meant.
Publishing What We Owe the Future was an intentional decision, but there’s a sense in which people read what’s written and make up their own minds.
“Oh, but the community may shift towards it in the future”—I guess some of these shifts are pretty predictable in advance, but that’s less important than the point I was making about maintaining option value, especially for options that are looking increasingly high-value.
I don’t know what you mean by intentional or not.
But my guess is that the community will shift in a more longtermist direction after more people have had time to digest What We Owe the Future.
I’m in favour of direct AI safety movement building too, but the point still remains that the EA community is a vital talent pipeline for cause areas that are more talent-dependent. And given the increasing prominence of these cause areas, it seems like it would be a mistake to optimise for the other cause, at least when it’s looking highly plausible that the community may shift even further in the longtermist/x-risk direction over the next few years.
I’d suggest that we need multiple paths for drawing talent, and general EA community building has been surprisingly successful so far.
Maybe I should have said global health and development, rather than near-termism.
I think it’s interesting to explore far-out ideas, and I suppose it might make sense from the perspective of someone focused on near-termism.
However, as someone more focused on AI safety, one of the cause areas that is more talent-dependent and less immediately legible, this seems like it would be a mistake.
If the community is uncertain between the causes, I suggest that it probably wouldn’t be a good idea to dismantle the community now, at least if we think we might obtain more clarity over the next few years.
As an AI Safety person, I tend to believe that the community should move more towards existential risk (not claiming AI Safety maximalism). On the other hand, even if this is an individual’s top priority, a diversification strategy may be optimal for them if AI safety is too abstract to fully engage their motivation.
In fact, I was considering doing some unrelated, non-EA volunteering so that I would have some more concrete impact as well, but I decided that I didn’t actually have time. I may end up doing this at some point, but I’m all-in with AI Safety for now.
Hmm… interesting. Are you sure you weren’t referred through a regranter?
FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?
Did you receive the grant directly or as part of their regranting program?
You may also want to consider the situation where an organisation doesn’t want to pay employees with funds that could potentially be clawed back, or that could be seen as morally tainted (depending on what information we find out).
A trick: You can say things like “I’m going to tell you an oversimplified story, but we can dig into the details later if you want” or even “Here’s one perspective on what EA is, which I hope will provide a good entry point. My perspective is different, but I can tell you about it later”.
The problem is that a lot of money was given out, so only a very few people could do this.
I guess the way I see it, the more intellectually solid a movement is, the more effort it takes to produce a solid criticism. So if a movement is intellectually solid, a lot of the criticism on social media will end up being very bad, because social media pushes towards lower effort than other formats such as the EA Forum.
(Another way of putting this: If you’re going to go to all the effort of making a proper critique, why post it on Facebook vs the EA Forum, where you’ll get deeper engagement?)
You often hear of the abstract need for criticism and “red-teaming” but not much about the actual criticisms.
I’m confused about this. A lot of criticisms and red-teaming occurred during the recent competition. Maybe you could clarify what you meant?
Really sorry to hear about your experiences. I hope that if you attend further events, you have better experiences.
I think it is valid for EA to think carefully about its target audience and whether we want to target “intellectual elites” or a broader group, but that doesn’t mean that people should be rude at events, and I’m sad whenever I hear people discuss this in an unproductive way.
I don’t know. I wouldn’t suggest choosing cause areas based on FTX collapsing, but I’d think more carefully about mega-projects given the potential for the funding situation to change substantially.
Hope you’re feeling okay Dony.