To be clear, I might not be what I seem in more than one way. I also don’t want to dig into this too far, but I’m not sure some people appreciate the level of “neartermist” sentiment aligned with the OP.
Well, no, your comment isn’t true.
If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infrastructure and movement building.
I think we need to zoom out here; aren't you directly making the case for the OP?
Zooming out, there’s a 5-10 page essay to be written here about the feelings of neartermists, who over the last 2+ years have not been thrilled to see talent move over to longtermist causes and potential donors intercepted.
But just over the past few months, the neartermists, who have never heard of Michael Vassar and would never associate with him, who have no interest in discussing HBD, and who have no claim to FTX money, have been seared. (Also the “castle”, the Vice articles about neo-Nazis, etc., but I can’t figure out how to work those in here; this is already too long.) This is made worse by the quality and level of the discussion, such as giant, unnecessary threads about weirdness and personal lifestyles. Neartermists have no interest in judging or litigating most of these issues. They “can’t count that low”.
If the neartermists believe they are the ones exposed, while you’re explaining that the longtermists can sort of evade all this by working in well-resourced orgs that aren’t labeled EA, what is actually being said here?
I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months
Increasingly? Which well-esteemed new longtermist org is explicitly EA-labelled? I can’t easily think of any. There is literally nothing besides CEA in the grants?
https://www.openphilanthropy.org/grants/?q=&focus-area%5B%5D=longtermism
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does
But this is exactly what the OP is saying and wants to solve. Unless the strategy here is to implicitly hold the neartermists “hostage”, then yes, exactly: the neartermists want similar resources to do these things, and to escape the situation you’ve laid out so succinctly.
But obviously we should make these decisions to maximize impact
Look, your Redwood Research and Paul’s ARC are two strong object-level safety orgs[1]. How many other object-level orgs are doing good work in AI safety?
Apart from you, orgs like LessWrong and MIRI are marginal, even within their own worldview. What has MIRI’s output been for the last three years? These tightly knit groups couldn’t stop parts of their own community from moving into two lush, for-profit organizations associated with EA. One or more of those organizations is involved in, if it did not literally spawn, a new LLM arms race.
Who is John Carmack? What did he decide to do after reading Superintelligence?
For a host of reasons, we both know the T (tractability) in ITN could be a big topic for debate.
to maximize how much we enjoy our social scenes.)
Is the insinuation that people are choosing neartermist areas based on their social circles?
But isn’t the truth more or less the opposite? I think if we studied the social graph and lifestyle choices, we would find that most neartermists have looser affiliations with EA.
BTW, I don’t find it hard to explain AI safety to my circle (and most of the neartermist counterarguments are wrong, so I can usually set them aside). There’s also a ton of shiny objects that AI safety is near or touches on (LLMs, Stable Diffusion, etc.) that seem impressive to “normies”.
IMO the biosecurity orgs could fit naturally into this purported new movement.
Think about why this comment was downvoted. We can’t even see it.
There should be a plus-sign next to it to expand the comment.
Only one person downvoted it (the OP gets an upvote on their own comment by default, and hovering over the −5 shows only 2 votes, so a high-karma user probably strong-downvoted it; rough arithmetic in the sketch below)
Why create an account just to post this? You could have used a pre-existing account to post a comment to this effect, or, if you are the OP, just edit your post to ask for more elaboration.
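For what it’s worth, here is the rough arithmetic behind that inference, as a minimal sketch. The vote weights (a +1 or +2 author self-upvote, strong votes from high-karma users worth more than ±1) are my assumptions about how the Forum’s voting works, not something I’ve checked against the codebase.

```python
# Minimal sketch of the karma inference above.
# ASSUMPTIONS (not checked): the comment's author self-upvotes by default
# (+1, or +2 at higher karma), and strong votes from high-karma users
# can be worth considerably more than +/-1.

total_score = -5   # the score shown on the comment
num_votes = 2      # the vote count shown when hovering over the score

for author_self_vote in (1, 2):
    # With only two votes, the remaining vote accounts for everything
    # the self-upvote doesn't.
    other_vote = total_score - author_self_vote
    print(f"If the self-upvote is +{author_self_vote}, the other vote is {other_vote}")

# The other vote comes out at -6 or -7, far beyond an ordinary -1/-2 downvote,
# which is why it looks like a strong downvote from a relatively high-karma user.
```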
Mainly for point 3, I’ve downvoted your post.