AI Safety
Dangers from AI are real, and moderate timelines are real
AI alignment is a serious issue; AIs can be unaligned and dominate humans, for the reasons most EA AI safety people give
One major objection is powerful: the scenarios with the most severe AI danger tend to be the ones where alignment work is least tractable
Some vehement neartermists actually believe in AI risk but don’t engage with it because of this tractability concern
This objection is addressed by this argument, which seems new and should prompt an update among all neartermist EAs
Another major objection to AI safety concerns, one that seems very poorly addressed, is AI competence in the real world. This is touched on here and here.
This seems important, but relying on a guess that AGI can’t navigate the world is bad risk management
Several lock-in scenarios fully justify neartermist work.
Some considerations in AI safety may even heavily favor neartermist work (if AI alignment tractability is low, lock-in is likely, and lock-in could occur fairly soon)
There is no substance behind narratives built on “nanotech” or an “intelligence explosion in hours”
These are fine as theories/considerations/speculations, but their central place in the discourse is very hard to justify
They expose the field to criticism and dismissal by any number of scientists (skeptics and hostile critics outnumber senior EA AI safety people, which is bad, and recent trends are unpromising)
This slows progress. It’s really bad that these suboptimal viewpoints have persisted for so long, and it damages the rest of EA
It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top-200 schools would be formidable talent)
This requires relationship building and institutional knowledge (relationships with field leaders, departments, and established professors in applied math, computer science, math, economics, and other fields)
Taste and presentation matter a great deal
The current choice of math and theories around AI safety and LessWrong is probably quaint and basically academic poison
For example, acausal work is probably pretty bad
(Some chance it is actually good; tastes can be weird)
Fixation on the current internal culture is really bad for general recruitment
The talent pool may be 5x to 50x greater with an effective program
A major implementation of AI safety is very highly funded new EA orgs, and this is close to an existential issue for some parts of EA
Note that (not yet stated) critiques of these organizations, like “spending EA money” or “conflict of interest”, aren’t valid
Such critiques are even counterproductive: for example, closely knit EA leaders are the best available talent, and these orgs can actually return profit to EA (which, however, produces another issue)
It’s fair to speculate that they will exhibit two key traits/activities, probably not detailed in their usually limited public communications:
They will often attempt to produce AGI outright, or explore conditions related to it
They will always attempt to produce profit
(These are generally prosocial)
Because these orgs are principled, they will hire EAs for positions whenever possible, with extremely high compensation, agency, and culture
This has major effects on all EA orgs
A concern is that they will achieve neither AI safety nor AGI, and EA gets caught up creating rather prosaic tech companies
This could result in a bad ecosystem and bad environment (think of a new season of The Wire, where ossified patterns of EA jargon cover up fairly regular shenanigans)
So things just dilute down to profit-seeking tech companies. This seems bad:
In one case, an AI safety person I spoke with brought up donations from their org, casually and by chance. The donation amount was large compared to EA grants.
It’s problematic if ancillary donations by EA orgs are larger than those of EA grantmakers.
The “constant job interview” culture of some EA events and interactions will worsen
Leadership talent from one cause area may gravitate to middle management in another, which would be very bad
All these effects could instead be positive, and these orgs and cultures could strengthen EA.
I think this can be addressed by monitoring of talent flows, funding, and new organizations
E.g. a dedicated FTE, with a multi-year endowment, monitors talent and hiring activity in EA
I think this person should be friendly (pro AI safety), not a critic
They might release reports showing how well the orgs perform and how happy employees are
This person can monitor and tabulate grants as well, which seems useful.
Sort of a census taker or statistician
EA Common Application seems like a good idea
I think a common application seems good, and to my knowledge no one is working on a very high-end, institutional version
See something written up here
EA forum investment seems robustly good
This is one example (“very high quality focus posts”)
This content empowers the moderator to explore any relevant idea, and causes thousands of people to learn and update on key EA thought and develop object-level views of the landscape. They can stay grounded.
This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design work