Instead of talking about me or this ban any more, while you are here, I really want to encourage consideration of some ideas that I wrote up in the following comments:
Global health and poverty should have stories and media that show the work and EA talent involved
The sentiment that “bednets are boring” is common.
This is unnecessary, as the work in these areas is fascinating and involves great skill and unique experiences that can be exciting and motivating.
These stories have educational value to EAs and others.
They can cover skills and work such as convincing stakeholders and governments, and carrying out complex logistical and scientific implementations across many different countries or jurisdictions.
They express skills not currently present or visible in most EA communications.
This helps the communication and presentation of EA
To be clear, this would be something like an EA journalist, continually creating stories about these interventions. 80,000 Hours, but with a different style or approach.
Examples of stories (found in a few seconds)
https://www.nytimes.com/2011/10/09/magazine/taken-by-pirates.html
https://www.nytimes.com/2022/05/20/world/africa/somalia-free-ambulance.html
(These don’t have the 80,000 Hours sort of long-form, in-depth content, or cover perspectives from the founders, which seems valuable.)
Animal welfare
There is a lack of forum discussion on effective animal welfare
This could be improved by the presence of people from the main, larger EA animal welfare orgs
Welfarism isn’t communicated well.
Welfarism starts from the observation that suffering is distributed enormously unequally among farmed animals, with some experiencing very bad lives
It can be very effective to alter this and reduce suffering, compared to focusing on removing all animal products at once
This idea is well understood and agreed upon by animal welfare EAs
While welfarism may need critique (which it will withstand, as it is as substantive as impartialism), its omission distorts and wastes thinking, in the same way the omission of impartialism would
Anthropomorphism is common (discussions contain emotionally salient points that differ from what fish and land-animal welfare experts focus on)
Reasoning about prolonged, agonizing experiences is absent (it’s understandably very difficult), yet these experiences are probably the main source of suffering.
Patterns of communication in wild animal welfare and other areas aren’t ideal.
It should be pointed out that this work involves important foundational background research. Addressing just the relevant animals in human-affected environments could be enormously valuable.
In conversations that are difficult or contentious with otherwise altruistic people, it might be useful to be aware of the underlying sentiment that people feel pressured or feel their morality is being challenged.
Moderation of views and exploration is good, as is pointing out one’s personal history in more conventional animal advocacy and other altruistic work.
Sometimes it may be useful to avoid heavy use of jargon, or applied math that might be seen as undue or overbearing.
A consistent set of content would help (web pages seem to be a good format).
Showing upcoming work in wild animal welfare would be good, such as explaining the foundational scientific work
Weighting suffering by neuron count is not scientific—resolving this might be EA cause X
EAs often weight by neuron count as a way to estimate suffering. This has no basis in science. There are reasons (unfortunately not settled or concrete) to think smaller animals (mammals and birds) can have levels of pain or suffering similar to humans.
To calibrate: I think most or all animal welfare EAs, as well as many welfare scientists, would agree that simple neuron-count weighting is primitive or wrong
Weighting by neuron count has been necessary because it’s very difficult to deal with the consequences of not weighting
Weighting by neuron counts is almost codified; its use turns up casually, probably because the consequences of omitting it feel impractical (emotionally abhorrent)
Because the issue is blocked for these unprincipled reasons, this could probably be “cause X”
The alleviation of suffering may be tremendously greater if we remove this artificial and possibly false modifier, and take action with consideration of the true experiences of the sentient beings involved (a rough numeric sketch follows at the end of this section).
The considerations about communication and overburdening people apply, and a conservative approach would be good
Maybe driving this issue starting from prosaic, well known animals is a useful tactic
Many new institutions should be built in EA animal welfare, an area that has languished from lack of attention.
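As a rough, purely illustrative sketch of why the weighting choice matters so much: the neuron counts below are commonly cited approximate figures, and the alternative “flat” welfare weight is a made-up placeholder, not a research result.

```python
# Illustrative only: how much the implied moral weight of a chicken's suffering
# shifts if simple neuron-count weighting is replaced by a flatter weight.

HUMAN_NEURONS = 86e9       # ~86 billion neurons, a commonly cited figure
CHICKEN_NEURONS = 0.22e9   # ~220 million neurons, a commonly cited figure

neuron_count_weight = CHICKEN_NEURONS / HUMAN_NEURONS  # ~0.0026
placeholder_flat_weight = 0.3                          # hypothetical placeholder, NOT a research result

print(f"Neuron-count weight: {neuron_count_weight:.4f}")
print(f"Placeholder weight:  {placeholder_flat_weight}")
print(f"Difference:          {placeholder_flat_weight / neuron_count_weight:.0f}x")
# Under these placeholder numbers the implied weight changes by roughly two
# orders of magnitude, which is why the choice of weighting scheme can dominate
# prioritization arithmetic between farmed-animal and human-focused work.
```

The point is only that the ratio between the two schemes, not any particular number here, is what drives the conclusion.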
AI Safety
Dangers from AI are real, and moderate timelines are real
AI alignment is a serious issue; AIs can be unaligned and dominate humans, for the reasons most EA AI safety people give
One major objection, that severe AI danger correlates highly with intractability, is powerful
Some vehement neartermists actually believe in AI risk but don’t engage because of tractability
This objection is addressed by this argument, which seems newer and should be an update for all neartermist EAs
Another major objection to AI safety concerns, one that seems very poorly addressed, is AI competence in the real world. This is touched on here and here.
This seems important, but relying on a guess that AGI can’t navigate the world is bad risk management
Several lock-in scenarios fully justify neartermist work.
Some considerations in AI safety may even heavily favor neartermist work (if AI alignment tractability is low, lock-in is likely, and this can occur fairly soon)
There is no substance behind narratives based on “nanotech” or an “intelligence explosion in hours”
These are good as theories/considerations/speculations, but their central place is very hard to justify
They expose the field to criticism and dismissal by any number of scientists (skeptics and hostile critics outnumber senior EA AI safety people, which is bad, and recent trends are unpromising)
This slows progress. It’s really bad that these suboptimal viewpoints have persisted for so long, and this damages the rest of EA
It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top-200 schools would be formidable talent)
This requires relationship building and institutional knowledge (relationships with field leaders, departments, and established professors in applied math/computer science/math/economics/other fields)
Taste and presentation are a big deal
The current choice of math and theories around AI safety or LessWrong is probably quaint and basically academic poison
For example, acausal work is probably pretty bad
(There is some chance they are actually good; tastes can be weird)
Fixation on current internal culture is really bad for general recruitment
The talent pool may be 5x to 50x greater with an effective program
A major implementation of AI safety is very highly funded new EA orgs, and this is close to an existential issue for some parts of EA
Note that (not yet stated) critiques of these organizations, like “spending EA money” or “conflict of interest”, aren’t valid
Such critiques are even counterproductive: for example, closely knit EA leaders are the best talent, and these orgs can actually return profit to EA (which, however, produces another issue)
It’s fair to speculate that they will have two key traits/activities, probably not detailed in their usually limited public communications:
They will often attempt to produce AGI outright, or to find/explore conditions related to it
They will always attempt to produce profit
(These are generally prosocial)
Because these orgs are principled, they will hire EAs for positions whenever possible, with compensation, agency, and culture that are extremely attractive
This has major effects on all EA orgs
A concern is that they will achieve neither AI safety nor AGI, and the situation becomes one where EA gets caught up creating rather prosaic tech companies
This could result in a bad ecosystem and bad environment (think of a new season of The Wire, where ossified patterns of EA jargon cover up pretty regular shenanigans)
So things just dilute down to profit-seeking tech companies. This seems bad:
In one case, an AI safety person I spoke to brought up donations from their org, casually and by chance. The donation amount was large compared to EA grants.
It’s problematic if ancillary donations by EA orgs are larger than EA grantmakers.
The “constant job interview” culture of some EA events and interactions will be made worse
Leadership talent from one cause area may gravitate to middle management in another; this would be very bad
All these effects can actually be positive, and these orgs and cultures can strengthen EA.
I think this can be addressed by monitoring talent flows, funding, and new organizations
E.g. a dedicated FTE, with a multi-year endowment, monitors talent and hiring activity in EA
I think this person should be friendly (pro AI safety), not a critic
They might release reports showing how good the orgs are and how happy employees are
This person can monitor and tabulate grants as well, which seems useful.
Sort of a census taker or statistician
EA Common Application seems like a good idea
I think a common application seems good, and to my knowledge no one is working on a very high-end, institutional version
See something written up here
EA forum investment seems robustly good
This is one example (“very high quality focus posts”)
This content empowers the moderator to explore any relevant idea, and helps thousands of people learn and update on key EA thought and develop object-level views of the landscape. They can stay grounded.
This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design work