I think the EA forums have an important role in being a platform where EA leaders can make oaths of fealty to the appropriate Open Phil staff member in their cause area.
(Important Reminder!—have you remembered to upvote this week’s Cold Takes post?)
Given that so much of the decision making process for these causes is private, what are we actually debating when we talk about them on the EA Forum?
More seriously, can you elaborate on the nuances of what you think makes the new (“HOP”) cause areas private compared to any other cause area?
Taking the opposing perspective to your post, and using examples from the “Global Health and Wellbeing” space:
I know of one cause area that is critical and well known, yet that no one really discusses here, for compelling reasons, even though we would really want to.
Other cause areas have powerful, historic, non-EA-aligned wings, which makes discussion difficult. I would argue AI and x-risk are freer of these spectres and enjoy more open discussion.
Also, I think it’s very likely that “GiveWell”-style interventions, whose epistemics might now be viewed as prosaic, could have been equally difficult to discuss or promote. Even if something seems well rated and well evidenced now, that doesn’t mean the marginal decision to fund it wasn’t opaque and difficult at the time it was made (e.g. concerns about scaling up RCT results). Whole subdomains of development have risen and fallen with an epistemology that can seem impenetrable and often socially or politically motivated.
If you think that intangible qualities such as technical depth make “HOP” causes impenetrable, the same criticism applies to the large Open Phil spending on specific scientific bets.
It’s unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.
To give some more specific examples, it’s unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.
(I only just came across these orgs through your post. They’re really interesting!)
I think it’s reasonable to pattern match Qualia Research Institute to orgs like OpenAI or MIRI.
MIRI has an active LessWrong-style community, and its work is regularly discussed both on LessWrong and here.
As you know, AI-interested folks also tend to be associated with the applied rationality community, which is exceptionally open to debate and direct reasoning.
New Science has a website (which is a little polemical) with perspectives and critiques that I think are fairly easy to understand:
With quotes like “And let’s make science advance one young scientist at a time, not one funeral at a time”, it seems to hold something like a constructivist view of science (?).
The explicitness of this worldview and mission seems promising for open discussion, both of the org itself and of other meta issues.
Also, New Science explicitly models itself on Cold Spring Harbor. I have a small sample size, but the one scientist I know who went there is welcoming of skepticism about science and is open to discussion.
Using openness to discussion as a proxy for your question, the above suggests these orgs would be easy to talk about.
There are objections that aren’t addressed here, but these can be discussed in other comments.
Honestly, I think your post is great; this comment only touches on one facet, plays a bit of devil’s advocate, and is only moderate in quality.
the same criticism applies to the large Open Phil spending on specific scientific bets.
Sorry, just to clarify again (and on the topic of swearing fealty), I don’t mean any of this as a criticism of Open Phil. I agree enthusiastically with the hits-based giving point, and generally think it’s good for at least some percentage of philanthropy to be carried out without the expectation of full transparency and GiveWell-level rigor.
It’s unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.
I think that’s exactly what I’m saying. It’s unclear to me whether the EA Forum, and public discussion more generally, plays a role in this style of grant-making. If the answer is simply “no”, that’s okay too, but it would be helpful to hear.
these orgs would be easy to talk about.
I agree that there are avenues for discussion. But it’s not totally clear to me which of these are both useful and appropriate. For example, I could write a post on whether or not the constructivist view of science is correct (FWIW I don’t believe Alexey actually holds this view), but it’s not clear that the discussion would have any bearing on the grant-worthiness of New Science.
Again, maybe EA Forum is simply not a place to discuss the grant-worthiness of HOP-style causes, but the recent discussion of Charter Cities made me think otherwise.
Again, maybe EA Forum is simply not a place to discuss the grant-worthiness of HOP-style causes, but the recent discussion of Charter Cities made me think otherwise.
I don’t think this is true or even can be true, as long as we value general discussion.
I think I now have a better sense of your question, and I may write up a more direct answer from my perspective.
Honestly, I’m worried my writeup would be long-winded or wrong, so I’ll wait in case someone else writes something better first.
Also, if it’s low effort/time on your end, do you have any links to good writeups on the “constructivist view of science”?
I’m worried that I don’t have a formal education in this area and will get owned in a discussion about it, worst case while deep in some public conversation that relies on it.