The fact that others are not interested in such due diligence is itself a separate concern: it suggests that support for the EA Hotel is perhaps not as rigorous as it should be. However, this is not a criticism of you or the EA Hotel, but of your supporters. Due diligence seems to me a basic requirement, particularly within the EA movement.
I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel, it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to be aiming for. However, as I said before, it’s hard to directly compare this kind of thing (and meta level work), with shovel-ready object level interventions like distributing mosquito nets in the developing world.
No, I recognize, understand, and appreciate this point fully, but I fundamentally disagree; that is why I cannot in good conscience support this project at present. Precisely because this family of fields (AI research in particular) is so valuable and potentially profitable to society, it has already attracted significant funding from entrenched institutions, e.g. Google’s DeepMind. In general, incremental investments eventually reach a point of diminishing returns, and that is a real possibility in an area as richly funded as this one. Absent evidence, or at least an argument, to the contrary, there is no reason to dismiss this concern or assume otherwise.
Also, as part of my review, I looked at the profiles of the EA Hotel’s current and past occupants. The primary measurable output appears to be progress in MOOCs and posts on the EA Forum; in my assessment that output is approximately zero. (Again, this is not an insult; it is simply my assessment.) It may be that the EA Hotel is in practice serving as a remedial school for researchers who are not competitive enough to attract employment from the well-funded organizations in their fields; such an effort would likely be low yield. (Again, not an insult, just a possibility that follows from market-based logic.) There are certainly other, more positive roles the Hotel could be playing, and I am very open to them; otherwise I would not bother continuing this discussion to this thread depth. However, these roles have not yet been demonstrated.
Re: the measurement-bias response, this is an incomplete answer. It is fine not to have much supporting data when the project is only about a year old, but some data should nonetheless be generated. More importantly, the project should have a charter with a goal, or at least an estimate, of the measurable effects it anticipates, against which data can then be recorded (whether the results are favorable or not) to track how well the organization is doing. How else will you know whether you are being effective, or successful?
How do you know that the EA Hotel is being effectively altruistic, particularly relative to competing efforts, in the context of your claim at the top that it is effectively “the best use of money”?
These issues remain open in my opinion. I hope these critiques will at least provide some food for thought to strengthen the EA Hotel and future endeavors.