Does the EA Hotel provide more benefit than animal welfare efforts? Polio eradication? Global warming? Political campaigns? Poverty alleviation?
The EA Hotel is a meta-level project, as opposed to the more object-level efforts you refer to, so it’s hard to make a direct comparison. Perhaps it’s best to think of the Hotel as a multiplier on efforts in the EA space in general. We are enabling people to study and research topics relevant to EA, and also to start new projects and collaborations. Ultimately we hope that this will lead to significant pay-offs in terms of object-level value down the line (although in many cases this could take a few years, given that most of the people we host are in the early stages of their careers).
That is understandable; however, I personally find it unsatisfying: I cannot commit to funding on such indefinite and vague terms. You (and others) clearly think otherwise, but hopefully you can understand this contrary perspective.
Even if “the EA Hotel is a meta-level project,” which to be clear I can certainly understand, there should still be some estimate of the anticipated multiplier, i.e. a target value with a +/- margin around it. From what I can see on reviewing current and past guests’ projects, I am not confident that there will be a high return of ethical utility on resource inputs.
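To make concrete the kind of estimate I have in mind, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder of my own, not an actual EA Hotel figure; it only illustrates the form of answer I am asking for:

```python
# Hypothetical multiplier estimate for a meta-level project.
# All inputs are made-up placeholders, not actual EA Hotel figures.

cost_per_guest_year = 7_000   # GBP: assumed all-in hosting cost per guest-year
value_per_guest_year = {      # GBP-equivalent object-level value enabled:
    "low": 5_000,             #   pessimistic assumption
    "mid": 15_000,            #   central assumption
    "high": 40_000,           #   optimistic assumption
}

for label, value in value_per_guest_year.items():
    multiplier = value / cost_per_guest_year
    print(f"{label}: multiplier = {multiplier:.1f}x")
```

A funder could then ask whether the central multiplier clears the bar set by competing opportunities, and how wide the low-to-high interval is.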
Unless it can be demonstrated otherwise, my default inclination (similar to Bill Gates’ strategy) is that projects in developing regions are generally (but certainly not always) significantly higher-yield than those in developed regions; this is not unlike the logic that animal welfare efforts are higher-yield than human-focused efforts. The EA Hotel is in a developed society and seems focused on fields that already attract some of the highest levels of funding, e.g. artificial intelligence (AI) research. On that basis, allocating to this project seems perhaps incorrect, or even unethical. This is not conclusive, but clear evidence to the contrary has not been presented.
Hopefully you understand that what I am describing above is not meant as a personal insult by any means, nor is it the result of rash emotion; it is the result of rational consideration along ethical lines.
[Replying to above thread] One reason I asked you to plug some numbers in is that these estimates depend heavily on your priors for the various parameters. We hope to provide some of our own numerical estimates soon, but I don’t think too much weight should be put on them (Halffull makes a good point about measurability above). Also consider that our priors may be biased relative to yours.
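To illustrate how much the bottom line swings on priors, here is a minimal Monte Carlo sketch; the two priors below are arbitrary assumptions chosen for illustration, not our actual estimates:

```python
# Minimal Monte Carlo sketch of how an EV estimate depends on priors.
# Both priors below are arbitrary illustrative assumptions.
import random

random.seed(0)
N = 100_000
COST = 7_000  # assumed GBP cost per guest-year (placeholder)

def multiplier_summary(mu: float, sigma: float):
    """Sample (value per guest-year) / cost under a lognormal prior on value."""
    samples = sorted(random.lognormvariate(mu, sigma) / COST for _ in range(N))
    return samples[N // 2], samples[int(N * 0.05)], samples[int(N * 0.95)]

# A sceptical prior vs. an optimistic prior over value generated per guest-year:
for name, mu in [("sceptical", 8.5), ("optimistic", 10.0)]:
    median, p5, p95 = multiplier_summary(mu, sigma=1.0)
    print(f"{name}: median {median:.2f}x, 90% interval {p5:.2f}x to {p95:.2f}x")
```

The same model gives a multiplier below 1x or well above it depending purely on which prior is fed in, which is exactly why we hesitate to put much weight on any single set of numbers.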
I’ll also say that one reason Part 2 of the EV estimate has been on the back burner for so long is that Part 1 didn’t get a very good reception (i.e. people didn’t see much value in it). You are the first person to ask about Part 2!
[Replying to this thread]
projects in developing regions are generally (but certainly not always) significantly higher-yield than those in developed regions
I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel; it is to help them help other people (and animals). AI safety research, and X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to aim for. However, as I said before, it’s hard to directly compare this kind of work (and meta-level work generally) with shovel-ready object-level interventions like distributing mosquito nets in the developing world.
The fact that others are not interested in such due diligence is itself a separate concern: it suggests that support for the EA Hotel is perhaps not as rigorous as it should be. This is a complaint not against you or the EA Hotel, however, but against your supporters. Such diligence seems to me a basic requirement, particularly within the EA movement.
I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel; it is to help them help other people (and animals). AI safety research, and X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to aim for. However, as I said before, it’s hard to directly compare this kind of work (and meta-level work generally) with shovel-ready object-level interventions like distributing mosquito nets in the developing world.
No, I recognize, understand, and appreciate this point fully; but I fundamentally do not agree, and that is why I cannot in good conscience support this project at present. Because this family of fields (AI research in particular) is so valuable and potentially profitable to society, it has already attracted significant funding from entrenched institutions, e.g. Google’s DeepMind. In general there is a point of diminishing returns on incremental investment, and a richly-funded area like this one is exactly where that possibility arises. Unless there is evidence, or at least logic, to the contrary, there is no reason to reject this concern or assume otherwise.
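To make the diminishing-returns point concrete, here is a toy model; it assumes, purely for illustration, that research output grows logarithmically with a field’s total funding, so the marginal value of an extra pound is roughly inversely proportional to what is already invested:

```python
# Toy diminishing-returns model: if output ~ log(total funding), then the
# gain from an extra grant shrinks as the field's funding grows.
# Funding totals below are hypothetical placeholders.
import math

def marginal_gain(total_funding: float, grant: float) -> float:
    """Increase in log-output from adding `grant` on top of `total_funding`."""
    return math.log(total_funding + grant) - math.log(total_funding)

for name, funding in [("neglected field", 1e6), ("richly funded field", 1e9)]:
    gain = marginal_gain(funding, grant=100_000)
    print(f"{name}: marginal gain from a 100k grant = {gain:.6f}")
```

On this crude model the same grant buys roughly a thousand times more proportional progress in the neglected field; that is the shape of my concern about adding funds to AI research.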
Also, as part of my review, I looked at the profiles of the EA Hotel’s current and past occupants. The primary measurable output appears to be progress in MOOCs and posts on the EA Forum; in my assessment this output is approximately zero (again, this is not an insult at all, it is simply my assessment). It may be that the EA Hotel is in practice serving as a remedial school for researchers who are not competitive enough to attract employment from the richly funded organizations in their fields; such an effort would likely be low-yield. Again, this is not an insult, just a possibility that follows from market-based logic. There are certainly other, more positive possible roles (to which I am very open; otherwise I would not bother continuing this discussion to this depth), but these have not yet been demonstrated.
Re: the measurement-bias response, this is an incomplete answer. It is fine not to have much supporting data while the project is only about a year old, but some data should be being generated; and more importantly, the project should have a charter with an estimate of, or goal for, its anticipated measurable effects, against which data can then be recorded (successfully or unsuccessfully) to show how well the organization is doing. How else will you know whether you are being effective, or successful?
How do you know that the EA Hotel is being effectively altruistic (again, particularly relative to competing efforts), in the context of your claim at the top that it is “the best use of money”?
These issues remain open, in my opinion. Hopefully these critiques will at least provide some food for thought to strengthen the EA Hotel and future endeavors.