Please read the posts linked to on eahotel.org/fundraiser (and as stated in the OP, we have more in the pipeline).
See also the totaliser on that page (it will be updated soon). Total donations (in addition to those made by founder Greg Colbourn) are currently ~£36k from >50 individuals; they have come through various means: the PayPal MoneyPool, GoFundMe, Patreon, and privately.
UPDATE 6th Nov 2019: the fundraiser page has now been updated, and a histogram of donations added: https://eahotel.org/fundraiser/
Yes, I see that that page is more up to date; I was looking at this page, https://donations.vipulnaik.com/donee.php?donee=EA+Hotel#doneeDocumentList, which is linked from eahotel.org. The inconsistency is itself a little concerning.
The question remains on the value or utility of the EA Hotel, though; it looks like this is not expected to be answered until a future update 10. In my opinion this is a mistake: a clear estimate should have been provided in a prospectus prior to the initial launch of the EA Hotel as part of due diligence, and the project should then have been measured against that forecast continuously in its early months.
I am one of the contributors to the Donations List Website (DLW), the site you link to. DLW is not affiliated with the EA Hotel in any way (although Vipul, the maintainer of DLW, made a donation to the EA Hotel). Some reasons for the discrepancy in this case:
As stated in bold letters at the top of the page, “Current data is preliminary and has not been completely vetted and normalized”. I don’t think this is the main reason in this case.
Pulling data into DLW is not automatic, so there is a lag between when the donations are made and when they appear on DLW.
DLW only tracks public donations.
That is understandable; however, when information is presented (i.e. linked to on your homepage), there is an implicit endorsement of it; otherwise it should not be presented. This holds irrespective of whether the source is formally affiliated or not: its mere presence is already an informal affiliation. The simple fact that the EA Hotel does not have a better presentation of information is itself meta-information on the state of the organization and project.
However, this was not really the main point; it was only a “little” concern, as I previously wrote. The more significant concern is that there does not seem to be a ready presentation of the effort’s value (expected or realized), and that this is not expected until update 10. As I wrote previously, this was in my opinion a mistake, as such a presentation should be one of the primary priorities for an EA organization.
Can you give some examples of EA organizations that have done things the “right way” (in your view)?
Thanks for asking, that’s a good question.
It basically comes down to yield, or return on investment (ROI). Utilitarianism and effective altruism are commonly related, and the former involves some quantification of value: a ratio of output per unit of input. One might say that the most ethical stance is to find the optimization that produces the highest return. Whether EA demands such an optimal maximum, or whether suboptimal effectiveness is still ethical, is an interesting but separate discussion.
So in a lot of the animal welfare threads, there is commonly the idea that those interventions produce superior yield in ethical utility, usually because there is simply more biomass, it is much cheaper, etc. Even if I usually don’t agree, there is still a basic quantification that provides a foundation for the ethical claim.
Another example is environmental organizations such as Cool Earth, which give x acres of rainforest preserved and y oxygen production or carbon sequestration per $ donated. That is not exactly utility per se, but it is a good measure that could probably be converted into some units of generic utility.
For the EA Hotel, I am not sure what the yield is for $ given. In order to make a claim that x is the “best use” of resources, this sort of consideration is required and must be clear, IMO.
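To make the due-diligence logic concrete, here is a toy comparison in the spirit of the above. Every intervention name and utility figure is a hypothetical placeholder of mine, not an estimate for any real organization; the point is only that each option must publish some yield figure before a comparison is possible.

```python
# Toy due-diligence comparison. All intervention names and utility
# figures are hypothetical placeholders, not real estimates for any
# organization; only the shape of the comparison matters.

def cost_effectiveness(utility_units, cost_gbp):
    """Yield: generic utility units produced per pound donated."""
    return utility_units / cost_gbp

# Hypothetical options, each publishing some yield figure.
options = {
    "bednet_distribution": cost_effectiveness(500, 1000),      # 0.5 units/£
    "rainforest_preservation": cost_effectiveness(300, 1000),  # 0.3 units/£
    "meta_project": cost_effectiveness(120, 1000),             # 0.12 units/£
}

# The ethical-obligation argument: fund the highest-yield option.
best = max(options, key=options.get)
print(best)  # -> bednet_distribution
```

A project that publishes no yield figure at all simply cannot enter this comparison, which is the gap being pointed out.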
Consider that if allocation of resources for yield is an ethical decision, then asking for clarification is not rude or off-topic; it is simply due diligence. Even if I have the funds to donate to the EA Hotel, if its yield is lower than the utility that can be produced by an alternative (and in fact competing) effort, then it is my ethical obligation to fund the alternative. Is it not?
Perhaps there is still a misunderstanding that my ask is overly aggressive or impolite; however, what is worse is for someone simply not to care and not to engage in the discussion. But from my perspective, the EA Forum is seemingly giving a pass to one of our own. Again, my ask is simply due diligence. For context, at £5.7k/month, I could fund the EA Hotel for over a year. However, does the EA Hotel provide better benefit than animal welfare efforts? Polio? Global warming? Political campaigns? Poverty alleviation?
The answer does not seem clear to me. Without that, it is difficult to proceed in making an ethical decision.
I think there’s a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see, for instance, investing in a startup vs. buying stock in an established business). The very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.
The proper way to evaluate new and emerging projects is to understand the landscape and do a systems-level analysis of the product, process, and team to see if you think the ROI will be high compared to other hard-to-measure projects. This is what I attempted to do with the EA Hotel here: https://www.lesswrong.com/posts/tCHsm5ZyAca8HfJSG/the-case-for-the-ea-hotel
This point is reasonable, and I fully acknowledge that the EA Hotel cannot have much measurable data yet in its ~1 year of existence. However, I don’t think it is a particularly satisfying counter-response.
If the nature of the EA Hotel’s work is fundamentally immeasurable, how can one objectively establish that it is in fact being effectively altruistic? If it is not fundamentally immeasurable, but was not measured when it could have been, then that is likely simple incompetence. Is it not? Either way, it would be impossible to state with evidence that the EA Hotel has good yield.
Further, the idea that the EA Hotel’s work is immeasurable because it is a meta project or has some vague multiplier effects is fundamentally dissatisfying to me. There is a page full of attempted calculations in update 3, so I do not believe the EA Hotel assumes it is immeasurable either, or at least it originally did not. The more likely answer, à la Occam’s Razor, is that insufficient effort has simply gone into the quantification. There are, after all, plenty of other more pressing and practical challenges to be met on a day-to-day basis; and, surprisingly, it does not seem to have been pressed much as a potential issue before (per the other response by Greg_Colbourn).
Even if it is difficult to measure, a project (particularly one which aspires to be effective, or greatly effective, or even the most effective) must, in my opinion, outline some clear goals against which its progress can be benchmarked, so that it can determine its performance and broadcast it clearly. Doing so is simply best practice. This has not been done as far as I can tell; if I am mistaken, please point me to it and I will revise my opinions accordingly.
There are a couple of additional points I would make. Firstly, as an EA Hotel occupant, you are highly likely to be positively biased in its favor, and therefore naturally inclined to calculate generously on its behalf; certainly the article you wrote and linked to is supportive of it. Is this refutable? It is also likely an objective fact that your interests align with the EA Hotel’s, and someone whose interests were less aligned could easily weight the considerations you stated less heavily. You are therefore not an objective, or the best, judge of the EA Hotel’s value, despite (or because of) your first-hand experience.
The other point, which I think applies throughout the EA community, is that it is somewhat elitist to think that the EA way is the best (and perhaps only) way; I believe there is some credibility to this claim, as it was noted in the recent EA survey. For example, is Bill Gates an EA? He does not visit the EA Forum much, AFAIK, and focuses on efforts that differ somewhat from EA’s priorities. Yet I would think that his net positive utility undeniably vastly outweighs the entire EA Forum’s, even if he does not follow EA, or at least does not follow it strictly. Bill Gates does not (to my knowledge) support the EA Hotel, and if he does, it is not at a level that makes it financially sustainable in perpetuity. Should he? And if he does not, is he wrong for not doing so? If you believe that the EA Hotel is the best use of funds (as has been claimed at the top of this thread and is supported in your article), then you would probably conclude that he is wrong, since his inaccurate allocation of resources results in a sub-ideal outcome in terms of ethical utility. This logic is misguided in my opinion.
Contrary to EA puritanism, my view is that there are plenty of EAs beyond EA’s borders: celebrities like Bill Gates and Elon Musk, but also plenty of anonymous people in general. Is the “Chasm” you described real? I am not sure that it is, or at least not so acutely. In the EA Hotel’s particular context, there are already plenty of richly-funded organizations, such as Google’s DeepMind, actively and significantly contributing to the same fields the EA Hotel is interested in (from my understanding). The EA Hotel’s contribution in such an environment is therefore likely to be relatively small rather than a large multiplier (although the opposite is not impossible, and I am open to that possibility). It is possible that contributing to the EA Hotel is actually suboptimal, or even unethical, because its incremental contributions yield diminished returns relative to what could result via alternative avenues. This is not a definite conclusion, but I am noting it for completeness and inclusivity of contrary viewpoints.
To be clear, none of what I have written is intended as an insult in any way. The point is only that it is not clear that the EA Hotel can substantiate its claim to being effectively altruistic (e.g. given the lack of measurability, which seems to be your argument), particularly “very” or even “the most” effective in terms of output per unit of resource input. Based on this lack of clarity, I find that I cannot personally commit to supporting the project.
However, it looks like the EA Hotel already has the funding it needs now, so perhaps we may simply go our separate ways at this point. My aim throughout was to be constructive. Hopefully some of it was useful in some way.
Bill Gates does not (to my knowledge) support the EA Hotel

We are far too small to be on Bill Gates’ radar. It’s not worth his time looking at grants of less than millions of dollars. (Who knows, though; maybe we’ll get there eventually?)
does the EA Hotel provide better benefit than animal welfare efforts? Polio? Global warming? Political campaigns? Poverty alleviation?

The EA Hotel is a meta-level project, as opposed to the other, more object-level efforts you refer to, so it’s hard to do a direct comparison. Perhaps it’s best to think of the Hotel as a multiplier for efforts in the EA space in general. We are enabling people to study and research topics relevant to EA, and also to start new projects and collaborations. Ultimately we hope that this will lead to significant pay-offs in terms of object-level value down the line (although in many cases this could take a few years, considering that most of the people we host are in the early stages of their careers).
That is understandable; however, it is unsatisfying in my personal opinion, and I cannot commit to funding on such indefinite and vague terms. You (and others) clearly think otherwise, but hopefully you can understand this contrary perspective.
Even if “the EA Hotel is a meta level project,” which to be clear I can certainly understand, there should still be some estimate of what the anticipated multiplier is, i.e. a range with a central target and a +/- margin. From what I can see upon reviewing current and past guests’ projects, I am not confident that there will be a high return of ethical utility on resource inputs.
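To illustrate what such an estimate might look like, here is a minimal sketch. The low / central / high multiplier values are hypothetical placeholders of my own, not figures from the EA Hotel; only the ~£5.7k/month budget figure comes from earlier in this thread.

```python
# Sketch of the kind of estimate being requested: an anticipated
# multiplier expressed as a range with a central target, applied to
# the project's budget. The multiplier values are hypothetical.

annual_budget_gbp = 5700 * 12  # ~£5.7k/month quoted in-thread -> £68,400/year

def object_level_value(budget_gbp, multiplier):
    """Object-level value unlocked per pound of meta-level spending."""
    return budget_gbp * multiplier

# A hypothetical prior: the project multiplies donations by 0.5x-5x.
scenarios = {"low": 0.5, "central": 2.0, "high": 5.0}

for label, m in scenarios.items():
    print(f"{label}: £{object_level_value(annual_budget_gbp, m):,.0f}")
```

Even a wide range like this would let a donor check whether the low scenario still beats the object-level alternatives they are comparing against.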
Unless it can be demonstrated otherwise, my default inclination (similar to Bill Gates’ strategy) is that projects in developing regions are generally (but certainly not always) significantly higher in yield than those in developed regions; this is not unlike the logic that animal welfare efforts are higher yield than human-focused efforts. The EA Hotel is in a developed society and seems focused on fields that already have some of the highest funding, e.g. artificial intelligence (AI) research. Based on this, it seems perhaps incorrect (or even unethical) to allocate to this project. This isn’t necessarily conclusive, but evidence to the contrary has not been clear.
Hopefully you understand that what I am describing above is not meant as a personal insult by any means or the result of rash emotions, but rather the result of rational consideration along ethical lines.
[Replying to above thread] One reason I asked you to plug some numbers in is that these estimates will depend a lot on what your priors are for various parameters. We will hopefully provide some of our own numerical estimates soon, but I don’t think that too much weight should be put on them (Halffull makes a good point about measurability above). Also consider that our priors may be biased relative to yours.
I’ll also say that a reason for Part 2 of the EV estimate being put on the back burner for so long was that Part 1 didn’t get a very good reception (i.e. people didn’t see much value in it). You are the first person to ask about Part 2!
[Replying to this thread]
projects in developing regions are generally (but certainly not always) significantly higher in yield than in developed regions

I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel; it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to be aiming for. However, as I said before, it’s hard to directly compare this kind of work (and meta-level work generally) with shovel-ready, object-level interventions like distributing mosquito nets in the developing world.
The fact that others are not interested in such due diligence is itself a separate concern, suggesting that support for the EA Hotel is perhaps not as rigorous as it should be; however, this is a concern about your supporters rather than about you or the EA Hotel. Such due diligence seems to me a basic requirement, particularly within the EA movement.
No: I recognize, understand, and appreciate this point fully, but I fundamentally do not agree, and that is why I cannot in good conscience support this project currently. Because this family of fields (AI research in particular) is so high-value and potentially profitable to society, it has already attracted significant funding from entrenched institutions, e.g. Google’s DeepMind. In general there is a point of diminishing returns on incremental investment, and that is a real possibility in an area as richly funded as this one. Unless there is evidence, or at least logic, to the contrary, there is no reason to dismiss this concern.
Also, as part of my review, I looked at the profiles of the EA Hotel’s current and past occupants; the primary measurable output seems to be progress through MOOCs and posts on the EA Forum, and this output is frankly ~0 in my opinion. Again, this is not an insult at all; it is simply my assessment. It may be that the EA Hotel is actually fulfilling the role of a remedial school for non-competitive researchers who are not attracting employment from the richly-funded organizations in their fields; such an effort would likely be low yield. Again, this is not an insult; it is just a possibility suggested by market-based logic. There are certainly other, more positive potential roles (which I am very open to; otherwise I would not bother continuing this discussion to this depth), but these have not yet been demonstrated.
Re: the measurability bias response, this is an incomplete answer. It is fine not to have much supporting data when the project is only ~1 year old; however, some data should be generated. More importantly, the project should have a charter with an estimate or goal anticipating measurable effects, against which data can be recorded (whether successfully or unsuccessfully) to show how well the organization is doing. How else will you know whether you are being effective or successful?
How do you know that the EA Hotel is being effectively altruistic (again particularly against competing efforts), in the context of your given claim at the top about being effectively “the best use of money”?
These issues still remain open in my opinion. Hopefully these critiques will at least be some food for thought to strengthen the EA Hotel and future endeavors.