This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I’m very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data. That’s why GiveWell spun out GiveWell Labs, which eventually became the Open Philanthropy Project. It’s why CEA started the EA Funds to fund more speculative, early-stage EA projects. Lots of EA projects, from cultured meat to x-risk reduction to global priorities research, are speculative and hard to rigorously measure or forecast. As a quick concrete example, the Open Philanthropy Project gave $30 million to OpenAI, much more money than the EA Hotel has received, with much less public justification than has been put forth for the EA Hotel, and without much in the way of numerical measurements or forecasts.
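To make the expected-value trade-off concrete, here is a minimal sketch with made-up numbers; none of the figures below come from this thread or from any real cost-effectiveness estimate.

```python
# Expected value = probability of success * value if successful.
# All numbers are illustrative assumptions, not real estimates.
proven_cause = {"p_success": 0.95, "value_if_success": 1_000}
speculative_cause = {"p_success": 0.01, "value_if_success": 500_000}

for name, cause in [("proven", proven_cause), ("speculative", speculative_cause)]:
    ev = cause["p_success"] * cause["value_if_success"]
    print(f"{name:>11} cause: expected value = {ev:,.0f}")

# Output:
#      proven cause: expected value = 950
# speculative cause: expected value = 5,000
```

On these assumed numbers, the speculative cause wins in expectation even though it almost always fails, which is the intuition behind the “hits-based giving” link below.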
If you really want to discuss this topic, I suggest you create a separate post laying out your position—but be warned, this seems to be a fairly deep philosophical divide within the movement which has been relatively hard to bridge. I think you’ll want to spend a lot of time reading EA archive posts before tackling this particular topic. The fact that you seem to believe EAs think contributing to the sort of relatively undirected, “unsafe” AI research that DeepMind is famous for should be a major priority suggests to me that there’s a fair amount you don’t know about positions & thinking that are common to the EA movement.
Here are some misc links which could be relevant to the topic of measurability:

https://www.openphilanthropy.org/blog/hits-based-giving
https://forum.effectivealtruism.org/posts/htCtF4DFKHGxaLfro/why-i-m-skeptical-about-unproven-causes-and-you-should-be
https://en.wikipedia.org/wiki/Streetlight_effect
https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be
https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
https://www.lesswrong.com/posts/9W9P2snxu5Px746LD/many-weak-arguments-vs-one-relatively-strong-argument
https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/
https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection
https://forum.effectivealtruism.org/posts/NbFZ9yewJHoicpkBr/a-model-of-the-machine-intelligence-research-institute

And here’s a list of lists of EA resources more generally speaking:

https://www.effectivealtruism.org/resources/
https://forum.effectivealtruism.org/posts/tf3Hous3oc9hNS6eL/is-there-a-good-place-to-find-the-what-we-know-so-far-of-the
https://forum.effectivealtruism.org/posts/Y8mBXCKmkS9eBokhG/ea-syllabi-and-teaching-materials
https://resources.eahub.org/learn/about-ea/
https://80000hours.org
https://www.benkuhn.net/ea-reading
Hey, thanks for the reply; it looks like there is a lot of interesting and useful information there. Also, based on my notifications it looks like you replied to me twice, but I can only find this comment, so sorry if I missed something else.
With all due respect, I think there is a bit of a misunderstanding on your part (and on the part of others voting you up and me down).
If your focus is on rigorous, measurable, proven causes, great. I’m very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data.
First of all, I am interested in rigorous & evidential efforts, what one could perhaps call generally “scientific” in method; however, I am not exclusively interested in such. It is therefore not correct to suggest that Open_Thinker is not “interested in more speculative causes” with deep, far-reaching potential consequences, similar to your apparent position. The difference (e.g. with OpenAI) is that the EA Hotel is by comparison a relatively minor organization (at least based on existing budget and staffing levels) which is less likely to have much of an effective impact, compared to other organizations such as DeepMind and OpenAI (as well as academic groups) with concrete achievements and credentials, e.g. AlphaGo and Elon Musk’s achievements and former involvement, respectively. This should not be underestimated, as it is a critical component; the EA Hotel’s list of achievements looks fairly non-existent by comparison, and from what I can tell by skimming its ongoing activities, it may remain so, at least for the near future. It is not only I who thinks this way; the funding gap itself suggests as much.
Secondly, no, I do not think we have a fundamental disagreement about priorities, although we certainly do disagree about implementations, details, specific methods, etc.
Actually, I am being quite on topic, because the topic of this thread is funding the EA Hotel, which has been my focus throughout, specifically in the context of the top comment’s claim that the EA Hotel is “the best use of EA money,” which I was directly responding to and questioning. I am merely exercising skepticism and due diligence; is that wrong? For a specific claim like that, where is the evidence to support it?
So, just so there is clear understanding: I am really only interested here in whether or not I personally should help fund the EA Hotel, and I am willing to do so if there is convincing logic or evidence. Up until this point I still do not see it; however, the EA Hotel also has the funding it needs from other sources now, so perhaps we should just leave the matter. I am still willing to continue, though increasingly less so, as in my estimation there are diminishing returns for all involved.
However, thank you again for the response and the information above; I will take some time to peruse it.
which is less likely to have much of an effective impact, compared to other organizations such as DeepMind and OpenAI…
Do you think this is true even in terms of impact/$ (given they are spending ~1,000-10,000x what we are)?
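For concreteness, here is a quick back-of-the-envelope version of that question, using only the ~1,000-10,000x spending ratio mentioned above; the absolute impact of either organization is left abstract.

```python
# If a big org spends R times as much as a small org, the small org comes out
# ahead on impact per dollar whenever its total impact exceeds 1/R of the big org's.
for ratio in (1_000, 10_000):
    print(f"spending ratio {ratio:>6,}x -> break-even at {1 / ratio:.4%} of the bigger org's impact")

# Output:
# spending ratio  1,000x -> break-even at 0.1000% of the bigger org's impact
# spending ratio 10,000x -> break-even at 0.0100% of the bigger org's impact
```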
however, the EA Hotel also has the funding it needs from other sources now
We now have ~3 months’ worth of runway. That’s a good start to this fundraiser, but it is hardly conducive to sustainability. As mentioned below, we would like to get to 6 months’ runway in order to start a formal hiring process for our Community & Projects Manager; the industry standard for non-profits is 18 months’ runway.
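As a rough illustration of the runway arithmetic, here is a minimal sketch; the monthly cost figure is a hypothetical placeholder, not the EA Hotel’s actual budget.

```python
# Runway (months) = cash reserves / monthly operating costs.
monthly_costs = 10_000  # hypothetical placeholder, not the hotel's real figure
reserves = 3 * monthly_costs  # ~3 months of runway, as described above

for target_months in (6, 18):  # hiring threshold vs. the non-profit industry standard
    shortfall = target_months * monthly_costs - reserves
    print(f"to reach {target_months:>2} months' runway: raise {shortfall:,} more")

# Output:
# to reach  6 months' runway: raise 30,000 more
# to reach 18 months' runway: raise 150,000 more
```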