Noted; however, upon lightly reviewing said information, it seems to be lacking. Hence the request for further information.
It does not seem like this is expected until update 10, as I noted previously. The fact that it is not a higher priority for an EA organization is a shortcoming, in my opinion.
This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I’m very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable, but which we think could have higher expected value, or, at the very least, might help the EA movement gather valuable experimental data. That’s why GiveWell spun out GiveWell Labs, which eventually became the Open Philanthropy Project. It’s why CEA started the EA Funds to fund more speculative early-stage EA projects. Lots of EA projects, from cultured meat to x-risk reduction to global priorities research, are speculative and hard to rigorously measure or forecast. As a quick concrete example, the Open Philanthropy Project gave $30 million to OpenAI, much more money than the EA Hotel has received, with much less public justification than has been put forth for the EA Hotel, and without much in the way of numerical measurements or forecasts.
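The expected-value reasoning here can be made concrete with a toy calculation. All numbers below are hypothetical, chosen purely to illustrate how a low-probability, high-impact cause can dominate a near-certain, modest one:

```python
# Hypothetical illustration of the expected-value argument for speculative
# causes; every figure here is made up for the sake of the example.

def expected_value(probability_of_success: float, impact_if_success: float) -> float:
    """Expected impact = probability of success x impact conditional on success."""
    return probability_of_success * impact_if_success

# A "proven" cause: near-certain but modest impact per $1M donated.
proven = expected_value(0.95, 100)          # 0.95 * 100 = 95 impact units

# A "speculative" cause: unlikely to pay off, but huge if it does.
speculative = expected_value(0.01, 50_000)  # 0.01 * 50,000 = 500 impact units

print(f"proven: {proven}, speculative: {speculative}")
```

On these (invented) numbers the speculative cause has over 5x the expected impact, even though it fails 99% of the time; this is the core of the "hits-based giving" position linked below.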
If you really want to discuss this topic, I suggest you create a separate post laying out your position—but be warned, this seems to be a fairly deep philosophical divide within the movement which has been relatively hard to bridge. I think you’ll want to spend a lot of time reading EA archive posts before tackling this particular topic. The fact that you seem to believe EAs think contributing to the sort of relatively undirected, “unsafe” AI research that DeepMind is famous for should be a major priority suggests to me that there’s a fair amount you don’t know about positions & thinking that are common to the EA movement.
Here are some misc links which could be relevant to the topic of measurability:
https://www.openphilanthropy.org/blog/hits-based-giving
https://forum.effectivealtruism.org/posts/htCtF4DFKHGxaLfro/why-i-m-skeptical-about-unproven-causes-and-you-should-be
https://en.wikipedia.org/wiki/Streetlight_effect
https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be
https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
https://www.lesswrong.com/posts/9W9P2snxu5Px746LD/many-weak-arguments-vs-one-relatively-strong-argument
https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/
https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection
https://forum.effectivealtruism.org/posts/NbFZ9yewJHoicpkBr/a-model-of-the-machine-intelligence-research-institute
And here’s a list of lists of EA resources more generally speaking:
https://www.effectivealtruism.org/resources/
https://forum.effectivealtruism.org/posts/tf3Hous3oc9hNS6eL/is-there-a-good-place-to-find-the-what-we-know-so-far-of-the
https://forum.effectivealtruism.org/posts/Y8mBXCKmkS9eBokhG/ea-syllabi-and-teaching-materials
https://resources.eahub.org/learn/about-ea/
https://80000hours.org
https://www.benkuhn.net/ea-reading
Hey, thanks for the reply; it looks like there is a lot of interesting and useful information there. Also, based on my notifications it looks like you replied twice, but I can only find this comment, so sorry if I missed something else.
With all due respect, I think there is a bit of a misunderstanding on your part (and on the part of others voting you up and me down).
First of all, I am interested in rigorous, evidence-based efforts (what one could perhaps call “scientific” in method, generally); however, I am not exclusively interested in such. It is therefore not correct to say that Open_Thinker is not “interested in more speculative causes” with deep, far-reaching potential consequences; in that respect my position is similar to yours. The difference (e.g. with OpenAI) is that the EA Hotel is a relatively minor organization in comparison (at least based on existing budget and staffing levels), and so is less likely to have much of an effective impact compared to organizations such as DeepMind and OpenAI (as well as academic groups) with concrete achievements and credentials: e.g. AlphaGo, and Elon Musk’s achievements and former involvement, respectively. This should not be underestimated, as it is a critical component; by comparison, the EA Hotel’s list of achievements looks fairly non-existent, and from what I can tell by skimming its ongoing activities, it may remain so (at least for the near future). It is not only I who thinks this way, as this directly explains the funding gap.
So secondly, no, I do not think we have a fundamental disagreement about priorities, although we certainly do regarding implementations, details, specific methods, etc.
Actually, I am being quite on topic, because the topic of this thread is specifically about funding the EA Hotel, which has been my focus throughout; specifically, I was directly responding to and questioning the top comment’s claim that the EA Hotel is “the best use of EA money.” I am merely expressing skepticism and doing due diligence; is that wrong? Given a specific claim like that, where is the evidence to support it?
So, just so there is clear understanding: I am really only interested here in whether or not I personally should help fund the EA Hotel, and I am willing to do so if there is convincing logic or evidence. Up to this point I still do not see it; however, the EA Hotel now has the funding it needs from other sources, so perhaps we should just leave the matter. I am still willing to continue, though increasingly less so, as there are diminishing apparent returns for all involved, in my estimation.
In any case, thank you again for the response and the information above; I will take some time to peruse it.
which is less likely to have much of an effective impact, compared to other organizations such as DeepMind and OpenAI...
Do you think this is true even in terms of impact/$ (given they are spending ~1,000–10,000x what we are)?
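The impact-per-dollar point can be sketched numerically. The figures below are entirely hypothetical and only illustrate the shape of the argument: a large lab can have far more total impact yet still lose on impact/$ when its budget is thousands of times larger:

```python
# Toy comparison of impact per dollar; all figures are hypothetical.
# Even if a large lab's total impact dwarfs a small project's, the small
# project can still win on impact/$ when its budget is ~1,000x smaller.

def impact_per_dollar(total_impact: float, annual_spend: float) -> float:
    return total_impact / annual_spend

# Invented numbers: the lab has 200x the total impact, but ~3,800x the spend.
big_lab = impact_per_dollar(total_impact=1_000, annual_spend=500_000_000)
small_project = impact_per_dollar(total_impact=5, annual_spend=130_000)

print(small_project > big_lab)  # the smaller budget offsets the smaller total impact
```

This is why the comparison in the parent comment (total achievements) and the comparison here (impact/$) can point in opposite directions.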
however, the EA Hotel also has the funding it needs from other sources now
We now have ~3 months’ worth of runway. It’s a good start to this fundraiser, but hardly conducive to sustainability. As mentioned below, we would like to get to 6 months of runway so that we can start a formal hiring process for our Community & Projects Manager (the industry standard for non-profits is 18 months of runway).
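The runway arithmetic behind these targets is simple: runway in months is reserves divided by monthly burn. The reserve and burn figures below are hypothetical placeholders, chosen only to mirror the ~3-month figure above:

```python
# Runway (months) = current reserves / monthly burn rate.
# Both figures below are hypothetical, chosen only to yield ~3 months.

def runway_months(reserves: float, monthly_burn: float) -> float:
    return reserves / monthly_burn

current = runway_months(reserves=18_000, monthly_burn=6_000)  # 3.0 months

hiring_threshold = 6    # months needed to start a formal hiring process
industry_standard = 18  # months typical for non-profits

# Extra funds needed to reach the 6-month hiring threshold at this burn rate.
shortfall = (hiring_threshold - current) * 6_000
print(current, shortfall)
```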
I would appreciate it if you could review the information a bit more thoroughly. Perhaps you could generate your own estimate using the framework developed in Fundraiser 3 and the outputs listed here. Fundraiser 10 was listed last because I want to try to do a thorough job of it (but also have other competing urgent priorities with respect to the hotel). There are also many considerations as to why any such estimates will be somewhat fuzzy, and perhaps not ideal to rely on too heavily for decision-making (I hope to go into detail on this in the post).
I reviewed the 3rd update; that was what led me to search for Part 2, which is expected in update 10. Frankly, I am personally not interested in the 3rd update’s calculations, because, simply based on my own time constraints, I would prefer a concise estimate over having to tediously work through the calculations myself.
Please understand that this is not an insult; in fact, I think it is a reasonable point. For example, with other (in some ways competing) efforts such as startups, most venture capitalists would not accept a pitch deck that presents slides of incomplete calculations and asks them to work through the numbers manually on their own time, rather than tidily summarizing the conclusive points. It is likely just not worth the effort in most cases.
It looks like the EA Hotel has now obtained funding through 2019, so I congratulate your team on that. If you would like to continue the discussion, I suggest replying to my other comment (below) so that the threads do not diverge.
See reply below.