Misha_Yagudin
Update to Samotsvety AGI timelines
Yes, more broadly, I think that we should think about governance more… I guess there is a bunch of low-hanging fruit we can import from the broader world; e.g., someone doing internal-to-EA investigative journalism could have uncovered risks related to the FTX/Alameda leadership, or could simply have done an independent risk analysis (e.g., this forecasting question put the risk of FTX default at roughly 8%/yr — I am not sure bettors had any private information; I think base rates alone give a probability of around 10%).
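As a side note on what such a base rate implies, here is a minimal illustration. It reads the ~10% base rate as an annual hazard rate (which the comparison with 8%/yr suggests), and the multi-year horizons are my own addition for the example:

```python
# Illustration only: compounding an assumed ~10%/yr base-rate default hazard
# into cumulative default probabilities over a few years.
annual_default_rate = 0.10  # the ~10% base rate mentioned above, read as per-year

for years in (1, 2, 3):
    p_default = 1 - (1 - annual_default_rate) ** years
    print(f"P(default within {years} yr): {p_default:.0%}")
# -> 10%, 19%, 27%
```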
We are giving $10k as forecasting micro-grants
A bit of a tangent: I am confused by SFF’s grant to OAK (Optimizing Awakening and Kindness). Could any recommender comment on its purpose, or at least briefly describe what OAK is about? The hyperlink is not very informative.
Dear Morgan,
In this comment I want to address the following paragraph (#3).
> I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.
I think that it is a miscommunication on my side.
> EA Russia has the oral agreements with [the organizers of math olympiads]...
We contacted the organizers of math olympiads and asked them whether they would like to have copies of HPMoR as a prize (conditional on us finding a sponsor). We didn’t promise them anything, and they do not expect anything from us. I would also like to say that we did not approach them as EAs (as I am mindful of the reputational risks).
Another important consideration that is not often mentioned (here and in our forecast) is how much more or less impact you expect to have after an all-out Russia–NATO nuclear war that destroys London.
I am confused about the relevance of Ought’s work to AI alignment. They got solid early endorsements for testing Christiano’s ideas around factored cognition, but they have since pivoted to work on Elicit, which seems really cool but doesn’t feel very alignment-related to me. I would appreciate someone making a case for the relevance/importance of their work to alignment.
I read it not as a list of good actors doing bad things, but as a list of idealistic actors [at least in public perception] not living up to their own standards [standards the public ascribes to them].
Interesting side-finding: prediction markets seem notably worse than cleverly aggregated prediction pools (at least when liquidity is as low as it is in play-money markets). There aren’t many studies, but see Appendix A for what we’ve found.
I think this is great!
https://funds.effectivealtruism.org/funds/far-future might be a viable option to get funding.
As for suggestions:
- Maybe link to the markets/forecasting pools you use for the charts, like this: “… ([Platform](link-to-the-question))”?
- I haven’t tested this, but it would be great for links to your charts to have snappy social media previews.
Good luck; would be great to see more focus on AI per item 4!
What are some common misconceptions about the suffering-focused world-view within the EA community?
Yeah, it would probably be good if people redirected this energy into climbing ladders in government/the civil service/the military or in important, powerful corporate institutions. But I guess these ladders underpay you in terms of social credit/inner-ring status within EA. Should we give more praise to people aiming at careers that take ~15 years to reach high impact?
This comment should be upvoted for its rigor and contribution to the discussion. But I am a bit disappointed that this comment is so highly upvoted and that Alexey’s response fails to communicate the higher-level crux. [CoI: Alexey and I are best friends; I have benefited from his takes on sleep.]
The high-level objection I want to raise is something like “ugh, this is not how you should think about mad science.” Ozzie Gooen described the difference between disagreeables and assessors quite well. This post is of the highly-disagreeable-noticing-a-conspiracy-against-humanity-mad-science type, and the comment is of the careful-measured-assessor type. Peter’s comment, and even Alexey’s response, were operating in “assessing” mode, which is fair but misses quite a bit, as the essay is titled “Theses on Sleep” and not “A Systematic Review of Sleep.”
I can’t imagine the mindset behind this comment producing the core ideas of the post. Like, it’s very hard for me to imagine someone who is not overly dismissive of sleep science (e.g., thinking that it is 100% psyops) bearing through a harsh sleep-deprivation self-experiment and seriously considering that modern sleep is a superstimulus, contributes to depression, or is unnecessary. It’s correct to point out that not every single piece of evidence about sleep has been fabricated and used in psyops, but I think this ~passion is a cost of taking mad ideas seriously enough to engage with them.
While the above comment is epistemically virtuous on the level of evaluating the strength of evidence and noticing the priors, I think it misses the larger picture[1]: discoveries are often made in bizarre circumstances and look like epicycle upon epicycle [see: SMTM on scurvy (one, two) and The Copernican Revolution from the Inside].
We surely want to reward hypothesis generation. Ideally, great hypotheses would be generated by people who hedge and bow appropriately, but in practice, it takes bull-headedness.
[1] Another big-picture miss is that the evidence has been strongly filtered by authority-prestige forces [see: post-structuralism and the strong programme in the sociology of science].
I was confused by the headline. “Ben Garfinkel: How Sure are we about this AI Stuff?” would make it clear that it is not some kind of official statement from CEA. Changing the author to EA Global, or even to the co-authorship of EA Global and Ben Garfinkel, would help as well.
And the FLI award is probably worth mentioning.
Especially excited about “Immigration Specialist.”
This is odd to me. I see how committing to be vegan can strengthen one’s belief in the importance of animal suffering. But my not-very-educated guess is that the effect is more akin to how buying an iPhone or an Android strengthens your belief in the superiority of one over the other. I don’t see how it would help one understand/consider animal experiences and needs.
I haven’t read the paper in depth but searched for relevant keywords and found:
> Additionally, a sequence of five studies from Jonas Kunst and Sigrid Hohle demonstrates that processing meat, beheading a whole roasted pig, watching a meat advertisement without a live animal versus one with a live animal, describing meat production as “harvesting” versus “killing” or “slaughtering,” and describing meat as “beef/pork” rather than “cow/pig” all decreased empathy for the animal in question and, in several cases, significantly increased willingness to eat meat rather than an alternative vegetarian dish. [33]
>
> Psychologists involved in these and several other studies believe that these phenomena occur [34] because people recognize an incongruity between eating animals and seeing them as beings with mental life and moral status, so they are motivated to resolve this cognitive dissonance by lowering their estimation of animal sentience and moral status. Since these affective attitudes influence the decisions we make, eating meat and embracing the idea of animals as food negatively influences our individual and social treatment of nonhuman animals.
The cited papers (33, 34) do not provide much evidence to support your claim for people who spend significant time reflecting on the welfare of animals.
Dear Morgan,
In this comment I want to address the following paragraph (related to #2).
> If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and _80,000 Hours_ are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!
a. While I agree that the books you’ve mentioned are more directly related to EA than HPMoR, I think it would not have been possible to give them as a prize. The fact that the organizers whom we contacted had read HPMoR significantly contributed to the possibility of giving anything at all.
b. I share your concern about HPMoR not being EA enough. We hope to mitigate this via a leaflet + SPARC/ESPR.
Apologies for maybe sounding harsh, but I think this is plausibly quite wrong and non-substantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.
One reason: a post that does justice to the topic should explore possible return curves, and this post doesn’t even contextualize the bets against how much money EA had at the time (~$60B) or has now (~$20B) until the middle of the post, where it mentions it in passing: “so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold.” Arguing that some degree of risk aversion is, indeed, implied by diminishing returns is trivial and has few practical implications.
I wish I had time to write about why I think altruistic actors should probably take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding (see the sketch below). But I basically think there would be ways to spend money scalably and at the current “last dollar” margins.
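To make that implication concrete, here is a back-of-the-envelope sketch. The isoelastic returns curve and the use of the ~$20B figure above as baseline capital are my assumptions for illustration, not a model from the post; the sketch solves for the curvature at which an actor is indifferent between the two options, then reports the implied drop in marginal cost-effectiveness over the added $15B.

```python
# Toy model under the assumptions flagged above: isoelastic returns to total
# capital, U(x) = x**(1 - eta) / (1 - eta), with baseline capital C = $20B.
import math
from scipy.optimize import brentq

C = 20.0  # baseline capital in $B (the ~$20B figure mentioned above)

def utility(x, eta):
    """Isoelastic utility of total capital x (log utility at eta == 1)."""
    if abs(eta - 1.0) < 1e-9:
        return math.log(x)
    return x ** (1.0 - eta) / (1.0 - eta)

def gamble_minus_sure(eta):
    """EU(10% chance of +$15B) minus EU(sure +$1B); zero at indifference."""
    sure = utility(C + 1.0, eta)
    gamble = 0.9 * utility(C, eta) + 0.1 * utility(C + 15.0, eta)
    return gamble - sure

# Even log utility (eta = 1) still prefers the gamble at this baseline, so
# indifference requires more curvature; brentq finds it inside [1, 5].
eta_star = brentq(gamble_minus_sure, 1.0, 5.0)

# Marginal cost-effectiveness is U'(x) = x**(-eta): compare a marginal
# dollar at C with one at C + 15.
drop = ((C + 15.0) / C) ** eta_star
print(f"indifferent at eta ≈ {eta_star:.2f}; "
      f"marginal cost-effectiveness falls ~{drop:.1f}x over the added $15B")
```

Under these (debatable) assumptions, the indifference point implies a roughly 2–3x fall in marginal cost-effectiveness, in the same ballpark as the “very roughly ≥3x” above; a smaller baseline or a more sharply diminishing returns curve pushes the number up.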
In global health (GH), this sorta follows from how Open Philanthropy’s (OP’s) funding bar didn’t change that drastically in response to a substantial change in OP’s funds (short of $15B, but still), and I think OP’s GH last-dollar cost-effectiveness changed even less.
In longtermism, it’s more difficult to argue. But a bunch of grants that pass the current bar are “meh,” and I think we could probably make some large investments in the future that are better than the current marginal ones. If we had much more money in longtermism, buying a big stake in ~TSMC might be a good thing to do (and it preserves option value, among other things). And it’s not unimaginable that labs like Anthropic might want to spend $10Bs in the next decade(s) to match the potential AI R&D expenses of other corporate actors (I wouldn’t say it’s clearly good, but having the option to do so seems beneficial).
I don’t think the analysis above is conclusive or anything. I just want to illustrate what I see as a big methodological flaw of the post (not looking at actual return curves when talking about diminishing returns) and to make a somewhat grounded-in-reality case for taking substantial bets with positive EV.