I’m highly enjoying the “death of the author” interpretation (and even just its existence), thanks! :)
Fair point, thank you! If I have some time, I might replace the sprout with some other kind of risk (maybe something flammable), but I haven’t thought about it very carefully yet, and would definitely take suggestions.
For what it’s worth, I highly enjoyed reading this interaction. :) +1 to Dario and everyone else here.
Thanks for the feedback! I definitely dislike propaganda, and would be curious to see which parts felt the most propaganda-y to you. Also, to echo Dario, below—I appreciate your very kind delivery of the negative feedback. :) I don’t know if I will ever end up spending much time improving the story, as my life is pretty hectic at the moment, but I would be interested in any specific improvements you suggest. (So far, I haven’t really tried much, but I’ve considered ways of addressing the inadequacy of the oak sprout metaphor by e.g. replacing it with something flammable.)
To be honest, I didn’t think very hard about the names. The thought process was roughly: 1) I want to make a story whose characters are birds, and I could have a smart black bird. 2) Incidentally, I like that it doesn’t have to be technical or complicated—there are birds you can call “blackbirds,” and there are birds you can call “bluebirds,” so 3) I’ll call my characters “black bird” and “blue bird.” And I liked the colors this suggested, so that didn’t veto the decision. :)
In any case, I’m glad you liked it, thanks!
Thanks for the comments! The urgency argument makes sense. I’m not sure if I’ll end up changing things, but I’ll consider it, and thanks for pointing this out!
Thanks a bunch—I’m glad you liked it!
Thank you for this comment!
Thank you! I’m glad. :)
An update: after a bit of digging, I discovered this post, “Should marginal longtermist donations support fundamental or intervention research?”, which discusses a topic quite close to “should EA value foundational (science/decision theory) research” (the subject of the pathway (1) section of my post). The conclusions of the post I found do not fit my vague impression of “the consensus.” In particular, that post concludes that longtermist research hours should often be spent on fundamental research (which is defined by its goals).
I’m moderately confident that, from a longtermist perspective, $1M of additional research funding would be better allocated to fundamental rather than intervention research (unless funders have access to unusually good intervention research opportunities, but not to unusually good fundamental research opportunities)
(Disclaimer: the author, Michael, is employed at Rethink Priorities, where I am interning. I don’t know if he still endorses this post or its conclusions, but the post seems relevant here and very valuable as a reference.)
For what it’s worth, I’ve seen “pathway to impact” used in the way you seem to use “impact chain” (e.g. and e.g., and I used it a bunch), and it seems somewhat more natural to me. It’s possible that “pathway to impact” is just a niche term that clicked with me, though, and I definitely agree that it’s a useful concept.
I intuitively would’ve drawn the institution blob in your sketch higher, i.e. I’d have put fewer than (eyeballing) 30% of institutions in the negatively aligned space (maybe 10%?).
I won’t redraw/re-upload this sketch, but I think you are probably right.
In moments like this, including a quick poll in the forum to get a picture of what others think would be really useful.
That’s a really good idea, thank you! I’ll play around with that.
re: “argument for how an abstract intervention that improves decision-making would also incidentally improve the value-alignment of an institution” etc.
Thank you for the suggestions! I think you raise good points, and I’ll try to come back to this.
I think this is a really cool work/parable: “That Alien Message.” It’s by Eliezer Yudkowsky, so I don’t know if it’s too well known to count, but it still seems worth collecting in this context. (The topic, or “relevance” from an EA point of view, of the story is a spoiler, but should be pretty clear.)
Thank you so much!
For what it’s worth, I also feel like people might shy away from referring works if every referral has to be a top-level post (rather than a reply, as Linch suggests). In particular, I personally am second-guessing myself and will probably not end up referring anything, but would happily contribute things as comments (I might end up doing that anyway, if I feel like it’s relevant enough, and people can repost if they want to). However, this could just be a personal preference rather than a common or shared experience.
Thank you for this response! I think I largely agree with you, and plan to add some (marked) edits as a result. More specifically,
On the 80K problem profile:
I think you are right; they are value-oriented in that they implicitly argue for the targeted approach. I do think they could have made it a little clearer, as much (most?) of the actual work they recommend or list as an example seems to be research-style. The key (and important) exception that I ignored in the post is the “3. Fostering adoption of the best proven techniques in high-impact areas” work they recommend, which I should not have overlooked. (I will edit that part of my post, and likely add a new example of research-level value-neutral IIDM work, like a behavioral science research project.)
“I don’t think the value-neutral version of IIDM is really much of a thing in the EA community”
Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable? The CES example is quite complicated, but I’m not sure that I think it should be disqualified here. (To be clear, however, I do think CES seems to do very valuable work—I’m just not exactly sure how to evaluate it.)
Side note, on “a core tenet of democracy is the idea that one citizen’s values and policy preferences shouldn’t count more than another’s”
I agree that this is key to democracy. However, I do think it is valid to discuss to what extent voters’ values align with actual global good (and I don’t think this opinion is very controversial). For instance, voters might be more nationalistic than one might hope, they might undervalue certain groups’ rights, or they might not value animal or future lives. So I think that, to understand the actual (welfare) impact of an intervention that improves a government’s ability to execute its voters’ aims, we would need to consider more than democratic values. (Does that make sense? I feel like I might have slightly misinterpreted what you were trying to say, and I’m not sure I’m explaining myself properly.) On the other hand, it’s possible that good government decision-making is bottlenecked more by its ability to execute its voters’ aims than it is by the ethical alignment of the voters’ values—but I still wish this were more explicitly considered.
“It looks like you’re essentially using decision quality as a proxy for institutional power, and then concluding that intentions x capability = outcomes.”
I think I explained myself poorly in the post, but this is not how I was thinking about it. I agree that the power of an institution is (at least) as important as its decision-making skill (although it does seem likely that these things are quite related), but I viewed IIDM as mostly focused on decision-making and set power aside. If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle. (Note that the visualizing approach in the “A few overwhelmingly harmful institutions” image is another way of drawing volume or a third dimension, I think.) I might add a note along these lines to the post to clarify things a bit.
About “the distinction between stated values and de facto values for institutions”
You’re right, I am very unclear about this (and it’s probably muddled in my head, too). I am basically always trying to talk about the de facto values. For instance, if a finance company whose only aim is to profit also incidentally brings a bunch of value to the world, then I would view it as value-aligned for the purpose of this post. To answer your questions about the typical private health insurance company, “does bringing its (non-altruistic) actions into greater alignment with its (altruistic) goals count as improving decision quality or increasing value alignment under your paradigm”—it would count as increasing value alignment, not improving decision quality.
Honestly, though, I think this means I should be much more careful about this term, and probably just clearly differentiate between “stated-value-alignment” and “practical-value-alignment.” (These are terrible and clunky terms, but I cannot come up with better ones on the spot.) I also think that my own note about “well-meaning [organizations that] have such bad decision quality that they are actively counterproductive to their aims” clashes with the “value-alignment” framework. I think there is a good chance that it does not work very well for organizations whose main stated aim is to do good (of some form). I’ll definitely think more about this and try to come back to it.
“The professional world is incredibly siloed, and it’s not hard at all for me to imagine that ostensibly publicly available resources and tools that anyone could use would, in practice, be distributed through networks that ensure disproportionate adoption by well-intentioned individuals and groups. I believe that something like this is happening with Metaculus, for example.”
This is a really good point (and something I did not realize, probably in part due to a lack of background). Would you mind if I added an excerpt from this or a summary to the post?
On your note about “generic-strategy”: Apologies for that, and thank you for pointing it out! I’ll make some edits.
Note: I now realize that I have basically inverted normal comment-response formatting in this response, but I’m too tired to fix it right now. I hope that’s alright!
Once again, thank you for this really detailed comment and all the feedback—I really appreciate it!
Hi! I’m out of the loop, but I’m curious whether this has resolved, and if there is a place to see submissions. The competition was supposed to close at the end of the month (August 2021), and it is now September.
Thank you for the post, I found it interesting! [Minor point in response to Linch’s comment.]
I generally agree with Linch’s surprise, but
When people choose not to work on other people’s ideas, it’s usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like “publishable”, “appealing to funders”, or “tractable”), not because of a lack of ideas!
I (weakly) think that another factor here is that people are trained (e.g. in their undergraduate years) to come up with original ideas and work on those, whether or not they are actually useful. This gets people into the habit of over-valuing a form of topic originality. (I.e. it’s not just personal fit, arrogance, and external incentives, although those all seem like important factors.)
This is definitely the case in many of the humanities, but probably less true for those who participate in things like scientific research projects, where there are clearly useful lab roles for undergraduates to fill. In my personal experience, all my math work was assigned to me (inside and outside of class), while on the humanities side, I basically never wrote a serious essay whose topic I did not create. (This sometimes led to less-than-sensible papers, especially in areas where I felt that I lacked background and so had to find somewhat bizarre topics that I was confident were “original.”)
My guess is that changing this would be valuable, but might be very hard. Projects like Effective Thesis come to mind.