Lol, it’s consistently readable. If you expect more, you need to widen your reading horizons.
Sure, I agree with you that the prose is passable, readable, and fairly solid, but definitely not flashy, literary, or anything special (though I think it reaches a somewhat higher level by the middle, but it is never what is important or fun about HPMOR).
I personally never had the delusion that pretty prose was particularly important (if anything I go too far in the other direction), but yeah, it is a mistake that people make. You definitely do not need to write a poem in prose to have a great deal of impact with your writing.
About sapioseparatism:
I suppose this is naturally what I’ll want to push back hardest on, since it is the part that is telling me that there is something wrong with my core identity, assumptions about the world, and way I think. Of course that implies it is likely to be tied up with your core emotions, ways of thinking about the world, identity and assumptions—and hence it is much more difficult for any productive conversation to happen (and less likely for the conversation, even if it does become productive, to change anyone’s mind).
So a core utilitarian (which is not identical to EA) idea is that if something is bad, it has to be bad ‘for’ someone—and that except in exceptional cases, that badness for that someone will show up in their stream of subjective experiences. Now certainly mosquitoes, fish, elephants, and small rodents living in Malawi are all someones whose subjective wellbeing should have some weight in our moral calculations. But I suspect that I’m wired in a particular way such that I could never care very much about anything that happens to ‘nature’ that doesn’t affect anybody’s subjective experiences. This probably goes back to intuitions that cannot be argued with, though possibly they can be modified through prompting examples, social pressure, or by shifting the salience of other considerations and feelings.
At the very least, to the extent that biodiversity (as opposed to individual animals) and nature (as opposed, again, to individual animals) are viewed as important, I’d like to see a greater amount of argument for why this is important for me, or for the EA community generally, to care about. Now, I personally would prefer a green earth full of trees but nothing with brains to a completely dead planet, and I’d prefer more weird species of animals to every ecological niche being filled with the same type of animal. But this isn’t a very strong preference compared to my preference for a long happy human future—and it is a preference which is not at all prompted by my core utilitarian value system.
===
A comment on insecticide-treated bed nets:
It seems like impregnating bed nets with insecticide is the exact opposite of indiscriminate use of insecticide (ie spraying just about everywhere with it), and as a result I would be very surprised if the quantity is enough to cause substantial ecosystem effects.
===
On environmental impact assessment:
Obviously the numbers should be run—at least to the extent that it is not prohibitively expensive to do the study. Research, calculations, checking additional fringe possibilities, etc. are not free, and should only be done if it seems like there is a reasonable chance they will tell us that we were making a mistake. However, for the environmental damage from nets being used for fishing, from nets being burned, from the insecticide messing with children’s hormones, etc., it seems like it would be fairly easy to get a decent guess at how big the effect is, at a cost that is reasonable in the context of a program that has so far distributed 400 million dollars’ worth of nets.
However, based on my priors, I would be fairly surprised if any of these numbers changed the basic conclusion that this is a cheap way to improve the well-being of currently living human beings, and that it has a vanishingly small chance of contributing to a plastics-driven extinction event caused by fertility collapse.
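To make the “only run the study if there is a reasonable chance it changes the decision” point concrete, here is a minimal back-of-envelope sketch. Every number in it (the cost of the study, the probability it overturns the conclusion, the value of changing course) is a made-up placeholder for illustration, not an estimate from either post:

```python
# Back-of-envelope "is the study worth running?" check, with purely hypothetical numbers.
# The rough rule: fund the study only if
#   P(it changes our decision) * (value of acting on the corrected decision) > cost of the study.

study_cost = 200_000            # hypothetical cost of the environmental assessment, USD
p_changes_decision = 0.02       # hypothetical chance it overturns the current conclusion
value_if_changed = 50_000_000   # hypothetical value of correcting course on a ~$400M program, USD

expected_value_of_study = p_changes_decision * value_if_changed

if expected_value_of_study > study_cost:
    print(f"Plausibly worth running: EV ${expected_value_of_study:,.0f} vs cost ${study_cost:,.0f}")
else:
    print(f"Probably skip it: EV ${expected_value_of_study:,.0f} vs cost ${study_cost:,.0f}")
```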
I suppose my question here is: to what extent are you actually thinking about these issues as something where that whole set of concerns might in actual fact be irrelevant, and to what extent would you resist having your view on the importance of environmental concerns be changed by mechanics-level explanations for why a particular bad outcome is unlikely, or by numerical assessments of costs and benefits? You seem to be saying that environmental concerns have a high chance of convincing us to stop giving out bednets, which would lead to some children dying -- since that is the alternative. While changing house designs to discourage mosquitoes sounds like a very good additional idea, I would be shocked if it could be done at the cost of 1 dollar per year per room, like bed nets can be.
Resources are always limited.
So in that context, it is really important that the good thing we win by stopping giving out bednets be just as big and awesome a win as stopping children from dying miserably from malaria. Perhaps that bar can be met—some of your concerns (extinction risks, widespread neurological damage, etc.), if they are real, might be worth letting children die to avoid. But those are the stakes that we need to pay attention to.
It is a common misconception that because a piece of fiction was bad for the particular individual writing about it, or is low status, or is missing some desired marker of ‘goodness’, it therefore is not ‘good’.
There doesn’t seem to be any commonly agreed upon definition of what ‘good’ means in the context of fiction—so I think it is better to focus on whether it is good for particular individuals, where you can just ask the people if they find the text good.
So while HPMOR is not good for Arjun, it is extremely good in a lot of other text-individual pairings.
Also, if by ‘not that good’ you mean ‘easy to duplicate’: as someone who would very much like to write something that is as powerful, compelling, interesting, emotionally satisfying, multilayered, and inspiring as HPMOR, I can say it is not in the slightest easy to write something like it.
It’s definitely not just longtermism—and at least before SBF’s money started becoming a huge thing, there was still an order of magnitude more money going to children in Africa than to anything else. For that matter, I’m mostly longtermist mentally, and most of what I do (partly because of inertia, partly because it’s easier to promote) is saving-children-in-Africa style donations.
Also ‘no, because my intuitions say this is likely to be low impact’, and ‘other’.
But I agree that those four options would be useful—maybe even despite the risk that the person immediately decides to try arguing with the grant maker about how his proposal really is in fact likely to be high impact, beneficial rather than harmful, and totally not confusing, and that the proposal definitely shouldn’t be othered.
I don’t think we have to accede to that at all—it’s not like it’s useful for our goals anyway. What probably happened is that SBF’s money hired consultants, and they just did their job, without supervision from anyone trying to push better epistemics. A reputation for never going negative in a misleading way might be a political advantage, if you can make it credible.
“The following is a backhanded, unfair, insult to write in the immediate days after, but to be show one critique[1]: it reads like the associated account manager (the Google ads sales person whose bonus or promotion depends on volume) got carried away, or someone looked at conventional spending levers and “turned up the knob” to very high levels, out of band[2].”
That sounds about right to me as a description of what happened—I mean, I think it was definitely worth trying (with the main downside being that the particular way SBF tried possibly crowded out other, more effective ways of trying, but mistakes are how you learn), but yeah—it is, if nothing else, well known that you can’t use money to brute-force election results.
I do think the approach of trying to get good local branding is a good idea, though OTOH we also don’t want it to turn into donating lots of money to comparatively low-value local projects—if for no other reason than that it would dilute the brand.
Yeah, I confirmed directly that the refugees weren’t what was driving the Prague problem (though maybe on the margin they helped make it so bad), since last weekend and the weekend following the conference had normal Eastern European prices.
I like the idea, though I think it’s funny that we go from “It’d be helpful to have a snappy name for this view,” to another opaque and easily confused made-up philosophical term. Maybe ‘Helping other peopleism’.
Ummm, I think for me it is believing that for any fixed number of people with really good lives, there is some sufficiently large number of people with lives that are barely worth living that is preferable.
I’m wondering if this is prompted by all of the hotels and hostels in Prague being bizarrely packed on the exact weekend of EAGx this year. I could not figure out, though, just what was happening in Prague to cause this, and fortunately I have a relative who lives in Prague whose apartment I can crash in.
Maybe. I mean, I’ve been thinking about this a lot lately in the context of Phil Torres’s argument about messianic tendencies in longtermism, and I think he’s basically right that it can push people towards ideas that don’t have any guardrails.
A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on Earth.
That, after all, is what shutting up and multiplying tells you—so the idea that longtermism makes luddite solutions to X-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short-termist might feel about them, seems right to me.
Of course there is also the other direction: if there were a 1/1 trillion chance that activating this AI would kill us all, and otherwise it would be awesome, but if you waited a hundred years you could have an AI with only a 1/1 quadrillion chance of killing us all, the short-termist pulls the switch, while the longtermist waits. Also, of course, model error: any estimate where someone actually uses numbers like ‘1/1 trillion’ for the chance that something in the slightest interesting will happen in the real world is a nonsensical and bad calculation.
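For what it is worth, here is a rough sketch of the shut-up-and-multiply arithmetic behind both thought experiments above. The probabilities are the toy numbers from this comment; the value assigned to a lightcone-spanning future relative to five billion years on Earth is a purely illustrative guess, not anyone’s actual estimate:

```python
# Toy expected-value comparisons for the two thought experiments above.
# Welfare is in arbitrary units; only the relative sizes matter, and the
# lightcone figure is an illustrative guess, not a real estimate.

value_earth_5by = 1.0    # humanity surviving 5 billion years on Earth (baseline unit)
value_lightcone = 1e20   # hypothetical: a glorious transhuman future across the lightcone

# Thought experiment 1: the 99%-extinction gamble vs. guaranteed survival on Earth.
ev_gamble = 0.99 * 0 + 0.01 * value_lightcone
ev_guaranteed = 1.00 * value_earth_5by
print(ev_gamble > ev_guaranteed)  # True: total-utilitarian arithmetic favors the gamble

# Thought experiment 2: switch the AI on now vs. wait 100 years for a safer one.
p_doom_now, p_doom_later = 1e-12, 1e-15
ev_now = (1 - p_doom_now) * value_lightcone
ev_later = (1 - p_doom_later) * value_lightcone  # ignoring the (tiny, on this scale) cost of the delay
print(ev_later > ev_now)  # True by a sliver; a short-termist weighs the hundred-year delay far more heavily
```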
My feeling is that it went a bit like this: people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of LessWrongers came over and convinced (a lot of) them that ‘hey, going extinct is an even biggler deal’, but the name still stuck, because names are sticky things.
Hmmmm, that is weird in a way, but also as someone who has in the last year been talking with new EAs semi-frequently, my intuition is that they often will not think about things the way I expect them to.
Based on my memory of how people thought while I was growing up in the church, I don’t think increasing the number of saveable souls is something that makes sense for a Christian—or even within any sort of longtermist utilitarian framework at all.
Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.
Summoning a benevolent AI god to remake the world for good is the real systemic change.
No, but seriously, I think a lot of the people who care about making processes that make the future good in important ways are actually focused on AI.
A very nitpicky comment, but maybe it does point towards something about something: “What if every person in low-income countries were cash-transferred one year’s wage?”
There is a lot of money in the EA space, but at most 5 percent of the sort of money that would be required for doing that (a quick Google of ‘how many people live in low income countries’ tells me there are 700 million people in countries with a per capita income below roughly 1,000 USD a year, so your suggestion would come with a 700 billion dollar bill. No individual, including Elon Musk or Jeff Bezos, has more than a quarter of that amount of money, and while very rich, the big EA funders are nowhere near that rich). Also, of course, GiveDirectly is actually giving people in low-income countries the equivalent of a year’s wage to let them figure out what they want to do with the money. Of course, they are operating on a small enough scale that it is affordable within the funding constraints of the community.
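The arithmetic behind that bill, as a quick sanity check (same rough figures as above: roughly 700 million people, roughly 1,000 USD per person per year):

```python
# Rough cost of cash-transferring one year's wage to everyone in low-income countries,
# using the approximate figures from the comment above.

people_in_low_income_countries = 700_000_000   # ~700 million people
annual_income_per_person_usd = 1_000           # ~1,000 USD per capita per year

total_cost_usd = people_in_low_income_countries * annual_income_per_person_usd
print(f"Total bill: ${total_cost_usd:,}")      # $700,000,000,000, i.e. about 700 billion dollars

# For scale: total EA funding is, per the comment above, at most ~5 percent of this figure.
```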
I don’t know; the on-topic thing that I would maybe say is that it is important to have a variety of people working in the community, people with a range of skills and experiences (i.e. we want to have some people who have an intuitive feel for big economic numbers and how they relate to each other—but it is not at all important for everyone, or even most people, to have that awareness). But at the same time, not everyone is in a place to be part of the analytic, research-oriented part of the EA community, and I simply don’t think that decision making will become better at achieving the values I care about if the decision-making process is spread out. (But of course the counterpoint is that decision makers who ignore the voices of the people they are claiming to help often do more harm than good, and usually end up maximizing something they themselves care about—which is true.)
Also, and I’m not sure how relevant this is, but I think it is likely that part of the reason why X-risk is the area of the community that is closest to being fully funded is that it is the cause area people can care about for purely selfish reasons—i.e. spending enough on X-risk reduction is more of a coordination problem than an altruism problem.
The main thing, I think, is to keep trying lots of different things (probably even if something is working really well relative to expectations). The big fact about trying to get traction with a popular audience is that you simply cannot tell ahead of time what is good.
It’s all good—what matters is whether we make a (the biggest possible) positive difference in the world, not how the motivational system decided to pick this as a goal.
I do think that it is important for the EA community/system/whatever it is to successfully point the stuff that is done for making friends and feeling high status towards stuff that actually makes that biggest possible difference.