Disentangling “nature.”
It is my favorite thing, but I want to know its actual value.
Is it replaceable? Is it useful? Is it morally repugnant? Is it our responsibility? Is it valuable?
“I asked my questions. And then I discovered a whole world I never knew. That’s my trouble with questions. I still don’t know how to take them back.”
EcologyInterventions
The reason for the enforcement of style within the forum is not to keep out the wisdom and experience of ordinary people but, in the long run, to increase it. It seems a bit backwards that keeping posts out would increase the amount of posts, but consider this:
If the forum were full of spam about miracle cures and flat earth, few people would think it’s a good place to post their ideas about agricultural reform.
If the forum had lots of good ideas but they were all 1000-page books, few people could read them, and commenters could not discuss one point together since there would be dozens to respond to.
If the forum had lots of good ideas but they were all tweets, few nuanced ideas could get discussed.
I also think very few ideas will be missed because I don’t think it’s too steep of a barrier to ask people to write their main points in a few sentences. I think it is still inclusive even of people with very different backgrounds.
If people had to sort through unrelated poetry, personal rants, and many repeats of the same old ideas, the good points would be difficult to find. Ironically, those people/ideas would get drowned out if there were hundreds of posts without much content. So we ask everyone to be as concise as possible.
Encouraging concise posts, encouraging friendliness, and encouraging criticism are all for the sake of keeping the discussion productive and moving forward.
I think we’ll have to agree to disagree a little bit here, but we agree on the central bit: new evidence must be considered on its own merits, and scientific conclusions must be accepted, however strange and distasteful they are.
But let me share my favorite example of this problem in science:
“For no bias can be more constricting than invisibility—and stasis, inevitably read as absence of evolution, had always been treated as a non-subject. How odd, though, to define the most common of all palaeontological phenomena as beyond interest or notice! Yet paleontologists never wrote papers on the absence of change in lineages before punctuated equilibrium granted the subject some theoretical space. And, even worse, as paleontologists didn’t discuss stasis, most evolutionary biologists assumed continual change as a norm, and didn’t even know that stability dominates the fossil record.”
Stephen Jay Gould & Niles Eldredge, “Punctuated Equilibrium Comes of Age,” 1993
I am not familiar with this particular domain, although I know what utilons are, so uh… if this was meant for me, this was not immediately convincing? Or elucidating??
Play by play of my gut reactions: (this is for the sake of imagining what strangers might think, not meant to be taken as serious criticism)
“Negative utilitarians:” okay, this is some kinda obscure philosophy thing, isn’t it. I’d probably skip it if this were interrupting my fun TikTok videos, but I want to know what an EA TikTok looks like.
:\ “Graph that doesn’t illustrate anything I understand immediately” okay, this is about math, hopefully that’s all I need to get.
:\ “1 unit of suffering, 1 unit of happiness” okay, simple, cool, let’s figure things out!
:D “1 unit of suffering per 2 units of happiness” uh, kinda weird, but okay. Where does this lead?
:) “elementary particles” ???
o_0 “elementary particles, happy brain, sad brain, elementary particles, therefore measure” NO. This is dumb, just jargon obfuscation of nonsense.
D< Afterwards (still stream-of-consciousness reaction): What does introducing “elementary particles” have to do with measuring the amount of happiness and suffering? That doesn’t solve anything! Why are we trying to use the same set of particles to create a happy brain and a sad brain? Are you saying that if it takes more particles to make a happy brain then it’s worth more sad brains? Isn’t it the arrangement that matters? What? What???
>:/ I assume there is a lot more to what you are getting at and that you have a very good theory here! But this part didn’t capture the vital bit I need to understand the basic concept and why it works (or clearly understand what you are driving at). I would say it needs some rephrasing; maybe the context isn’t as necessary for stating your concept? “Happiness is hard to measure” might be enough?
Here is a post of EA reasons not to pursue political action and how to do political action well. Check out the electoral politics tag for some of the political efforts EA has done and how they turned out!
From what I understand: government money is notoriously inaccessible. Often its allocation can only be changed by a long process of approvals, or by people inside organizations who generally have no interest in changing things, because stirring the pot would enrage everyone else in the department or their electoral base and possibly break quite a lot of informal arrangements that keep things running. There is almost no incentive to make things better and quite a lot of disincentive for the power-wielding individuals to change anything.
A lot of EA energy then moved toward fixing that by improving institutional decision-making processes. This has led to a lot of discussion of better voting systems (approval, quadratic), prediction markets, impact certificates, and other things. These are being tried and deployed.
EA tends to see government intervention and diplomacy as scoring one out of three: Important, but not Neglected or Tractable, with some exceptions.
I’m not terribly acquainted with this area and hopefully someone else who knows more about this can weigh in.
The main concept of “what if, instead of only increasing resources, everyone physically needed fewer resources due to biology” blew my mind the first time I encountered it. It warms my heart to see it appear again.
More seriously: tallness seems to cause heart and spine issues, and seems to have no visible genetic asymptote until we run into awful, deadly issues from height. I’m slightly worried that we’ll keep growing until it becomes a nasty issue, but keep pressing further into tallness because it’s sexually selected for.
Actually, never mind. Upon investigation, Our World in Data seems to think average height was primarily nutrient-driven and is ending its climb, plateauing as the possible benefits of nutrition max out. Also, apparently ancient people were as tall as we are today. I had no idea.
From Wikipedia. Turns out we only just caught up to our ancestors. This really makes agriculture and cities look horrendous. What happened in 2000 BCE with that spike? Smaller people don’t seem to have disadvantages except the unwinnable comparative placement from culture (not everyone can be tall), and if we could reroute that biological energy to health, lifespan, and intellect instead... it sounds like a win to me. Why not rationally laud the luck of the people who are biologically more efficient and likely have longer lives? With this in mind I somewhat wish I were shorter.
A butterfly idea is in early stages. It needs creative input, branching out in possibility space, and expansion.
Most forum posts, by contrast, are here for critique, hardening, and winnowing down.
Basically three numbers: nets cost about $2 each, at a rate of roughly 600 nets per life saved, or 750 cases of malaria prevented. I recall comparing it with the cost of saving a life in an American hospital. All the links I can find on The Life You Can Save, Giving What We Can, and GiveWell are way too detailed, so it might have been the Against Malaria Foundation description itself.
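A quick back-of-envelope sketch from those remembered figures (they are what I remembered, not GiveWell’s current estimates, and reading the 750 cases as coming from the same 600 nets is my assumption):

```python
# Back-of-envelope from the three remembered numbers above.
# Illustrative only; not GiveWell's current cost-effectiveness estimates.
cost_per_net = 2       # USD per bednet (remembered figure)
nets_per_life = 600    # nets distributed per death averted (remembered figure)
cases_per_life = 750   # cases averted alongside that death (assumed: same 600 nets)

cost_per_life = cost_per_net * nets_per_life    # ~$1,200 per life saved
cost_per_case = cost_per_life / cases_per_life  # ~$1.60 per case averted

print(f"~${cost_per_life:,} per life saved, ~${cost_per_case:.2f} per case averted")
```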
GiveWell’s charity analysis. Thorough, including counterarguments, a focus on net effectiveness, and such a variety of philanthropic choices I had not heard of before. It allowed me to trust that donating would actually be worthwhile, and to actually feel okay asking other people to donate too.
The CFAR handbook has improved my ability to be intentional and to choose things that are effective in my daily life and for the world. I know how to break procrastination and make good tradeoffs with my resources.
Open Philanthropy’s Cause Area Exploration and the Future of Life Institute’s Worldbuilding Contest were both invitations to submit my own writing, which got me to analyze and apply EA and has gotten me more deeply involved with teams in EA. Now I spend a ton more time working on EA projects, which I don’t see stopping any time soon…
I just wanted to say I appreciate you writing this, and I agree the world ought to tolerate and celebrate weirdness more than we currently do. Break-out thinking is inconvenient and useless most of the time, but extremely beneficial some of the time. Obviously weird beliefs ought to be put to the test like any other, but we should celebrate them too, and especially safeguard their generation. I have heard we have become less welcoming to unorthodox worldviews, although that may be inaccurate.
A nitpick I have is with your particular examples: spiritual reality and UFOs are both famous and have lots of people researching them. I presume all the good evidence would have already been found, and there isn’t much proof (though for that very reason they are easily parsed, illustrative examples). I’m more positive towards totally unheard-of things.
In principle at least. But I’m not terribly good at judging against the grain even though I wish I were, and I try to be.
So thank you, all you high-openness, low agreeableness, prospecting personalities.
As MakoYass pointed out, this sounds a lot like you are suggesting halting all acquisition of knowledge. While it would handily stop human-created existential risk, I do not think this is possible to implement (as you note, though you don’t go into how to address it). It’s sadly another example of solutions like “develop a global culture of coexistence” which would work, but are not practical.
Your post made me think, and I thoroughly applaud the audacity to conceive “unthinkable” directions of inquiry. It made me reconsider my preconceptions! But I think there are some simple reasons this kind of solution isn’t more prevalent—with the notable exception of scrupulous work to avoid information hazards. (Did you know EA is already working to prevent information hazards?)
Counterpoints I would like to see you address:
It’s hard to know what we don’t know in order to avoid discovering it.
Information is extremely useful.
Information can eliminate risks as well as create them.
Historically, we have suppressed knowledge, incorrectly thinking it would harm us, and this caused great damage.
Other people will discover it, so white-hatting is preferable.

If I had to guess, this was downvoted for the wordiness. Your points are buried instead of stated at the top, requiring someone to puzzle through most of your post to reach them.
Another EA thing: your headers are hooks. This is normally good at drawing a reader in, but the EA forum prefers a summary/conclusion for each paragraph to make it easy to assess—if someone already agrees with your point they can skip it and move to the next paragraph. If they disagree they can read the paragraph and be convinced by your evidence.
For the first 8 sections it sounds like you are suggesting 1) great powers create great risks, 2) information creates great powers, 3) therefore [no additional information acquisition] to stop creating great risks. It’s only in your 9th section that you acknowledge it’s impossible, but then you don’t really provide any direction. What do we do given this knowledge? What does this change?
Your conclusions sit right in between practical folks (who already know knowledge is sometimes bad for society) and theoreticians (who think knowledge is always good; it’s the wielding that is the problem), so you might have gotten downvoted by both. It also isn’t favorable to the Silicon Valley pro-progress culture of “move fast and break things” in every area except existential risk. On the other hand, the average person is too suspicious of scientific progress in my opinion, so I want to encourage a pro-science view generally. I’m pretty far on the side of advancing as fast as we can.
Okay, wordiness:
Compare your version with my pared-down version:

We don’t have the option of turning our backs on knowledge.
The solution to obesity is obviously not to stop eating. Instead we must develop a more sophisticated relationship with food, eating only what is good for our bodies.
The “more is better” relationship with knowledge which has served us so well for so long must now make way for a more sophisticated relationship involving complicated cost/benefit calculations. This will sometimes involve saying no to some new knowledge.
Nuclear weapons prove that the simplistic “more is better” requires updating to meet the existential threats presented by a revolutionary new era.
Obviously I’m hardly one to speak. Just look at how long my comment has gotten. But I think cutting to the meat of your posts will go a long way to making them stronger. I hope this has been helpful feedback.
Unfortunately I have spent too long on this and must go back to spending time reading about biodiversity and ecosystems. :)
Show how we “will have the judgement and maturity to consistently make wise choices about how to manage ever more, ever larger powers, delivered at an ever faster pace, without limit.”
It’s impossible to do anything without mistakes, much less do it forever, much less with wisdom, much less as all of humanity. =p We are aware of our own fallibility, so we (humanity and EA) build systems to catch early warning signs and counteract mistakes before they get out of hand.
“Show how we will never make mistakes” is obviously impossible and does not allow for the many other counterpoints I might have to your post. “There is only one way this could be wrong, and it’s this” is not a very useful discussion for either of us. It suggests you already have your conclusion and will not incorporate input unless it meets your predefined standards. This does not feel like equals working together towards a common goal, but like someone with preconceptions judging who is allowed to contribute and precluding evidence to the contrary. That may not be your intention, but it is how I read your phrasing.
I find it more useful to note my own weakest points, asking whether anyone can find flaws or better frameworks, so I can discover every possible way to improve my worldview and build upon it.
I may not have downvoted under normal conditions, for the reasons you mentioned. Generally I upvote new posters who seem unfamiliar with EA, or leave comments to encourage further engagement. I definitely don’t like to see the votes fall below 0 without explanation (unless it’s damaging to forum etiquette in some way). But three factors made me willing to discourage engagement in its present form:
Two other posts by the same author were made in the span of a few days, both of which seem like they could be worked over more carefully before being placed into one of the limited slots on the front page.
I don’t think this forum is an appropriate space to develop one individual’s philosophy, unless it has something special to contribute… I’m not able to articulate this clearly; please trust that I am struggling to capture something I think is important and have pattern-matched from other people as well. Sorry I cannot do it justice.
Especially not if all these posts are to be made without additional detail/careful consideration. In its current state I think this post is better as a shortform or a blog post while the series is being composed.
The author states they intend it to be a series, and this magnified all the issues I had above. A post or two would be welcome, but an entire series magnifies their impact too far. I was responding to that.
It is not clear in this post how it affects EA. Especially because EA already addresses minds rather than bodies, I think?
As to why I am critical of this post’s quality: I think it’s stating something already familiar to most people on this forum, and further I believe it can be summarized in a few sentences:
We are minds, not bodies. If we had to pick one, most of us would pick keeping our mind over keeping our body. “Me” is a collection of ideas, memories, opinions, personality traits and other symbolic abstractions. “Me” is made of thought.
Given that we are essentially thought, what is thought? (not addressed in this post)
We must study who we are, not just study how to invent and use technology.
It is also not clearly relevant to EA, as it is currently written. Presumably further posts would cover that, but it wasn’t delved into in this post, so I felt comfortable downvoting on those grounds.
My thoughts on the content of this post:
Personally I don’t think it is clear that we must study and know who we are, even though it is one of my favorite activities! It can be terribly useful, but it can also have little influence on other domains. It’s hard to tell if this will lead to something influential or something important but non-consequential.

Postscript:
I will attempt to give similar feedback on the other posts, but I did not feel I had time to do so and was especially discouraged when there were three in a row to try to provide feedback on.

Hope this gives a face to the downvoters. I also get discouraged by lack of engagement and wonder why on earth I was downvoted. But I try to chalk it up to how it takes time to learn all the things involved in EA, and I don’t know what things are obvious to regulars who have worked on this for years. I understand they don’t always have time to walk me through something that is already established elsewhere, even though I don’t always know where the “elsewhere” is.
Try to approach disagreements with curiosity. ;)
I wish you the best in the development and refinement of your philosophy. And all your further conversations!
“The key point is that plenty of knowledge and data will be dismissed, never published, and/or never encountered by the people with funding to effect change (such as EA grant-makers) simply because of its producers’ position within the geopolitics of knowledge.” This is terrible. And I believe EA does care about solving it, though maybe not as much as it should.
“As much as any other part of society, power/knowledge shapes academic research.” Science is not the same as social media, but we are in full agreement that it is subject to influence and bias. I would say EA is highly interested in decreasing that power/influence dynamic.
“EA may be inflicting epistemic violence, imperialism, and coloniality through the solutions it funds and research design it undertakes.” Aid is optional and voluntary. Even so, I believe indigenous ways of thinking and living will be unintentionally damaged and lost by charitable efforts. I believe this is acceptable if the good is greater than the harm, though we may be unqualified to determine the extent of the harm. I don’t think simply consulting and asking how indigenous peoples want to be helped would address your critique? The alternative, as I understand it, is to operate via their ways of being (if that is even possible), and it is unclear how we would speak their language, absorb their morals, understand their lives, and do all this efficiently so as to provide the most benefit. We already recognize that giving directly and cash transfers are some of the most effective ways to assist.
“When common pool resources are attributed monetary value, they become ontologically fungible.” It’s necessary to compare values and prioritize actions. Without trying to estimate values and compare across them, we resort to the art of acting on general principles, and we risk worse, more imbalanced actions. I am open to alternative methods, but I maintain they need to be as universal and grounded in truth as possible, because we are acting for the world and all its cultures. While I am sympathetic that we often steer wrong—for example by favoring legible metrics over unquantifiable unknowns and often asking the wrong questions—we at least acknowledge these problems and are making efforts to combat known dangers. Again, as far as I know, there are no better alternatives in other cultures. We all struggle with this.
“It is colonial for EA to believe that it can know what will be most the effective solution for people in the Global South, without even consulting them.” This is why EA starts with “saving lives,” as it seems to be universally valuable. And EA does consult with the Global South, as shown by GiveDirectly, cash transfers, and many other EA efforts. (Because it works according to measured outcomes, not because it avoids neocolonialism.)
“Through anticipatory philanthropic pledges, the ultrawealthy gain social capital.” Are you saying the world is worse off every time massive donations are made—that they do more damage than benefit? I assume you are not going that far. If all donations stopped, I think the ultrawealthy would still gain social capital from their wealth in other (worse) ways. Even if I agree capitalistic systems are a horrible trap, I’m not sure that altruistic donations have that much to do with perpetuating capitalism. I don’t think capitalism would be closer to falling apart or be revealed as a scam. I don’t think charitable giving especially subverts judgement of societies/cultures/systems. It’s an observed rate of donations. We can compare it to rates of charitable effort under other systems/cultures/situations/regulations. If it’s better, it’s better. If it’s worse, it’s worse.
I am not being as effective as I could be—AI x-risk is real enough that it’s the only thing I ought to be doing. Funding is not a serious obstacle in my life path.
I have skills and interest in ecology and believe it is an area where I can have outsized impact by 1) updating philosophy across the field, 2) improving long-term outcomes by gearing up highly effective projects, and 3) leading by example. (I might be foolish.) Sometimes it seems this is a crowded endeavor and sometimes it seems extremely neglected. In either case it makes me very happy, so perhaps this is simply my selfish life path.
If I had infinite money and no reputational risk, I would not be working at my current ecology-related job, but would somehow be researching and experimenting with ecological outcomes. However, I am learning important things in my current role and researching other people’s ecological experiments in my free time, so it’s close to optimal. I can pursue my altruistic goals more directly in a few years’ time, when I have more experience, alliances, and specific concrete steps ready.
I enjoyed “Arts and Minds: How the Royal Society of Arts Changed a Nation,” about the British Royal Society of Arts. Sadly I don’t know about the other movements/organizations more directly tied to EA, but maybe someone else does?
My gut reaction is that EA wants to convince people for the right reasons and clips seem too short to really engage with that.
Assuming TikTok videos are very good at changing behavior, I have some doubts they would convince people of the reasons and methods that make EA, EA. I worry that if we made a bunch of TikTok videos, and they were successful, they would shortly be overrun by more successful TikTok videos that replicate all the good-sounding stuff without having the core substance. Maybe that’s a silly thing to get hung up on. I don’t feel the same way about YouTube explainers. And I can see a bunch of short clips that snappily intro people to single EA ideas being amazing.
I don’t really have a strong opinion on what the right move is, but I wanted to communicate why I am hesitating.
Total donations, total donors, number of positive articles written, number of EA-adjacent orgs, number of organizations mentioning their DALYs/QALYs? I’m not sure, but those are some ideas.
Just want to circle back around and say that I appreciate your points and have updated to be more in line with your position. I am still unconvinced by your strongest claims, but agree with most of the base assertions. For example I think nicotine is a lot less addictive and abuse is much less harmful than I previously thought. Some minor points that contributed to causing me to re-evaluate:
All the users who had the self-control to stop due to cancer warnings/social stigma did stop, leaving those with worse self-control to continue, making cigarettes look much more addictive on average than they necessarily are.
Comparing the community of users with other communities: highly addicted smokers’ lives are not falling apart, unlike those of users of other addictive substances.
Now my support hinges on a complicated calculus over how many and how bad the most abusive users are, how good and general the positive use is, how cheap combustibles are versus cartridges, and other specifics. But in principle we now agree.
Northeastern USA Wild Plant Identification
Corrections welcome: Yes, please correct my deck!
Comprehensiveness: I think I have 90% of the non-rare forbs. Coverage will vary, since the deck spans a large region. On the plus side, it applies somewhat outside the intended region as well...
Quality: High for Anki! High for its intended personal use: identifying plants as encountered in the wild, e.g. without flowers. Medium for general use (cards could definitely use duplicate photos).
Plants are better learned opportunistically in the field with apps like Seek, but if you want a crash course in Anki—have at it.

Experience: First real deck, and I didn’t realize you could have multiple answers. I’m sure there are more features I missed.
Reviews: Please review my deck!
As far as I can tell, low-level desire is present early in amoebas and moss, and smoothly escalates into higher-level desires in fish, koalas, etc. I don’t think it makes sense to talk about the presence or absence of desire, but maybe about other qualities like agency, selfhood, or need. It’s difficult with any of them...