Working in nuclear power.
Will Kirkpatrick
I appreciate the links, these are exactly what I was looking for! I’ll be browsing through them as I get some time!
It seems like you’re on the “expert-master scale” to my “novice-apprentice” level. Philosophy ultimately won’t ever be much more than a fun hobby of mine, but I’ve always loved diving into some of the deeper stuff. Would you be open to me reaching out and talking with you as I comb through this and come up with questions?
I understand you’re probably busy, so if you have recommendations for some other resources or places to engage people with ideas like this (even if just to read what they write), I would appreciate those too!
I’m afraid that despite professing to be a utilitarian, I’m far from an expert. If you’ve got a moment, could you help me poke a little more into a niche section of this?
Is there some overlap between Hare’s two-level utilitarian framework and what is being proposed in this article? It doesn’t seem like they’re arguing directly for a framework, more explaining why and how they chose their virtues.
I’ve always found virtue ethics interesting; my first foray into reading philosophy on my own was focused on it, and I wouldn’t really have described myself as a utilitarian until my later teens.
When I stumbled across Hare’s arguments, I began to think about ways to reconcile his “archangel and prole” analogy with the way we tend to primarily communicate (at least in my view) via intuitions and stories regarding character virtues.
I’ve done some basic searching, nothing too in-depth. I haven’t really found much engagement. Do you have any ideas for further reading? I’d be interested in reading other examples of what people think of for utilitarian virtues!
Not here to weigh in on the pro/anti nuclear arguments.
I just wanted to thank you for posting and engaging with the forum about your thoughts! I think that this style of post is one of the most useful because it leads to a better understanding for all involved.
I’m sure you’ve all seen the EA Hub post that was put up about a month ago, but it’s worth restating that it’s sometimes hard to find a specific person in EA.
I sometimes use the forum when I’m trying to get in contact with people, primarily by searching their name!
I also filled out the form, so apologies if this is a double entry!
Cotton Bot
Economic growth
Problem: In 2021, a mere 30% of the world’s cotton harvest was gathered by machinery. This means that roughly 70% of the 2021 worldwide supply of cotton was harvested using the same methods as American slaves in the 1850s. A significant amount of the hand harvesting involves forced labor.
Solution: The integration of existing technologies can provide a modular, robust, swarming team of small-scale, low-cost harvesters. Thoughtful system design will ensure the harvesters are simple to operate and maintain while still offering leading-edge technical capability.
How to: The project is focused on developing a single-row robotic harvester that meets key performance parameters and system attributes, allowing operation in the most technologically remote areas of the world with little or no logistics tail. The single-row harvesters can intuitively communicate to swarm-harvest in teams of two to two hundred independent systems.
Background: My father has been the REDACTED for a few years now. We have been talking for years about how much cotton gets wasted in the fields near our house, and this grant strikes me as a perfect opportunity to see if a prototype could be built.
Pluses:
1. He is not an EA (though he is adjacent, mostly from my prodding), so it’s an opportunity to drag a non-EA to work on our projects.
2. He has no desire to develop the business after making a prototype and proving the use case, so the patent would come back to the FTX Future Fund as investors.
3. He has a lot of experience doing exactly this, so he will most likely be able to execute.
Cons:
1. It’s expensive because he intends to hire employees to work on it full time.
2. He isn’t an EA, so he may not perfectly represent EA interests in this (somewhat mitigated because I will also be working on it.)
3. He has no desire to develop the business after making the prototype, so we’ll have to have someone do that (or give away the tech for free.)
His name is REDACTED, and he works at the REDACTED in case anyone wants to look him up!
I had a similar idea, and I think that a few more things need to be included in the discussion of this.
There are multiple levels of ideas in EA, and I think that a red team becomes much more valuable when they are engaging with issues that are applicable to the whole of EA.
I think ideas like the institutional critique of EA, the other heavy tail, and others are often not read and internalized by EAs. I think it is worth having a team that makes arguments like this, then breaks them down and provides methods for avoiding the pitfalls pointed out in them.
Things brought up in critique of EA should be specifically recognized and talked about as good. These ideas should be recognized, held up to be examined, then passed out to our community so that we can grow and overcome the objections.
I’m almost always lurking on the forum, and I don’t often see posts talking about EA critiques.
That should change.
Led to personal lifestyle changes, bought an air purifier and gave them as gifts to friends and family.
Glad I’m not the only one who sees it! I’m a low-risk style investor, but I’ve sold everything I have and I’m doing cash-covered put spreads. We’ll see how it all turns out.
Thanks for sharing your thoughts.
I don’t know if altruistic, truth seeking, and self aware are all necessary requirements, though. It seems to me that we’re never going to convince the vast majority of people to have the excited attitude about EA that most of us have now. Maybe the right focus of an “altruism” meme like this should be on spreading the first two: altruistic and truth seeking.
Self awareness seems almost contrary to the idea of a meme like this, given that it relies on spreading without too much questioning. Ideas with altruistic frameworks have done well in the past (e.g., the ALS Ice Bucket Challenge), but I don’t know how you would go about including a second idea in the existing matrix of a meme like that.
A scientific approach to memetics, I love our weird ideas!
Keep up the good posts Owen!
I’ve actually had some experiences with things like this as well. I first got into meditation by having someone hypnotize me, as an example.
I think that most things like this have a little bit of truth to them, but because there’s so much extra attached to the concepts, it’s hard to separate them out.
To use a personal example, the other day I was wiki diving and I discovered chaos magic (link below.) I proceeded to pretty much immediately make a sigil. I don’t believe in chaos magic by any means, I really think this is just an application of the placebo effect/some positive thinking to my wall. An example of the “nugget of truth” that I was talking about.
But it was fun, so I did it.
https://en.wikipedia.org/wiki/Chaos_magic
All this said, I don’t think this type of activity really has a place in EA. It’s just something that I thought was kind of crafty and fun to burn a bored afternoon on. With regards to this forum though I would recommend posting things like this somewhere else. EA is really focused on trying to do our best, which means that people tend to dislike that type of science-adjacent thinking.
I’ve never really gone looking for groups devoted to practicing or debunking ideas like this, really just something I see every now and then in my endless wiki reading. I do think it would be interesting if you could try to pull those “nuggets” out of the ideas though, because it is an interesting way to look at stuff like that.
Typing up and talking about how to use that kind of information might be kind of fun!
I agree with you wholeheartedly! I definitely feel the pressure to narrow down, and it’s hard to keep my “eye on the prize,” so to speak.
I try to remind myself that I’m here to make “this” better, and it doesn’t matter how I do it. So I’ve been trying to diversify my overall look at the world.
I like the list of ideas, I hadn’t considered doing an internship or research project, it’s not something I’m very familiar with, so I’ll have to put a little more thought into it!
I definitely need to sit down and read everything 80,000 Hours has put out; it’s pretty good advice (career and life!)
I’m kind of overwhelmed by the number of options I have, so I’ll have to put a lot of thought into it! Luckily for me I’ve got another year between now and when I have to start really making choices. A little time is better than none!
I was one of those kids who was told they were smart and didn’t have to do much in high-school. As a result I got hit pretty hard in the face by the requirement of actually trying in college. Combine this with the fact that I didn’t do well away from a support network and you have a pretty bad downward spiral. I eventually recovered, but boy was it a rough couple of years!
Right now I’m looking at either technical work or more general purpose studying:
The difference between those falls roughly along the engineering/computer science versus economics/business divide.
I’m currently thinking that because I already have a background in engineering-type work, getting an economics/business degree to round myself out might be a good choice.
I’ll throw myself out there!
I’ve always thought of myself as most likely an earn-to-give type person, but I’m looking at starting college in the next year or so, and I realized that I’m not a bad candidate for some really important-sounding colleges. (E.g., I imagine Oxford is a long shot, but it’s not unimaginable.)
EA seems to be talent constrained in a lot of ways, so if I get into a good college, should I go into direct work? And if so, what degree is most applicable?
Of note: I’m not turned off by the relative hardness of the degree to earn. So stick me in whatever hellish degree program turns out the best people for the job!
Previous experience:
2 years college with bad grades (I didn’t like it)
6 years naval nuclear experience as a reactor operator.
Anyone financially strapped? PM me and I’ll venmo you cash to cover it!
Does anyone know how the $25 credit is awarded? I.e., is it applied directly to the donation you made, or is it credited to the account that makes the donation?
This does sound like one of those rare cases where a little effort can mean a lot of impact, where would you recommend we focus our time and funds?
What’s limiting you and how can I help?
Become a crisis counselor.
Crisis Text Line is a non-profit organization devoted to providing someone to talk to when you really need one. Typically, as a crisis counselor you will log on and join the “queue” of people waiting to talk to someone who needs it. When people feel overwhelmed (in crisis), they’ll text in; those texts are sent directly to the web browser of the person next in line and pop up as a chat box.
Pros:
Immediate ability to help someone in need: delay times of as little as 10 minutes, including the time it takes you to get out your computer.
Can be done from your house, directly from your computer.
Very emotionally satisfying: There aren’t many places where you can actively talk someone out of a panic attack, off the ledge, or just chat with someone who needs it.
Awesome swag: Used in this instance to mean related clothing, coffee mugs, etc. My favorite hoodie is my 200 hour hoodie from CTL!
Good training: certification as a crisis counselor is surprisingly good training for an online course. I’ve considered recommending it to EAs in general for this reason. (Also empathy building)
Hours track directly: so reporting any volunteer service is very easy with this (good for resumes and similar things.)
Good support network and chat on the platform.
Cons:
Very emotionally demanding/frustrating: oftentimes you’ll be upset about how others handle their problems. I put this first because I struggle with this most of all; I want to just shake people and tell them how to get their life together. But as the CC, that’s not your role (you’ll learn more about this in training.)
Requires “decent” internet connection.
“30 hour” course at the beginning: took me between 6 and 10 hours to complete, so it’s not that bad.
Background check (not crazy rigorous, but it is a text line)
Not super effective: at most you can achieve a 3:1 time ratio (if you’re really good)
Emotionally draining, I list this again for a different reason. Sometimes talking to someone else about their problems can be overwhelming. Suicide, anxiety, depression, and abuse are common problems that you will confront directly. Don’t drown trying to save anyone. I’ve been there, and you’re too important for that. Trust me, seriously, you’re too important for that.
I’m a big fan of Crisis Text Line, though I’m certainly biased because I volunteer there. If you’re looking for a list of mental health resources to peruse, their list of referrals is pretty good.
https://www.crisistextline.org/referrals
You can also text them at 741741 and be connected with someone to text in real time; their goal is to get you to someone in under 5 minutes, though they struggle during high-traffic hours due to volume.
I’m going to be making a post about them (as a volunteer opportunity) at some point in the future, though work is incredibly demanding right now, so it might be a little while.
Perhaps incentive drift is more accurate, but it certainly seems to rob the individual of their agency. I know I am a collection of the circumstances I was raised in, however, that does not mean that I can pass blame onto those around me when I choose something wrong.
Perhaps the choice between the two words is a difference between Instrumental and Value rationalistic choices, where a Value rationalist would prefer the term "Incentive drift" because it more accurately describes the reality of this “drift,” while an Instrumental rationalist would prefer the term "Value drift" because it is more likely to result in individuals taking precautions, and therefore a better long-term outcome for EA as a whole.
As I am an Instrumental rationalist, I believe that sticking with the term “Value drift” would place the emphasis on the individual in circumstances where it matters. We could then use the term “Incentive drift” to refer to the overall effect that different features of our community have on people. (Thus enabling us to retain the benefits of its use to describe effects on the community.)
For example, the lack of “Praise” as you refer to it in your link is something that has pushed many individuals away from effective altruism and rationality in general. To use the new word, it causes incentive drift away from EA.
Value drift is a much more individual term to me. The major fear here is no longer contributing to those things that I previously considered valuable. This might be a result of incentive drift, but it is my values that have changed.
Regardless of whether my thoughts are accurate, thank you for taking the time to post today. These are the kinds of posts that keep me coming back to the EA community and I appreciate the time and effort that went into it.
TLDR because I got long-winded: If you ever find yourself planning to commit some morally horrible thing in the name of a good outcome, stop. Those kinds of choices aren’t made in the real world, they are a thought exercise (normally a really stupid one too.)
Long version:
Sorry that you got downvoted hard, keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagrees are justified, the downvotes are probably not (I’m assuming this is a legit question.)
I’m constantly looking to learn more about ethics, philosophy, etc., and I recently got introduced to this website: What is Utilitarianism? | Utilitarianism.net, which I really liked. There are a few things that I disagree with or feel could have been explored more, but I think it’s overall good.
To restate and make sure that I understand where you’re coming from, I think that you’re framing the current objections like a trolley problem, or its more advanced version, the transplant case (addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net, second paragraph under “General Ways of Responding to Objections to Utilitarianism”). If I were going to reword it, I would put it something like this:
“When considered in large enough situations, the ideal of precommitment would be swamped by the potential utility gains for defecting.”
This is the second response commonly used in defense of the utilitarian framework “debunk the moral intuition” (paragraph 5 in the same chapter and section.)
I believe, and I think most of us believe that this isn’t the appropriate response (to this situation) because in this case, the moral intuition is correct. Any misbehavior on this scale results in a weaker economic system, harms thousands if not millions of people, and erodes trust in society itself.
A response you might think would be something like “but if the stakes were even higher.”
And I agree, it would be pretty ridiculous if, after the Avengers saved NYC from a Chitauri invasion, someone tried to sue the Hulk for using his car to crush an alien or something. We would all agree with you there: the illegal action (crushing a car) is justified by the alternative (aliens killing us all.)
The problem with that kind of scale, however, is that if you ever find yourself in a situation where you think “I’m the only one that can save everyone, all it takes is ‘insert thing that no one else wants me to do.’” stop what you’re doing and do what the people around you tell you to do.
If you think you’re Jesus, you’re probably not Jesus. (or in this case the Hulk.)
That’s why the discussions of corrupted hardware and the unilateralist’s curse (links provided by OP) are so important.
For more discussion on this you can look in Elements and Types of Utilitarianism – Utilitarianism.net “Multi-level Utilitarianism Versus Single-level Utilitarianism.”
One must-read section says that “In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians. Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis.”
I would encourage you to read that whole section (and the one that follows it if you think much of rule utilitarianism) as I think one of the most common problems with most people’s understanding of utilitarianism is the single-level vs multi-level distinction.