I’m not sure how to word this properly, and I’m uncertain about the best approach to this issue, but I feel it’s important to get this take out there.
Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.
I’m very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company that ultimately increases AI capabilities rather than safeguarding humanity’s future. But this time, we have a real opportunity for impact before it’s too late. I believe this project could potentially accelerate capabilities, increasing the odds of an existential catastrophe.
I’ve already reached out to the founders on X, but perhaps there are people more qualified than me who could speak with them about these concerns. In my tweets to them, I expressed worry about how this project could speed up AI development timelines, asked for a detailed write-up explaining why they believe this approach is net positive and low risk, and suggested an open debate on the EA Forum. While their vision of abundance sounds appealing, rushing toward it might increase the chance we never reach it due to misaligned systems.
I personally don’t have a lot of energy or capacity to work on this right now, nor do I think I have the required expertise, so I hope that others will pick up the slack. It’s important we approach this constructively and avoid attacking the three founders personally. The goal should be productive dialogue, not confrontation.
Does anyone have thoughts on how to productively engage with the Mechanize team? Or am I overreacting to what might actually be a beneficial project?
The situation doesn’t seem very similar to Anthropic. Regardless of whether you think Anthropic is good or bad (I think Anthropic is very good, but I work at Anthropic, so take that as you will), Anthropic was founded with the explicitly altruistic intention of making AI go well. Mechanize, by contrast, seems to mostly not be making any claims about altruistic motivations at all.
You’re right that this is an important distinction to make.
What concerns do you think the Mechanize founders haven’t considered? I haven’t engaged with their work that much, but it seems like they have been part of the AI safety debate for years now, with plenty of discussion on this Forum and elsewhere (e.g. I can’t think of many AIS people who have been as active on this Forum as @Matthew_Barnett has been over the last few years). I feel like they have already communicated their models and disagreements a (more than) fair amount, so I don’t know what you would expect to change in further discussions.
You make a fair point, but what other tool do we have than our voice? I’ve read Matthew’s last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. But what often puzzles me about some AI folks is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?
Perhaps instead of trying to change someone’s moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to ‘common sense morality’ because I’m just not certain enough.
I don’t have strong feelings on how best to tackle this. I won’t have good answers to any questions. I’m just voicing concern and hoping others with more expertise might consider engaging constructively.
Two of the Mechanize co-founders were on Dwarkesh Patel’s podcast recently to discuss AGI timelines, among other things: https://youtu.be/WLBsUarvWTw
(Note: Dwarkesh Patel is listed on Mechanize’s website as an investor. I don’t know if this is disclosed in the podcast.)
I’ve only watched the first 45 minutes, but it seems like these two co-founders think AGI is decades away (e.g. one of them says 30-40 years). Dwarkesh seems to believe AGI will come much sooner and argues with them about this.
I was going through Animal Charity Evaluators’ reasoning behind which countries to prioritize (https://animalcharityevaluators.org/charity-review/the-humane-league/#prioritizing-countries) and I noticed they judge countries with a higher GNI per capita as more tractable. This goes against my intuition, because my guess is that your money goes further in poorer countries. And also because I’ve heard animal rights work in Latin America and Asia is more cost-effective nowadays. Does anyone have any hypotheses/arguments? This quick take isn’t meant as criticism; I’m just informing myself as I’m trying to choose an animal welfare org to fundraise for this week (small, low stakes).
When I have more time I’d be happy to do more research and contact ACE myself with these questions, but right now I’m just looking for some quick thoughts.
Hey Jeroen! I’m a researcher at ACE and have been doing some work on our country prioritization model. This is a helpful question and one that we’ve been thinking about ourselves.
The general argument is that strong economic performance tends to correlate with liberalism, democracy, and progressive values, which themselves seem to correlate with progressive attitudes towards, and legislation for, animals. This is why it’s included in Mercy For Animals’ Farmed Animal Opportunity Index (FAOI), which we previously used for our evaluations and which our current country prioritization model is still loosely based on.
The relevance of this factor depends on the type of intervention being used—e.g., economic performance is likely to be particularly relevant for programs that depend on securing large amounts of government funding. For a lot of programs it won’t be very relevant, and for some a similar but more relevant indicator of tractability could be the percentage of income not spent on food (which we also use), as countries are probably more likely to allocate resources to animal advocacy if their money and mental bandwidth aren’t spent on securing essential needs. (Because of these kinds of considerations, this year we took a more bespoke approach when considering the likely tractability of each charity’s work, relying less on the quantitative outputs of the country prioritization framework.)
Your intuition about money going further in poorer countries (everything else being equal) makes sense. We seek to capture this where possible on a charity-by-charity basis in our Cost-Effectiveness Assessments. For country prioritization more broadly, in theory it’s possible to account for this using indices like the OECD’s Purchasing Power Parities (PPP) Index. Various issues have been raised with the validity of PPP measurements (some examples here), which is one of the reasons we haven’t included it to date in our prioritization model, but for next year we plan to explore those issues in more detail and what the trade-offs are.
Hope that helps!
Thank you so much for this elaborate and insightful response, Max! I understand the argument much better now.
Glad this question-and-answer happened!
A meta note that sometimes people post questions aimed at an organization but don’t flag it to the actual org. I think it’s a good practice to flag questions to the org, otherwise you risk:
- someone not at the org answers the question, often with information that’s incorrect or out of date
- the org never sees the question and looks out-of-touch for not answering
- comms staff at the org feel they need to comb public spaces for questions and comments about them, lest they look like they’re ignoring people
(This doesn’t mean you can’t ask questions in public places, but email the org sending them the link!)
Thanks for pointing this out! I wasn’t really sure where my question fell on the axis of “general EA animal welfare knowledge” (ex. prioritizing chickens > cows) to “specific detail about how ACE evaluates charities”. By posting a quick take on the forum, I was hoping it was closer to the former, that I was just missing something obvious and that ACE wouldn’t even have to be bothered. I shouldn’t have overlooked the possibility that it might be more complicated!
Don’t forget to go to http://www.projectforawesome.com today and vote for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!
How does one vote? (Sorry if this is super obvious and I’m just missing it!)
+1. I went to the Effective Altruism Barcelona GiveDirectly video, and the voting link just took me to the GiveWell homepage.
What are some reasons to remain optimistic about the world from an EA perspective? Or how can we keep up with the most important news (ex. USAID / PEPFAR) without drowning in it?
The news is just incredibly depressing. The optimism I once had before the pandemic is just gone. Yeah, global health and development may still continue to improve. And that’s not insignificant. But moral circle expansion? Animal welfare? AI risks?
I’m going to draw an analogy to finance/investments. If I check the level of the stock market every day or multiple times a day, I become acutely aware of increases and decreases. I might feel a rush of adrenaline when the stock market goes up by 2%, and an overwhelming feeling of despair if it drops by 2%. But if I stop checking it frequently, I can “zoom out” and see that the broader trend is upward. It is true that there is a lot of variation on a short timeline, but over decades the trend is quite clearly upward. Like all analogies, this falls somewhat short in a variety of ways, but the idea I want to drive home is that “the news is just incredibly depressing” because we look at the short-term news. We allow ourselves to be emotionally buffeted and battered by what is happening this day or this week rather than paying attention to larger trends. If it really is vital for your job to stay up to date on the latest news, then at least try to keep some perspective: know what is and isn’t within your control, and remember that this too shall pass.
One useful framing can be asking yourself if there is anything you can do to affect this, asking why you care about this particular issue, and asking if there is any purpose/outcome in focusing on it. I think that people dying in a civil war in Yemen is horrible because I detest suffering in general, but I have no influence to affect that at all, and my worrying about it doesn’t serve any purpose. I think that the world will be a worse place if USAID funding is reduced, but there isn’t any benefit to me stressing out about that. There are a million things that I would like to see different in the world, but most of them are very much outside my scope of influence.
EA-aligned video content creators
I made a spreadsheet of all EA-aligned video content creators that I’m aware of. This doesn’t mean they make EA content necessarily, just that they share EA values. If I’ve missed anyone, let me know!
https://docs.google.com/spreadsheets/d/1ukTCN4ADCkTLw9onQO-sTeDQfZQBz3bn_vjFVw6rqTQ/edit?usp=sharing
This was a really cool thing to do!
In case you feel like adding another feature, it might be nice to include an example or two of each channel’s EA-related content in another column. It’s easy to tell how Rational Animations is EA-focused, but I wasn’t sure which content I should look at for e.g. the person whose TikTok account was largely focused on juggling.
Like I said, they don’t necessarily make EA content. I think I’ll add a column specifying whether they do or not.
Responding as per Samuel Shadrach’s suggestion:
Neil Halloran seems like a good addition.
He doesn’t seem to be an EA, yet he’s rigorously writing on some EA-aligned topics.
https://www.youtube.com/channel/UCtbym4p03AxE1vF9QB4wB5A
See here: https://forum.effectivealtruism.org/posts/matte7zzExKaZiTNo/charles-he-s-shortform?commentId=WPnGpLGr88afdsjyc
Added :)
Today we celebrate Petrov Day: the day, 40 years ago now, that Stanislav Petrov potentially saved the world from a nuclear war.
I made a quick YouTube Short / TikTok about it: https://www.youtube.com/shorts/Y8bnqxAbMNg https://www.tiktok.com/@ahappierworldyt/video/7283112331121347873
I’d love to do more weekly coworkings with people! If you’re interested in coworking with me, you can book a session here: https://app.reclaim.ai/m/jwillems/coworking
We can try it out and then decide if we want to do it weekly or not.
More about me: I run the YouTube channel A Happier World (youtube.com/ahappierworldyt) so I’ll most likely be working on that during our sessions.
In case you’re interested in supporting my EA-aligned YouTube channel A Happier World:
I’ve lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn’t reached, you won’t get your money back; instead, it will return to your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the goal.
Manifund fundraising page
EA Forum post announcement
At this point, I’d be willing to buy out credit from anyone who obtains credit on Manifund and applies said credit to this project, if the project doesn’t fund. Hopefully Manifund will find a more elegant solution for this kind of issue (there was a discussion on Discord last week) but this should work as a stopgap.
(Offer limited to $240, which is the current funding gap between current offers and the $2500 minimum.)
An unpolished attempt at moral philosophy
Summary: I propose a view combining classic utilitarianism with a rule that says not to end streams of consciousness.
Under classic utilitarianism, the only things that matter are hedonic experiences.
People with a person-affecting view object to this, but that view comes with issues of its own.
To resolve the tension between these two views, I propose adding a rule to classic utilitarianism that disallows directly ending streams of consciousness (SOC).
This bridges the gap between the person-affecting view and the ‘personal identity doesn’t exist’ view, and tries to solve some population ethics issues.
I like the simplicity of classic utilitarianism. But I have a strong intuition that a stream of consciousness is valuable intrinsically, meaning that it shouldn’t be stopped/destroyed. Creating a new stream of consciousness isn’t intrinsically valuable (except for the utility it creates).
A SOC isn’t infinitely valuable. Here are some exceptions (a rough sketch of the rule as a decision procedure follows this list):
1. When not ending a SOC would result in more SOCs ending (see trolley problem): basically you want to break the rule as little as possible
2. The SOC experiences negative utility and there are no signs it will become positive utility (see euthanasia)
3. Ending the SOC will create at least 10x its utility (or a different critical level)
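To make the rule a bit more concrete, here is a minimal sketch of it as a decision procedure. This is only my own toy formalisation, not a worked-out theory: the function name and inputs are hypothetical, and collapsing utility into single numbers is a simplifying assumption; only the three exception conditions and the 10x threshold come from the list above.

```python
# Illustrative sketch only: a toy formalisation of the proposed side-constraint,
# not a definitive account. All names and inputs are hypothetical simplifications.

CRITICAL_MULTIPLE = 10  # exception 3: "at least 10x its utility" (or another critical level)


def may_end_soc(socs_ended_by_acting: int,
                socs_ended_by_not_acting: int,
                soc_expected_utility: float,
                utility_created_by_ending: float) -> bool:
    """Is directly ending a stream of consciousness (SOC) permitted under the rule?"""
    # Exception 1: acting ends fewer SOCs than refraining would
    # (trolley-problem style cases); break the rule as little as possible.
    if socs_ended_by_acting < socs_ended_by_not_acting:
        return True
    # Exception 2: the SOC's utility is negative with no sign of turning positive
    # (euthanasia-style cases); here collapsed into one expected-utility number.
    if soc_expected_utility < 0:
        return True
    # Exception 3: ending the SOC creates at least the critical multiple of
    # the utility the SOC itself would otherwise have produced.
    if utility_created_by_ending >= CRITICAL_MULTIPLE * soc_expected_utility:
        return True
    # Otherwise the side-constraint forbids ending the SOC, regardless of the
    # plain expected-value calculation.
    return False
```

Everything that doesn’t involve directly ending a SOC is still judged by ordinary expected-value reasoning; the side-constraint only bites in the cases where this function returns False.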
I believe this is compatible with the non-identity problem (it’s still unclear who counts as ‘you’ if you’re duplicated or if you’re 20 years older).
But I’ve never felt comfortable with the teleportation argument, and this intuition explains why (since a SOC is being ended).
So generally this means: making the current population happier (or making sure fewer people die) > increasing the number of people.
Future people don’t have SOCs as they don’t exist yet, but it’s still important to make their lives go well.
Say we live in a simulation. If our simulation gets turned off and gets replaced by a different one of equal value (pain/pleasure wise), there still seems to be something of incredible value lost.
Still, if the simulation gets replaced by a sufficiently more valuable one, it could still be good, hence exception number 3. The exception also makes sure you can kill someone in order to prevent a scenario in which future people never come into existence (for example, someone who is about to spread a virus that makes everyone incapable of reproducing).
I don’t think adding this rule changes the EV calculations regarding increasing pain/pleasure of present and future beings when it doesn’t involve ending streams of consciousness (I could be wrong though).
This rule doesn’t solve the repugnant conclusion, but I don’t think it’s repugnant in the first place. I think my bar for a life worth living is higher than most other people’s.
How I came to this: I really liked this forum post arguing “Making current population happier > increasing amount of people”. But if I agree with it, that means there’s something of value besides pure pleasure/pain. This is my attempt at finding what it is.
One possible major objection: if you give birth, you’re essentially causing a new SOC to eventually be ended (as long as aging isn’t solved). Perhaps this is solved by saying you can’t directly end a stream of consciousness, but you can ignore second/third order effects (though I’m not sure how to make sense of that).
I’d love to hear your thoughts on these ideas. I don’t think these thoughts are good enough or polished enough to deserve a full forum post. I wouldn’t be surprised if the first comment under this shortform would completely shatter this idea.
Reason why I call it a “stream of consciousness”: Streams change over time. Conscious beings do too. They can also split, multiply or grow bigger.
One thing I worry about though: Does your consciousness end when sleeping? Does it end when under anesthesia? These thoughts frighten me.
I added the transcription of my newest video on sentientism and moral circle expansion to the EA Forum post :) https://forum.effectivealtruism.org/posts/2kNeKoCcHAHQRjRRH/new-a-happier-world-video-on-sentientism-and-moral-circle