Working in healthcare technology.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help “neartermist” causes.
Isn’t every moral theory based on assumptions that X is better than Y, around which some model is then built?
No, that’s not what I think. I think it’s rather dangerous, and probably morally bad, to seek out “negative lives” in order to stop them. And I think we should not interfere with nature in ways we do not really understand. The whole idea of wild animal welfare seems to me not only morally unsupported but also absurd, and probably a bad thing in practice.
If I somehow ran into such a dog and decided the effort to take them for an ultrasound etc. was worth it, then probably yes, but I wouldn’t, for example, start actively searching for stray dogs with cancer in order to do that.
In principle, though I can’t say I’ve been consistent about it. I supported ending our family dog’s misery when she was diagnosed with pretty bad cancer, and I still stand behind that decision. On the other hand, I don’t think I would ever apply this to an animal one has had no interaction with.
On a meta level, and I’m adding this because it’s relevant to your other comment: I think it’s fine to live with such contradictions. Given our brain architecture, I don’t expect human morality to be translatable to a short and clear set of rules.
I assume you’re looking for a rational explanation, but this is based on personal experience: I think my life with constant chronic pain has more negative experiences than positive ones, yet I have decided I should keep on living.
I don’t think there’s such a thing as a negative life.
While I didn’t karma-vote on the main post, I downvoted this comment because I think the idea of net-negative lives for naturally occurring creatures is not only false but actively harmful.
I don’t think this is a point against valuing animal lives (to some extent) so much as a point against utilitarianism, and with that I agree. I didn’t downvote, because a detailed calculation is not harmful in itself, but reaching these kinds of conclusions is probably the point at which to acknowledge that pure utilitarianism might be a doomed idea.
Vote power should scale with karma
It’s OK to give users with very low karma less power, but beyond that, this reflects EA’s mistaken idea that if someone has read or thought a lot about something, they necessarily understand it better.
Fifteen months later, I see Ezrah has updated his post to say his views have changed, and so have mine. I think you were basically right. Not about the “pro-Israel propaganda talking points”, because I believe Ezrah was genuine; but you were right, even back then, about the urgent need for a ceasefire.
In the turmoil following the Oct 7 massacre, I was far too optimistic about the possibility of the Israeli war effort being guided by restrained and relatively benign figures. It took me another couple of months after the post to start protesting for a ceasefire myself, and a few months more to basically give up. Israel breaking the ceasefire a few weeks ago was the final straw.
I only learned from this post that Moskowitz has left the forum, and it makes me somewhat sad. On the one hand, I’m barely on the forum myself, and I might have made the same decision in his position. On the other hand, I thought it very important that he was participating in the discourse about the projects he was funding, and now both avenues of talking with him (through DEAM and the forum) are gone. I’m not sure these were the right platforms to begin with, but it’d be nice if some other public platform like that existed.
Interesting, I’ve lived in Haifa my whole life and never heard of it.
Israeli-daycared
That’s a new one. What does it mean?
I think it was “will replace” when I wrote the comment, but now it’s “must replace”? If so, it’s better now.
A positive suggestion, but the post’s title is confusing
Zero effect is not the worst case.
Upvoted because I’m glad you answered the question (and didn’t use EA grant money for this).
Disagreevoted because, as an IMO medalist, I don’t think science olympiad medalists are really such a useful audience, nor do I see any value in disseminating said fanfiction to potential alignment researchers.
Personally, I don’t believe in “trusted person” as a concept. I think EA has had its fun trying to be a high-trust environment where some large things are kept private, and it backfired horribly.
I’ll take <agree> <disagree> votes to indicate how compelling this would be to readers.
That was the aim of my comment as well, so I do hope more people actually vote on it.
I was initially impressed and considered donating to the fund in the future, but then I noticed the ~$300K grant without a public report. I can’t see myself donating to a fund that doesn’t say what it’s doing with almost 30% of its disbursed funds.
Yes, but if at some point you find out, for example, that your model of morality leads to the conclusion that one should kill all humans, you’d probably conclude that your model is wrong rather than actually go through with it.
It’s an extreme example, but at bottom every model is an approximation stemming from our internal moral intuitions: be it that life is better than death, happiness better than pain, satisfying desires better than frustrating them, or that following God’s commands is better than ignoring them, and so on.