Probably, yeah. But that seems hard to square with a consistent theory of moral value, given that there’s a continuum between “good” and “bad” experiences.
I would add to #2 that the number of shrimp being farmed is at least as relevant as brain size. The total number of distinct experiences is surely still quite large in normal human terms, but could be small relative to the massive number of shrimp in existence.
I didn’t mean it to be evidence for the statement, just an explanation of what I meant by the phrase.
Do you disagree that most people value that? My impression is that wireheading and hedonium are widely seen as undesirable.
On building Omelas for shrimp; the implications of diversity-oriented theories of moral value for factory farming
How well do you think EA handled the FTX scandal?
Yeah, I don’t do it on any non-LW/EAF post.
Yeah, most of the p(doom) discussions I see seem to focus on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like “current LLMs will not get to AGI, but actual AGI will probably be hard to align”, so they may give a high p(doom before 2100) and a low p(doom before 2030).
Oh, I agree. Arguments of the form “bad things are theoretically possible, therefore we should worry” are bad and shouldn’t be used. But “bad things are likely” is fine, and seems more likely to reach an average person than “bad things are 50% likely”.
Stop talking about p(doom)
Isn’t that what the strong upvote is for?
The fact that this comment is at −160 karma is frankly shocking to me. Trust is important in any community. Owen and anon had established an atypical norm of honesty and forthrightness about their thoughts and emotions, and people appear to be dismissing that entirely, framing the norm being followed as somehow inappropriate. That makes me expect that any agreements I might make with another EA will likely be broken as soon as it's socially advantageous to do so. I enjoy and appreciate the EA community because, at least up until recently, people here seemed committed to caring about what's true and what will maximally benefit the world, not jumping on the social shaming bandwagon. I sincerely hope I'm misunderstanding what's going on here, because if EA really is giving up its epistemic integrity, as it looks to me like it is, that indicates a much darker future than I expected.
(It’s theoretically possible that all the downvotes are people disagreeing with the last paragraph only, and I think disagreement with that one is indeed much more justified. But the first four paragraphs are clearly the main point of this comment, and if someone agrees with those and disagrees with the last one, I don’t think they should be downvoting the comment overall.)
I can tell you why I downvoted it.
Cryptocurrency doesn’t actually work
False, it works just fine. It's a token that can't be duplicated and that people can send to each other without any centralized authority. (I sketch the non-duplication idea at the end of this comment.)
and only is there for scams and fraud.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically, they were cryptography nerds who wanted to make something cool, or libertarians with overly idealistic visions of the future.
Not surprising that FTX collapsed.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed; then I'd take them more seriously.
Basically, the comment is just your standard “cryptocurrency bad” take, with no attempt to justify its claims, and it doesn't say much of anything beyond expressing, in an inflammatory way, that the author doesn't like cryptocurrency.
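Here's a toy sketch of what I mean by “can't be duplicated” (the names and structure here are my own invention; a real cryptocurrency replaces this trusted ledger with digital signatures and distributed consensus, but the bookkeeping idea is the same):

```python
# Toy illustration only, not a real cryptocurrency: a token is a single
# ownership record that can be reassigned but never copied. Real systems
# replace this trusted dict with signatures plus distributed consensus.

class Ledger:
    def __init__(self):
        self.owner_of = {}  # token_id -> current owner

    def mint(self, token_id, owner):
        if token_id in self.owner_of:
            raise ValueError("token already exists; cannot duplicate it")
        self.owner_of[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        if self.owner_of.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owner_of[token_id] = recipient  # reassigned, never copied


ledger = Ledger()
ledger.mint("coin-1", "alice")
ledger.transfer("coin-1", "alice", "bob")

try:
    ledger.transfer("coin-1", "alice", "carol")  # alice already spent it
except ValueError as err:
    print("rejected:", err)  # the double spend is refused
```

The point is just that “sending” a token means updating a single shared record, so there's no way to spend the same token twice; the hard part, which the sketch skips, is getting everyone to agree on that record without a central authority.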
Very reasonable! I understand that you feel you have to walk a fine line to avoid triggering social disapproval; I think that's bad, and to be clear, I didn't mean to make it seem like I disapproved of your comment. I wish EA could be a place where everyone felt comfortable speaking naturally, without having to add a bunch of disclaimers.
I just wanted to mention that this comment tripped my “bravery debate” detector. I still upvoted it because honestly the bravery debate framing seems correct here, and I said something similar in my own comments earlier. But then again, everyone who engages in bravery debates thinks their framing is accurate. So let’s be careful not to give posts additional weight just because they’re speaking against majority EA opinion.
I see it’s now up to +18, which is promising. That implies that people who vote without fully reading the post are more likely to downvote than to upvote.
A summary of sorts is being compiled here:
[Question] What are some high impact short-term actions for people who don’t want to make long commitments?
And on a personal note, I aspire to create a lot of value for the world, and direct it towards doing lots of good. Call me overconfident, but I expect to be a billionaire someday. The way EA treats SBF here sets a precedent: if the EA community is happy to accept money when the going is good, but then is ready to cut ties once the money dries up… you can guess how excited I would be to contribute in the first place.
This is a weird paragraph. If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure? It kinda sounds like your goal is social status among the EA community.
This isn’t to say that you don’t have a good point. If people are donating to EA because they want social status, that’s still money going towards good causes, and perhaps we should reward them for that in order to encourage more people to do so. But I’d have a hard time calling that “altruistic behavior” on their part.
Yeah, reading further, I definitely don’t agree with a lot of these claims. But the fact that I feel like I have to post this clarification in order to avoid getting downvoted myself is something I think needs to be talked about. The original post is now down to −15, and I haven’t even finished reading it.
Downvoting, as you seem either not to have read the first section or to have chosen to ignore it; I explain there exactly why it would matter less to torture a copy. I can't meaningfully respond to criticisms that don't engage with the argument I presented.