Yeah, reading further, I definitely don’t agree with a lot of these claims. But the fact that I feel like I have to post this clarification in order to avoid getting downvoted myself is something I think needs to be talked about. The original post is now down to −15, and I haven’t even finished reading it.
Isaac King
I can tell you why I downvoted it.
Cryptocurrency doesn’t actually work
False; it works just fine. It’s a token that can’t be duplicated and that people can send to each other without any centralized authority.
and only is there for scams and fraud.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.
Not surprising that FTX collapsed.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.
Basically, the comment is just your standard “cryptocurrency bad” take: it makes no attempt to justify its claims, and says little beyond expressing, in an inflammatory way, that the author doesn’t like cryptocurrency.
And on a personal note, I aspire to create a lot of value for the world, and direct it towards doing lots of good. Call me overconfident, but I expect to be a billionaire someday. The way EA treats SBF here sets a precedent: if the EA community is happy to accept money when the going is good, but then is ready to cut ties once the money dries up… you can guess how excited I would be to contribute in the first place.
This is a weird paragraph. If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure? It kinda sounds like your goal is social status among the EA community.
This isn’t to say that you don’t have a good point. If people are donating to EA because they want social status, that’s still money going towards good causes, and perhaps we should reward them for that in order to encourage more people to do so. But I’d have a hard time calling that “altruistic behavior” on their part.
Any atom that isn’t being used in service of the AI’s goal could instead be used in service of the AI’s goal. Which particular atoms are easiest to access isn’t relevant; it will just use all of them.
Very reasonable! I understand you feel like you have to walk a fine line in order to not trigger social disapproval of your words; I think that’s bad, and to be clear, I did not mean to make it seem like I disapproved of your comment. I wish EA could be a place where everyone felt comfortable speaking naturally without having to add a bunch of disclaimers.
Oh, I agree. Arguments of the form “bad things are theoretically possible, therefore we should worry” are bad and shouldn’t be used. But “bad things are likely” is fine, and seems more likely to reach an average person than “bad things are 50% likely”.
I have the opposite issue with my Macbook: The screen brightness settings range only from “bright” to “extremely bright”. When I’m using it in a dark room I’d like to be able to dim the screen down to a reasonable level, but that’s simply not possible.
Just pick a human to upload and let them recursively improve themselves into an SAI. If they’re smart enough to start out with, they might be able to keep their goals intact throughout the process.
(This isn’t a strategy I’d choose given any decent alternative, but it’s better than nothing. Likely to be irrelevant though, since it looks like we’re going to get AGI before we’re even close to being able to upload a human.)
I didn’t mean it to be evidence for the statement, just an explanation of what I meant by the phrase.
Do you disagree that most people value that? My impression is that wireheading and hedonium are widely seen as undesirable.
Yeah, most of the p(doom) discussions I see taking place seem to be focusing on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like “current LLMs will not get to AGI, but actual AGI will probably be hard to align”, so they may give a high p(doom before 2100) and a low p(doom before 2030).
But anecdotally, many EAs still feel uncomfortable quantifying their intuitions and continue to prefer using words like “likely” and “plausible” which could be interpreted in many ways.
This issue is likely to get worse as the EA movement attempts to grow quickly, with many new members coming in with varied backgrounds and perspectives on the value of subjective credences.
Don’t take this as a serious criticism; I just found it funny.
multiple steps to be taken at simultaneously
Typo
Downvoting as you seem to have not read or chosen to ignore the first section; I explain in that section why it would matter less to torture a copy. I can’t meaningfully respond to criticisms that don’t engage with the argument I presented.
Isn’t that what the strong upvote is for?
Thank you for posting this. I haven’t yet read through the whole thing, and I don’t necessarily agree with it, but I think it’s important that people feel comfortable expressing their opinions here. The fact that this has gotten −8 votes within minutes of posting is something I find concerning, as I doubt those people have even had time to read and process what you said before voting, and I suspect they’re voting based on anger and groupthink. I hope the community will be able to have a productive conversation in these comments.