>They did not enjoying doing so
Typo here.
Out of curiosity, were the lumiqs inspired by Dust in His Dark Materials?
This is great! One minor flaw I noticed is that clicking the “^” to take me back to the footnote reference puts that reference at the top of the page, which means it’s hidden behind the header. I have to scroll up a few lines before I can continue where I left off.
>on the ride
I think this should be “rise”
>we know that the chance of an Earth-impact for asteroids 1-10km in diameter is about 1 in 6,000, and about 1 in 1.5 million for asteroids larger than 10km across
I don’t know how I’m supposed to interpret this statistic without a time frame. Is this supposed to be per century?
>A visual depiction of what it could potentially look like from the ground if the Mosul Dam were to collapse.
This link appears to be broken, it just links back to this page.
For comparison, this analysis finds a 0.4% yearly risk, which is in line with the EA survey and other estimates I’ve seen, so I’m strongly inclined to think that the 0.1%-1% order of magnitude is the correct place to be.
Any atom that isn’t being used in service of the AI’s goal could instead be used in service of the AI’s goal. Which particular atoms are easiest to access isn’t relevant; it will just use all of them.
Just pick a human to upload and let them recursively improve themselves into an SAI. If they’re smart enough to start out with, they might be able to keep their goals intact throughout the process.
(This isn’t a strategy I’d choose given any decent alternative, but it’s better than nothing. Likely to be irrelevant though, since it looks like we’re going to get AGI before we’re even close to being able to upload a human.)
>multiple steps to be taken at simultaneously
Typo
I have the opposite issue with my Macbook: The screen brightness settings range only from “bright” to “extremely bright”. When I’m using it in a dark room I’d like to be able to dim the screen down to a reasonable level, but that’s simply not possible.
What’s the significance of the two different columns under the heading “Billion tonnes of carbon” in the first table? What does it mean for the number to be in one or the other?
But anecdotally, many EAs still feel uncomfortable quantifying their intuitions and continue to prefer using words like “likely” and “plausible” which could be interpreted in many ways.
This issue is likely to get worse as the EA movement attempts to grow quickly, with many new members joining who bring varied backgrounds and perspectives on the value of subjective credences.
Don’t take this as a serious criticism; I just found it funny.
I’ll just note that I have a prediction market on this here, which is currently at a 7% chance of some prominent event causing mainstream AI capabilities researchers to start taking the risk more seriously by 2028.
Thank you for posting this. I haven’t read through the whole thing yet, and I don’t necessarily agree with it, but I think it’s important that people feel comfortable expressing their opinions here. The fact that this has gotten −8 votes within minutes of posting is something I find concerning, as I doubt those people have even had time to read and process what you said before voting, and I suspect they’re voting based on anger and groupthink. I hope the community will be able to have a productive conversation in these comments.
Yeah, reading further, I definitely don’t agree with a lot of these claims. But the fact that I feel like I have to post this clarification in order to avoid getting downvoted myself is something I think needs to be talked about. The original post is now down to −15, and I haven’t even finished reading it.
And on a personal note, I aspire to create a lot of value for the world, and direct it towards doing lots of good. Call me overconfident, but I expect to be a billionaire someday. The way EA treats SBF here sets a precedent: if the EA community is happy to accept money when the going is good, but then is ready to cut ties once the money dries up… you can guess how excited I would be to contribute in the first place.
This is a weird paragraph. If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure? It kinda sounds like your goal is social status among the EA community.
This isn’t to say that you don’t have a good point. If people are donating to EA because they want social status, that’s still money going towards good causes, and perhaps we should reward them for that in order to encourage more people to do so. But I’d have a hard time calling that “altruistic behavior” on their part.
This seems a bit naive to me. Most big companies come up with some generic nice-sounding reason why they’re helping people. That doesn’t mean the people in charge honestly believe that; it could easily just be marketing.