Aspiring EA-adjacent, trying to push the singularity further away
Pato
it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.
Where do you get this from?
Also, this:
have < ~ 0.1% chance of X-risk.
means p(doom) ≲ 0.001
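For clarity, the conversion is just arithmetic on the figure quoted above:

$$0.1\% = \frac{0.1}{100} = 10^{-3}, \quad\text{so } p(\mathrm{doom}) \lesssim 10^{-3}.$$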
[Question] Why not solve alignment by making superintelligent humans?
I think once we know how to align a really powerful AI and we create it, we can use it to create good policies and systems that prevent other misaligned AIs from emerging and gaining more knowledge and intelligence than the aligned one.
I also want all kinds of people in this community. And I believe that no matter your intelligence you can have a good impact on the world, and most people can even have an EA job. For example, I feel like community building could be a place for people with less formal education to do valuable work, and even to solve this particular problem (making EA more accessible). I think that creating more of those jobs would make EA more popular, and that is the way to get the most people to do direct work and GPR, donate, go vegan, and vote well, while also making a lot of them happier by giving them purpose and a community they can be part of.
There are ways that could go badly though, like taking up too many resources or falling into the meta trap.
Cool! I just hope that in the future you crosspost all your posts here. I like the forum and don’t like having to check different external blogs.
Thank you for answering, it was helpful. So, by “other form of donations” I was referring to “one-time donations”. So both of your questions are about the same thing.
I understand that “earning to give” refers only to donations that come from people who give a percentage of their income every month. At least it sounds like donations from people who have pledged to give.
Either way, whether it’s actually included or not, Nuno says it’s irrelevant.
I’m new to the movement so I have a couple of questions. Is earning to give the only form of donation? Are there no big one-time donors? And is Open Philanthropy included in there? And GiveWell?
I created an account and I’m pretty sure I still can’t change or add anything.
Meat with that label comes from farms that have stricter regulations for mutilation (dehorning, castration, debeaking, tail docking) and better air quality. The animals on those farms have more space and barn enrichments (e.g. toys, animal brushes, hay) and fewer diseases. Suppose meat with this animal welfare label costs 50% more than meat without an animal welfare label, and animal suffering for meat with the animal welfare label is half the amount of animal suffering for meat without a label.
Only half the suffering? With that description I was imagining lives with a lot less suffering, maybe even worth living. I don’t know much about animal welfare, so where is that suffering coming from?
Thank you a lot! I wasn’t expecting a summary; I wrote that so you might take it as a consideration for future posts, so I guess I should have written it less simply.
I really liked the axis you presented and the comparison between a version of the community that is more cause-oriented vs. member-oriented.
The only caveat I have is that I don’t think we can define a neutral point between them that lets us classify communities as one type or the other.
Luckily, I think that is unnecessary, because even though the objective of EA is to have the best impact on the world and not the greatest number of members, I think we all agree the best decision is to have a good balance between cause-oriented and member-oriented. So the question we should ask is: should EA be MORE big tent, or weirder, or do we have a good balance right now?
And to achieve that balance we can be more big tent in some aspects, moments, and orgs, and weirder in others.
I think that by far the least intuitive thing about AI X-risk is why AIs would want to kill us instead of doing what they were “programmed” to do.
I would give that part of the argument more weight than the “intelligence is really powerful” part.
Why is it considered bad timing?
The challenge isn’t figuring out some complicated, nuanced utility function that “represents human values”; the challenge is getting AIs to do what it says on the tin—to reliably do whatever a human operator tells them to do.
Why do you think this? I infer from what I’ve seen written in other posts and comments that this is a common belief, but I can’t find the reasons why.
The fact that there are specific really difficult problems with aligning ML systems doesn’t mean that the original really difficult problem of finding and specifying the objectives we want for a superintelligence has been solved.
I hate it because it makes it seem like alignment is a technical problem that can be solved by a single team, and that, as you put it in your other post, we should just race and win against the bad guys.
I could try to envision what type of AI you are thinking of and how you would use it, but I would prefer that you tell me. So, what would you ask your aligned AGI to do, and how would it interpret that? And how are you so sure that most alignment researchers would ask it the same things as you?
Easily the best intro to AGI safety.
Thanks! I think I understood everything now, and in a really quick read.
I’m still learning basic things about AI alignment, but it seems to me that all AIs (and other technologies) already don’t give us exactly what we want, yet we don’t call that outer misalignment because they are not “agentic” (enough?). The thing is, I don’t know if there’s a crucial, ontological property that really makes something agentic; I think it could be just some type of complexity that we give a lot of value to.
ML systems are also inner misaligned in a way, because they can’t generalize to everything from examples, and we can see that when we don’t like the results they give us on a particular task. Maybe “misaligned” isn’t the word for these technologies, but really the important thing is that they don’t do what we want them to do. So the question about AI risk really is: are we going to build a superintelligent technology? Because that is the significant difference from previous technologies. If that’s the case, we are not going to be the ones influencing the future the most, building little by little what we actually want and stopping the use of technologies whenever they aren’t useful. We are going to be the ones turned off.
We shouldn’t limit ourselves to Twitter; which YouTube channels, Instagram accounts, and more should we follow to increase their reach and learn from them?
I don’t understand your logic at all. How is it contributing from your POV?
Personally I have trouble understanding this post. Could you write more simply?