Aspiring EA-adjacent trying to push the singularity further away
Pato
Thank you for answering, it was helpful. By "other forms of donations" I was referring to one-time donations, so both of your questions are about the same thing.
I understand "earning to give" as referring only to donations that come from people who give a percentage of their income every month. At least it sounds like donations from people who have pledged to give.
Either way, whether it is actually included or not, Nuno says that it's irrelevant.
The Forward Party seems really cool. I wonder if it has a chance in the long term.
Good question! I think I understand where you are coming from, but I don't think we should do that. We are used to not really caring or thinking about future people, but I think there are other reasons behind it.
A very important one is that we don't see future people or their problems, so we don't sympathize with them like we do with everyone else. We have to make an effort to picture them and their troubles. As if we didn't have enough in the here and now!
Another one is the odds of those futures: any prediction is less likely to come true the further out in the future it is.
And lastly, we have to take into consideration how the influence of any action diminishes over time.
So it's not the value of a person that changes if they haven't been born yet, but the chances of helping them. And when we decide how to use our resources we should keep both things in mind, so we can calculate the "expected value" of every possible action and choose the one with the highest.
So why do many effective altruists want to focus on those causes if there is a discount caused by lower probabilities? Because they believe that there could be many, many more people in the future than have ever existed, so the value of helping them and saving them from existential risks is high enough to turn the result around.
Of course, it is really difficult to make good predictions and there is no consensus on how important longtermism is, but I think we should always take into account that most of the time our emotions and desires will favor short-term things and won't care about the issues they don't see.
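To make the expected-value reasoning above concrete, here is a minimal sketch in Python. The probabilities and values are made-up numbers purely for illustration, not real estimates of any cause area:

```python
# Minimal sketch of the expected-value comparison described above.
# All numbers are hypothetical, chosen only to illustrate the idea.

def expected_value(probability_of_success: float, value_if_success: float) -> float:
    """Expected value = chance the action helps * how much good it does if it helps."""
    return probability_of_success * value_if_success

# A near-term action: very likely to help, modest payoff.
near_term = expected_value(probability_of_success=0.9, value_if_success=100)

# A longtermist action: far less likely to matter, but with an enormous payoff
# if it does (many, many more future people).
long_term = expected_value(probability_of_success=0.001, value_if_success=1_000_000)

print(near_term)   # 90.0
print(long_term)   # 1000.0 -> the huge payoff can turn the result around
```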
I can't think of many examples where I agreed with a position but didn't want to see it, or wanted to see a position that I disagreed with. I think I've only experienced the latter case when I want to see discussions about the topic. In those cases I feel like you should balance the good and the bad when upvoting and choose between the 5 levels that the current system provides (if you count the strong votes and not voting). Also, if you believe that a topic you want to talk about (and believe that others do too) is going to be divisive, you can just write "Let's discuss X" and then reply to it with your opinion.
I read examples in the comments that I disagreed with, and I feel more comfortable arguing against them all in this comment:
Useful information for an argument that people disagree with: Then how is it useful?
Critical posts which you disagree with but appreciate and want other people to read: Then why do you appreciate them? It seems you like them in part but not fully; I would just not vote on them. And why do you want people to read them? It seems like a waste of time.
Voting on something for quality, novelty or appreciation: I believe the voting system is better as a system where you vote for what you want other people to read or what you enjoy seeing. And I think we should show appreciation for each other in other ways or places (like in the comments).
Unpopular opinions that people still found enlightening should get marginally more karma: That sounds like opinions that change the minds of some people but get little karma or even negative points. I don't see how the people who disagree with them would downvote them less than other opinions they disagree with. In other words, I don't know how exactly the "enlightenment" is supposed to be seen by the ones blind to it lol, or what "enlightening" would even mean.
And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with: How is that useful? I'm not that familiar with the rationalist community, so maybe this is obvious, or maybe I'm misunderstanding. Are you saying that you agree with some arguments (so you update your beliefs) but not all of them, and you don't change the conclusion? That would probably mean no vote at all from me, or depending on the specifics a weak upvote or downvote.
It prompts people to distinguish between liking and agreeing: Why would you like a contribution to a discussion when you don't agree with it?
There would be fewer comment sections that turn into ‘one side mass-downvotes the other side, the other side retaliates, etc.’: Why would there be a difference with this new axis?
Agree with:
Goodwill and trust are generated when people are upvoted in spite of having an unpopular view.
But I believe the downsides are worse. If you were to encourage people to upvote unpopular views, then those views could get even more points than the popular ones, no matter how "bad" they are. Also, there could be more bad arguments at the top than good ones. That sounds pretty confusing and annoying, honestly. I think better options are to reply to those comments and upvote good replies, and to not show scores below 0 nor hide those comments.
Also:
It sounds to me like voting in a two-vote system would mean voting on something, then thinking about whether I agree or disagree with the comment, then voting again (>95% of the time for basically the same thing: agree after like, disagree after dislike), and then either seeing the same number repeated or seeing a difference between the two and wondering what it means and whether it only exists because people are voting on just one axis.
Really bad for new people.
There are cases where there isn’t anything to agree or disagree with, like information and jokes.
I also want all kinds of people in this community. And I believe that no matter your intelligence you can have a good impact on the world, and most people could even have an EA job. For example, I feel like community building could be a place for people with less formal education to do valuable work, and even to solve this particular problem (making EA more accessible). I think that creating more of those jobs would make EA more popular, and that is the way to get the most people doing direct work, GPR, donating, going vegan and voting well, while also making a lot of them happier by giving them purpose and a community they can be part of.
There are ways that could be bad, though, like taking up too many resources or falling into the meta trap.
I don’t know if it’s something good or bad lol. I felt hypnotized by it.
I'm not sure what you mean by "it deserves criticism". I think Infinite Ethics is a serious subject, but one we should study during the Long Reflection.
Oh, wait, I thought Infinite Ethics included all moral math involving infinities, like Pascal's Mugging.
Honestly, I personally think we should focus on AI and community building; everything else seems almost irrelevant.
I think that once we know how to align a really powerful AI and we create it, we can use it to create good policies and systems that prevent other misaligned AIs from emerging and gaining more knowledge and intelligence than the aligned one.
I don’t understand your logic at all. How is it contributing from your POV?
We shouldn't limit ourselves to Twitter: which YouTube channels, Instagram accounts and more should we follow to increase their reach and learn from them?
[Question] Why not solve alignment by making superintelligent humans?
I created an account and I’m pretty sure I still can’t change or add anything.
I'm still learning basic things about AI Alignment, but it seems to me that all AIs (and other technologies) already don't give us exactly what we want, yet we don't call that outer misalignment because they are not "agentic" (enough?). The thing is, I don't know if there is a crucial? ontological? property that really makes something agentic; I think it could be just some type of complexity that we give a lot of value to.
And ML systems are also inner misaligned in a way, because they can't generalize to everything from examples, and we can see that when we don't like the results they give us for a particular task. Maybe "misaligned" isn't the word for these technologies, but the important thing is really that they don't do what we want them to do.
So the question about AI risk really is: are we going to build a superintelligent technology? Because that is the significant difference from previous technologies. If that's the case, we are not going to be the ones influencing the future the most, building little by little what we actually want and stopping the use of technologies whenever they aren't useful. We are going to be the ones turned off.
Wow, I didn't expect a response. I didn't know shortforms were that accessible; I thought I was just rambling on my profile. So I should clarify that when I say "what we actually want" I mean our actual terminal goals (if we have those).
So what I'm saying is that we are not training AIs or creating any other technology to pursue our terminal goals but to do other things (of course those are specific things, because the systems don't have high capabilities). But the moment we create something that can take over the world, all of a sudden the fact that we didn't create it to pursue our terminal goals becomes a problem.
I'm not trying to explain why present technologies have failures; I'm saying that misalignment is not something that appears with the creation of powerful AIs, but that that is the moment when it becomes a problem, and that's why you have to build them with a different mentality than any other technology.
I think EA has the resources to make the alignment problem go viral, or at least viral in STEM circles. Wouldn't that be good? I'm not asking whether it would be an effective way of doing good, just a way.
Because I'm surprised that not even AI doomers seem to be trying to reach the mainstream.
Interesting.
I'm not sure I understood the first part and what f(A,B) is. In the example you gave, B is only relevant with respect to how much it affects A ("damage the reputability of the AI risk ideas in the eye of anyone who hasn't yet seriously engaged with them and is deciding whether or not to"). So, in a way, you are still trying to maximize |A| (or probably a subset of it: people who can also make progress on it, |A'|). But with "among other things" I guess you could be thinking of ways in which B could oppose A, so maybe that's why you want to reduce it too. The thing is: I have trouble visualizing most of B opposing A, and what that subset (B') could even be able to do (as I said, outside of reducing |A|). I think that is my biggest argument: that B' is a really small subset of B, and I don't fear them.
Now, if your point is that to maximize |A| you have to keep B in mind, and so it would be better for the alignment problem to have more 'legitimacy' before making it viral, then you are right. So, is there progress on that? Is the community-building plan, then, to convert authorities in the field to A before reaching the mainstream?
Also, are people who try to disprove the alignment problem in B? If that's the case, I don't know if our objective should be to maximize |A'|. I'm not sure we can reach a superintelligence with AI, so I don't know if it wouldn't be better to think about maximizing the number of people trying to solve OR dissolve the alignment problem. If we consider that most people probably wouldn't feel strongly about one side or the other (debatable), then I don't think bringing the discussion more into the mainstream is that big of a deal. If AI risk arguments include the idea that, no matter how uncertain researchers are about the problem, given what's at stake we should lower the chances, then I see B and B' as even smaller. But maybe I'm too much of an optimist / marketplacer / memer.
Lastly, the maximum size of A is smaller the shorter the timelines are. Are people with short timelines the ones trying to reach the most people in the short term?
Oh, I didn't know the field was so against AI x-risks. When I saw this https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/ , a 5-10% chance of x-risk seemed enough to take it seriously. Is that survey not representative? Or is there a gap between people recognizing the risks and giving them legitimacy?
When people want an apology, they expect you to say that you're sorry and you were wrong. But I have also read, in response to every apology ever written or said in the history of the internet, that the wrongdoers in question don't actually take responsibility for their actions, and I never understood what people meant by that. Do they expect the person to punish themselves? To say "I take responsibility for my actions"? To not explain the reasoning behind their actions? I honestly don't know.
I'm new to the movement so I have a couple of questions. Is earning to give the only form of donations? Are there no one-time big donors? And is Open Philanthropy included there? And GiveWell?