Aspiring EA-adjacent trying to push the singularity further away
Pato
Yeah, you are right. I guess what I was trying to say is that I haven’t heard of projects that try to do it from a “hardware” standpoint, considering the limitations the human brain has compared to scalable computers and AIs.
I didn’t know that Bostrom talked about other paths to superintelligence in Superintelligence; I need to read it ASAP.
This doesn’t make much sense to me; I’m not aware of relevant work or reasons to believe this is promising.
Yeah, you are probably right. I guess what I was trying to say is that the thing that pops into my mind when I think about possible paths to making us superintelligent is a hybrid between BCIs and brain emulation.
And I was imagining that maybe neuron emulations might not be that difficult, or that signals from AI “neurons” (something similar to present-day NNs) could be enough to be recognized as neurons by the brain.
Genetic enhancement could be really useful, but I feel we are talking about different levels of “superintelligence”. One level can help you do research; the other takes over the world (aligned or not). People should try this type of intelligence augmentation, but the level of intelligence that takes over the world is probably quite far away in IQ points, and I would guess impossible to reach with just human brains.
[Question] Why not solve alignment by making superintelligent humans?
I understand that, and I guess 3 and even 2 might not be that effective, but it seems weird to me that there isn’t an org making good YouTube videos that EAs could share and put on their profiles or something, with a message inside them encouraging viewers to share them further to create a snowball effect.
Yeah, but they’re not the EA channel or the EA podcast in the same way that the EA Forum exists. And in the case of the YouTube channels, they don’t have introductions to EA, its causes, and how to contribute to them.
Why is there no good EA YouTube channel, with short, scripted videos in the style of Crash Course or Kurzgesagt, offering shareable introductions to EA in general and to each of its causes inside longer playlists about them?
There also isn’t a podcast or any social media accounts that seem to be trying to get big and reach people with EA. Why is that the case?
I’ve seen ads for 80,000 Hours and GiveWell, but why aren’t there any for the EA movement as a whole?
Sorry for packing multiple questions into one, but I feel the answers may be related. I separated them in case you want to answer them individually. Feel free to answer only one or two.
We shouldn’t limit ourselves to Twitter, but which YouTube channels, Instagram accounts, and other platforms should we follow to increase their reach and learn from them?
I don’t understand your logic at all. How is it contributing from your POV?
I think that once we know how to align a really powerful AI and we create it, we can use it to create good policies and systems that prevent other, misaligned AIs from emerging and gaining more knowledge and intelligence than the aligned one.
I also disagree with all the answers that I’ve read, but I have thought of one that works for me and hope will work for you too: I think the probability of achieving infinite value in this universe, for me and/or for future people, by making progress in this civilization while ignoring religions, is higher than the probability of achieving it by following them plus the probability of any other complex imaginary world after death.
Now, this doesn’t seem to work for Pascal’s Mugging, so I hope to find another answer to that, or at least not to end up broke because of it lol.
Oh, wait, I thought Infinite Ethics included all moral math involving infinities, like Pascal’s Mugging.
Honestly, I personally think we should focus on AI and community building; everything else seems almost irrelevant.
I’m not sure what you mean by “it deserves criticism”. I think Infinite Ethics is a serious subject, but one we should study during the Long Reflection.
I don’t know if it’s something good or bad lol. I felt hypnotized by it.
I also want all kinds of people in this community. And I believe that no matter your intelligence, you can have a good impact on the world, and most people can even have an EA job. For example, I feel like community building could be a place for people without much formal education to do valuable work, and even to solve this particular problem (making EA more accessible). I think that creating more of those jobs would make EA more popular, and that it is the way to get the most people doing direct work, GPR, donating, going vegan, and voting well, while also making a lot of them happier by giving them purpose and a community they can be part of.
There are ways that could turn out badly though, like taking up too many resources or falling into the meta trap.
I can’t think of many examples where I agreed with a position but didn’t want to see it, or wanted to see a position that I disagreed with. I think I’ve only experienced the latter when I want to see discussions about the topic. In those cases I feel like you should weigh the good against the bad when upvoting and choose between the five levels the current system provides (if you count the strong votes and not voting). Also, if you believe that a topic you want to talk about (and believe that others do too) is going to be divisive, you can just write “Let’s discuss X” and then reply to it with your opinion.
I read examples in the comments that I disagreed with, and I feel more comfortable arguing against them all in this comment:
Useful information for an argument that people disagree with: then how is it useful?
Critical posts which you disagree with but appreciate and want other people to read: then why do you appreciate them? It seems you like them in part but not fully; I would just not vote on them. And why do you want people to read them? That seems like a waste of time.
Voting on something for quality, novelty, or appreciation: I believe the voting system works better as a system where you vote for what you want other people to read or what you enjoy seeing. And I think we should show appreciation for each other in other ways or places (like in the comments).
Unpopular opinions that people still found enlightening should get marginally more karma: that sounds like opinions that change the minds of some people but get little karma or even negative points. I don’t know why the people who disagree with such an opinion would downvote it less than other opinions they disagree with. In other words, I don’t know how exactly the “enlightenment” is supposed to be seen by the ones blind to it lol, or what “enlightening” would even mean.
And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with: how is that useful? I’m not that familiar with the rationalist community, so maybe this is obvious, or maybe I’m misunderstanding. Are you saying that you agree with some arguments (so you update your beliefs) but not all of them, and you don’t change the conclusion? For me that would probably mean no vote at all, or a weak upvote or downvote depending on the specifics.
It prompts people to distinguish between liking and agreeing: why would you like a contribution to a discussion when you don’t agree with it?
There would be fewer comment sections that turn into “one side mass-downvotes the other side, the other side retaliates, etc.”: why would this new axis make a difference there?
Agree with:
Goodwill and trust are generated when people are upvoted in spite of having an unpopular view.
But I believe the downsides are worse. If you were to encourage people to upvote unpopular views, they could end up with even more points than the popular views, no matter how “bad” they are. Also, there could be more bad arguments at the top than good ones. That sounds pretty confusing and annoying, honestly. I think better options are to reply to those comments and upvote good replies, and to not show scores below 0 or hide those comments.
Also:
It sounds to me like voting in a two-vote system would mean voting on something, then thinking about whether I agree or disagree with the comment, then voting again for essentially the same thing >95% of the time (agree after like, disagree after dislike), and then either seeing the same number repeated, or seeing a difference between the two and wondering what it means and whether it only exists because people are voting on just one axis.
Really bad for new people.
There are cases where there isn’t anything to agree or disagree with, like information and jokes.
Good question! I think I understand where you are coming from, but I don’t think we should do that. We are used to not really caring or thinking about future people, but I think there are other reasons behind that.
A very important one is that we don’t see future people or their problems, so we don’t sympathize with them like we do with everyone else. We have to make an effort to picture them and their troubles. As if we didn’t have enough of those in the here and now!
Another one is the odds of those futures: any prediction is less likely to come true the further into the future it is.
And lastly, we have to take into consideration how the influence of any action diminishes over time.
So it’s not the value of a person that changes if they haven’t been born yet, but the chances of helping them. And when we decide how to use our resources, we should keep both things in mind so we can calculate the “expected value” of every possible action and choose the one with the highest.
So why do many effective altruists want to focus on those causes if there is a discount caused by the lower probabilities? Because they believe that there could be many, many more people in the future than have ever existed, so the value of helping them and saving them from existential risks is high enough to turn the results around.
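To make that arithmetic concrete, here is a minimal sketch in Python of the expected-value comparison I have in mind. The numbers are placeholders I made up purely for illustration, not anyone’s real estimates:

```python
# Toy expected-value comparison with made-up numbers (illustrative only):
# expected value = probability the action helps * number of people helped,
# so an enormous future population can outweigh a much lower probability.

def expected_value(p_helps: float, people_helped: float) -> float:
    """Expected number of people helped by taking the action."""
    return p_helps * people_helped

# Hypothetical near-term intervention: high chance of helping a modest number.
near_term = expected_value(p_helps=0.9, people_helped=1_000)

# Hypothetical longtermist intervention: tiny chance of helping a huge
# number of future people (the "many, many more people" point above).
long_term = expected_value(p_helps=1e-6, people_helped=1e12)

print(f"near-term EV: {near_term:,.0f}")  # 900
print(f"long-term EV: {long_term:,.0f}")  # 1,000,000
```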
Of course, it is really difficult to make good predictions and there is no consensus on how important longtermism is, but I think we should always take into account that most of the time our emotions and desires will favor short-term things and won’t care about the issues we don’t see.
The Forward Party seems really cool. I wonder if it has a chance in the long term.
Thank you for answering, it was helpful. By “other forms of donations” I was referring to one-time donations. So both of your questions are about the same thing.
I understood “earning to give” to refer only to donations that come from people who give a percentage of their income every month. At least it sounds like donations from people who have pledged to give.
Either way, whether it is actually included or not, Nuno says that it’s irrelevant.
Maybe that doesn’t sound promising, but without much knowledge of AI alignment, outer alignment already sounds to me like aligning human neural networks with an optimizer. And then, for inner alignment, you have to align the optimizer with an artificial neural network. Aligning one type of NN with another sounds simpler to me.
But maybe it is wrong to think about the problem like that and the actual problem is easier.