Clara Torres Latorre
I would like to see more low-quality / unserious content, mainly to lower the barrier to entry for newcomers and make the forum more welcoming.
Very unsure if this is actually a good idea.
I appreciate the irony and see the value in this, but I'm afraid that you're going to be downvoted into oblivion because of your last paragraph.
"At high levels of uncertainty, common sense produces better outcomes than explicit modelling"
Hey, can you include a link to the blog?
Fantastic post!
I'm trying to put myself in the shoes of someone who is new around here, and I would appreciate definitions or links for the acronyms (GHD, AIS) and for the meat-eater problem. There may be others as well; I haven't been thorough.
Could you please update the post? It would be even better, in my opinion.
I would be very surprised if [neuron count + nociceptive capacity as moral weight] were standard EA assumptions. I haven't seen this among the people I know or among the major funders, who seem more pluralistic to me.
My main critique of this post is that it makes several different claims, and it's not very clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.
Hey, kudos to you for writing a long-form post about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.
I think this post is mixing two different claims. Critiquing "minimize suffering as the only terminal value → extinction is optimal" makes sense.
But that doesn't automatically imply that particular suffering-reduction interventions (like shrimp stunning) are not worth it. You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.
Also, I appreciated the discussion of depth, but I have nothing to say about it here.
I would appreciate:
- Any negative utilitarian, or anyone knowledgeable about negative utilitarianism, commenting on why NU doesn't necessarily recommend extinction.
- The OP clarifying the post by making the claims more explicit.
I like your post, especially the vibe of it.
At the same time, I have a hard time understanding what "quit EA" even means:
- Stop saying you're EA? I guess that's fine.
- Stop trying to improve the world using reason and evidence? Very sad. If so, probably read this post 50 times; I hope it convinces you otherwise.
A 99% karma-weighted share of tagged posts being about AI seems wrong.
If you check the top 4 posts of all time, the 1st and 3rd are about FTX, the 2nd is about earning to give, and the 4th is about health, totalling > 2k karma.
You might want to check for bugs; a minimal version of the check I mean is sketched below.
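For clarity, this is the kind of sanity check I have in mind, as a minimal Python sketch; the posts, karma values, and tag names are all made up for illustration:

```python
# Hypothetical sanity check for a "karma-weighted share of posts about AI".
# All posts, karma values, and tags below are made up for illustration.
posts = [
    {"title": "FTX reflections", "karma": 900, "tags": ["community"]},
    {"title": "Earning to give, revisited", "karma": 600, "tags": ["careers"]},
    {"title": "Global health wins", "karma": 500, "tags": ["health"]},
    {"title": "AI timelines", "karma": 400, "tags": ["AI"]},
]

total_karma = sum(p["karma"] for p in posts)
ai_karma = sum(p["karma"] for p in posts if "AI" in p["tags"])

print(f"karma-weighted AI share: {ai_karma / total_karma:.1%}")
# -> 16.7%. With several high-karma non-AI posts in the data, a 99%
# share should be arithmetically impossible, hence my bug suspicion.
```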
I started, and then realised how complicated it is to choose a set of variables and weights to make sense of "how privileged am I" or "how lucky am I".
I have an MVP (but ran out of free LLM assistance), and right now the biggest downside is that when I include several variables, the results tend to land far from the top, and I don't know what to do about this.
For instance, let's say that for "healthcare access", having good public coverage puts you in the top 10% bracket (number made up). If you then pick the 95th percentile as the reference point for that bracket, any weighted average that includes it will fall short of the top by some fixed distance.
So a plain weighted average over the different questions is not good enough, I guess (see the sketch below).
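A minimal sketch of that ceiling effect; the variables, weights, and bracket reference points are all made up:

```python
# Made-up weights and bracket reference points, for illustration only.
weights = {"income": 0.5, "healthcare": 0.3, "education": 0.2}

# Percentile score assigned to each answer, e.g. "good public coverage"
# lands in the top-10% bracket and is scored at its 95th-percentile
# reference point.
percentiles = {"income": 0.98, "healthcare": 0.95, "education": 0.99}

score = sum(weights[k] * percentiles[k] for k in weights)
print(f"weighted average percentile: {score:.3f}")  # -> 0.973

# Even someone at the top of every single bracket can never score above
# the highest reference point, so the combined result is capped below
# 1.0 and everyone looks "far from the top".
```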
We can discuss and workshop it if you want.
I love the sentiment of the post, and tried it myself.
I think a prompt like this makes the answers less extreme than they actually are, because it produces a vibes-based answer instead of a model-based one. I would be surprised if you were not in the top 1% globally.
I would really enjoy something like this but more model-based, like the GWWC calculator. Does anyone know of something similar? Should I vibe-code it and then ask for feedback here? A sketch of the direction I mean follows below.
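To make "model-based" concrete, here is a minimal sketch of one possible approach, assuming (unrealistically) that the dimensions are independent; every number is made up:

```python
# One possible model-based combiner, sketched under a strong and
# probably false independence assumption. All inputs are made up.

# Fraction of the global population strictly above you, per dimension.
above = {"income": 0.02, "education": 0.01, "housing": 0.30}

# If the dimensions were independent, the fraction of people above you
# on *every* dimension would be the product of the per-dimension ones.
frac_above_on_all = 1.0
for fraction in above.values():
    frac_above_on_all *= fraction

print(f"fraction above you on all dimensions: {frac_above_on_all:.4%}")
# -> 0.0060%. Real dimensions are correlated, so this overstates how
# rare the combination is, but unlike a weighted average it is not
# capped away from the top.
```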
I tried this myself and I got "you're about 10-15% globally", which I think is a big underestimate.
For context, my PPP-adjusted income is in the top 2%, I have a PhD (top 1% globally? less?), and I live alone in an urban area.
Probing further, a big factor pushing the estimate down is that I rent the place I live in instead of owning it (don't get me started on that from a personal-finance perspective, but it shouldn't be that big of a gap, I guess?).
How can I cross out text in a comment?
I don't identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.
I agree with you that there's a problem with safetywashing, conflicts of interest, and bad epistemic practices in mainstream EA AI safety discourse.
My problem with this post is that the arguments are presented in a "wake up, I'm right and you are wrong" manner, directed at a group that includes people who have never thought about what you're talking about and people who already agree with you.
I also agree that the truth sometimes irritates, but that doesn't mean I should trust something more just because it irritates me.
I think there is a problem with the polls all showing the same title. (Edit: fixed.)
I feel lumped in with them because you use the second-person plural. It's not a glitch; it's a direct consequence of how you write.
What I'm saying is: maybe you're right about the pause agenda, I don't know.
But if you come to a group of people saying "you are just wrong", that is not engaging, and I end up feeling irritated instead of considering your case.
There are many different people in EA with different takes.
By claiming "you are just wrong" in the second-person plural, you are making it harder for people who are not in the "want to build AI" camp to engage with your object-level arguments.
Why don't you defend your point?
I imagine that people who are not already part of the AI safety memeplex could find your arguments convincing. Why not engage with them?
Btw I'm undecided on what the right marginal actions are wrt AI, and I'm trying to form my inside view.
I'm in academia, and my plan A is to pivot my research focus to something impactful.
Time will tell, though; I'm open to considering other options if they arise.
I'm curious about the distribution of time spent reading the forum, number of posts read, and such. Just nerding for the sake of nerding.
My answer is largely based on my view that short-timeline AI-risk people are more dominant in the discourse than the credence I give their views warrants; ymmv.