Soaking screams food poisoning to me, especially with unclean water. Perhaps this is not a risk if done right, but this could be why it’s not done.
Definitely; for example, if people are bikeshedding (vigorously discussing something that doesn’t matter very much).
Another proposal: Visibility karma remains 1 to 1, and agreement karma acts as a weak multiplier when either positive or negative.
So:
A comment with [ +100 | 0 ] would have a weight of 100
A comment with [ +100 | 0 ] but with 50✅ and 50❌ would have a weight of 100 * log10(50 + 50) = 200
A comment with [ +100 | 100✅ ] would have a weight of, say, 100 * log10(100) = 200
A comment with [+0 | 1000✅ ] would have a weight of 0.
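A minimal sketch of this scheme in Python (the function name is hypothetical, and the handling of comments with between one and nine agreement votes, where the raw log10 multiplier drops below 1, is my assumption; only the worked examples above are pinned down):

```python
import math

def comment_weight(karma: float, agrees: int, disagrees: int) -> float:
    """Sketch of the proposed visibility weight.

    Visibility karma maps 1:1 to weight; agreement votes of either sign
    act as a weak multiplier, log10 of the *total* number of agreement
    votes (so 50 agrees + 50 disagrees count the same as 100 agrees).
    """
    total_votes = agrees + disagrees
    if total_votes == 0:
        return karma  # no agreement votes: weight is just karma
    # With fewer than 10 total votes the raw multiplier is below 1
    # (and log10(1) == 0); the proposal doesn't specify that case,
    # so this sketch just applies the multiplier as-is.
    return karma * math.log10(total_votes)

print(comment_weight(100, 0, 0))     # [ +100 | 0 ]          -> 100
print(comment_weight(100, 50, 50))   # 100 * log10(100)      -> 200.0
print(comment_weight(100, 100, 0))   # 100 * log10(100)      -> 200.0
print(comment_weight(0, 1000, 0))    # zero karma stays zero -> 0.0
```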
Could also give karma on that basis.
However, thinking about it, I think the result would be that people would start using the visibility vote to express opinion even more...
Would you gift your karma if that option were available?
This is good for calibrating what the votes mean across the responses
A little ambiguous between “disagree karma & upvote karma should have equal weight” and “karma should have equal weight between people”
I think because the sorting is solely on karma, the line is “Everything above this is worth considering” / “Everything below this is not important” as opposed to “Everything above this is worth doing”
It’s karma—which is kind of wrong here.
One situation I use strong votes for is whenever I do “upvote/disagree” or “downvote/agree”. I do this to offset others who tend not to split their votes.
I think some kind of “strong vote income”, perhaps just a daily limit as you say, would work.
People who read this far seem to have upvoted
I would have expected the opposite corner of the two-axis voting (because I think people don’t like the language).
Seems he has ended up giving more to the Democratic Party than EA, lol
There seem to be two different conceptual models for AI risk.
The first is a model like the one in his report “Existential risk from power-seeking AI”, in which he lays out a number of things which, if they happen, will cause AI takeover.
The second is a model (which stems from Yudkowsky & Bostrom and, more recently, Michael Cohen’s work https://www.lesswrong.com/posts/XtBJTFszs8oP3vXic/?commentId=yqm7fHaf2qmhCRiNA ) where we should expect takeover by malign AGI by default, unless certain things happen.
I personally think the second model is much more reasonable. Do you have any rebuttal?
Likewise, I have a post from January suggesting that crypto assets are over-represented in the EA funding portfolio.
Probably the number of people actually pushing the frontier of alignment is more like 30, and for capabilities maybe 3000. If the 270 remaining alignment people can influence those 3000 (a biiiig if), then the odds aren’t that bad.
Thinking about
Not sure what Rob is referring to, but there are a fair few examples of orgs’/people’s purposes slipping from alignment to capabilities, e.g. OpenAI.
I myself find it surprisingly difficult to focus on ideas that are robustly beneficial to alignment but not to capabilities.
(E.g. I have a bunch of interpretability ideas, but interpretability can only either have no impact on timelines or accelerate them.)
Do you know if any of the alignment orgs have some kind of alignment-research NDA, with a panel to allow alignment-only ideas to be made public while keeping the maybe-capabilities ideas private?
I think this post should probably be edited to put “focus on low-risk interventions first” in bold in the first sentence and right next to the pictures, because the most careless people (possibly like me...) are the ones who will look at those and not read the current caveats.
Just posting my reactions to reading this:
That’s really high?? Oh, this is not the Giving What We Can pledge 😅
At what stage of YC? I guess that will be answered later. EDIT:
Random, alphabetical, or date-ordered? Not that it will really matter, although I guess I would expect the earlier pledgers to be more altruistic (though maybe also more risk-taking).
Ohhh ok 😂😅 Yeah that is funny and sad.
😑
Agree
This is interesting, and is naturally raised by this post (v. interesting, by the way). It makes me wonder about their screening practices. I’m guessing a random person like me can’t sign up (do they check one’s net wealth somehow?), but perhaps that’s all? If any billionaire can sign up, then maybe it’s not really the Giving Pledge that one should criticize?