University student in Budapest, Hungary. I'm currently studying philosophy; I understand pure math and know a lot of computer science. Wannabe AI alignment researcher.
Coafos
Teach a man to fish, they’ll still starve in the jungle.
Note: I tried this on mobile, and it doesn't work everywhere. I tapped on post karma and on question-answer karma, but it did not show the total vote count.
(On my laptop it works.)
Cap the number of strong votes per week.
Strong votes with large weights have their uses. But those situations are uncommon, so instead of weakening strong votes, make them rarer.
The guideline says to use them only in exceptional cases, but there is no mechanism enforcing that: socially, strong votes are anonymous and look like standard votes; technically, any number of them can be used. Unlimited, they can make a comment section appear very one-sided; with enforced rarity, a few ideas can still be lifted or hidden while the rest of the section stays more diverse.
I do not think this is a problem now, because the current power users are responsible. But that is our good fortune, not a guarantee, and it could change in the future. Incidentally, a cap would also set a bar for what counts as exceptional: “this comment is in my top X this week.”
If there are no restraints, then note: this opens up an influence market, which could lead to plutocracy.
I, as an individual, agree with the statement. No one is infallible, and every organization has problems, smaller or bigger.
On the other hand, idols and community leaders provide an easy focal point for the concentration of force. A few big coalitions have a larger impact than many scattered small groups, and if someone wants to organize a campaign, a few leaders can reach a decision much faster than a large group of individuals can.
If no one agrees on what to do, then the movement of the Movement will grind to a halt. That's why there is value in keeping EA high-trust and in accepting, to some degree, the word of the few at the top.
But given the number of scandals in the last few months, maybe they overshot this high-trust thing a little bit; IMHO, a bit more transparency would be nice.
This is my favourite drama. In my interpretation it's more about AI risk (the last idea we need, the invention of all inventions), but Dürrenmatt was limited by the technology of his age. I mean, if you think Solomon is the AI character, then the end of the play is about Solomon escaping the “box” while trapping their creators inside.
I like conspiracy theories, but an economic one is more probable than a political, governmental affair. I think Coinbase and Binance may have done something in the no-law-only-code world of crypto, but the target was FTX, a very visible competitor; the funds for EA activities were just collateral damage. To mitigate risks like that, EA-aligned organizations should not rely on a single source of funding.
I agree. The motto is “doing good better”, not “doing good the best”.
I'm making a big assumption: that the utility gains are multiplied together. There is some basis for this; for example, if there are several independent sources of fatality, the chance of surviving all of them is the product of the survival chances for each source.
If you want to maximise the result of the multiplication, take the logarithm, and the product turns into a sum. In that formulation you can see that it's not the absolute change that matters, but the relative one. I wanted to show an example of this, like a risky vs. a safe bet over 1 vs. 50 years, but I got stuck and realized I don't really understand it, so I retract. Thanks for the question, though.
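To make the product-to-sum step concrete (a minimal sketch, assuming independent survival chances $p_1, \dots, p_n$): since $\log$ is strictly increasing,

$$\arg\max \prod_{i=1}^{n} p_i \;=\; \arg\max \sum_{i=1}^{n} \log p_i,$$

and a relative change $p_i \mapsto (1+\varepsilon)\,p_i$ shifts the sum by $\log(1+\varepsilon) \approx \varepsilon$ no matter how large or small $p_i$ was. That is the sense in which relative, rather than absolute, changes are what matter.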
Could you describe in other words what you mean by “friend group”?
A group formed around hiking, tabletop games, or some fanfic may not solve AI (ok, the fanfic part might), but friends with a common interest in ships and trains probably have an above-average shot at solving global logistics problems.
While I think this post touches on some very important points (EA, as a movement, should be more conscious about its culture), the proposed solution would, in my opinion, be terrible.
Splitting up EA would mean losing common ground. Currently, resource allocation across different goals can be made under the “doing good better” principles, whatever that means. Without that, the causes would compete with each other for talent, donors, and so on; networks would fragment and efficiency would decrease.
However, EA-identifying people should think more clearly about what these common principles are, and should be more intentional about creating the culture, to avoid some of the problems described in this post.
Your first posts will be cringe. It’s fine.
Probability-theoretic “better” is intransitive. See non-transitive dice.
Imagine your life is a die, and you have three options:
4 4 4 4 4 1
You live a mostly peaceful life, but there is a small chance of doom.
5 5 5 2 2 2
You go on a big adventure: either a treasure or a disappointment.
6 3 3 3 3 3
You put all your cards on a lottery for an epic win, but if you fail, you will carry that with you.
If we compare them: peace < adventure < lottery < peace, so I would deny transitivity.
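Here's a quick Python sketch (the option names are mine) to verify the cycle; each comparison computes the probability that one die rolls strictly higher than the other:

```python
from itertools import product

# Face values for the three lives (a set of non-transitive dice).
dice = {
    "peace":     [4, 4, 4, 4, 4, 1],
    "adventure": [5, 5, 5, 2, 2, 2],
    "lottery":   [6, 3, 3, 3, 3, 3],
}

def win_probability(a: str, b: str) -> float:
    """Probability that die a rolls strictly higher than die b."""
    wins = sum(x > y for x, y in product(dice[a], dice[b]))
    return wins / 36  # 6 faces x 6 faces = 36 equally likely pairs

for a, b in [("adventure", "peace"), ("lottery", "adventure"), ("peace", "lottery")]:
    print(f"P({a} beats {b}) = {win_probability(a, b):.3f}")
```

Each line prints a probability above 1/2 (7/12, 7/12, and 25/36), so every option is beaten by some other option, and no transitive ranking exists.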
You say the first throw has an expected value of 693.5 QALY (= 700·215/216 − 700·1/216), but that is not precise. The first throw has an expected value of 693.5 QALY only if your policy is to stop after the first throw.
If you continue, the QALY gained from these new people might decrease, because there is a greater chance that these 10 new people will disappear in the future, which decreases the value of creating them.
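To illustrate, here is a minimal Python sketch under a hypothetical reading of the setup that reproduces the 693.5 figure: each throw adds a 700-QALY batch of people with probability 215/216, and the first doomed throw wipes out every batch created so far plus 700 QALY of baseline value.

```python
from fractions import Fraction

P = Fraction(215, 216)  # chance that a single throw avoids doom
V = 700                 # QALY per batch (assumption, taken from the 693.5 figure)

def expected_qaly(n: int) -> Fraction:
    """Expected total QALY for a policy of exactly n throws, assuming the
    first doomed throw destroys all batches created so far plus the
    700-QALY baseline (a hypothetical model, not the original post's)."""
    # The batch created on throw k survives only if throws k..n all avoid doom.
    batches = sum(V * P ** (n - k + 1) for k in range(1, n + 1))
    # The baseline is lost as soon as any of the n throws comes up doom.
    baseline_loss = V * (1 - P ** n)
    return batches - baseline_loss

for n in (1, 2, 10, 100):
    ev = expected_qaly(n)
    print(f"n={n}: EV={float(ev):.1f}, per throw={float(ev) / n:.1f}")
```

For n = 1 this gives the 693.5 QALY above, and the per-throw value falls as n grows, because earlier batches are discounted by the chance of later doom.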
That could be a plus. If you're running a local group and lend out books at a public event (like tabling), this will incentivise the takers to attend the next local EA event too, where they can bring the books back.