Talk to me about cost-benefit analysis!
Charlie_Guthmann
The problem is that EA in its current form is a perverted version of the central idea/question. We are largely a utilitarian group, and it's deeply unfair imo that we are using the EA name. I do think EA should be democratic or quasi-democratic, but I think that requires taking a step back and re-envisioning. Anyway, here is my half-baked idea.
I think "EA" should be an umbrella term for a bunch of different suborgs. The requirements for your suborg to be a part of "EA" should be:
(1) You believe that evidence and logic need to be a part of making decisions
(2) You want to make things better/be altruistic.
(3) (optional) some commitment to not breaking laws/non violence.
A suborg needs to have 2 things:
(1) A moral framework
(2) A political framework
e.g. monarchical dog lovers, democratic-republic utilitarians, technocratic Christians, etc.
Now each suborg will be composed of members. Their onboarding process can be largely unique, but perhaps there should be some things all members must vow, like a commitment to evidence-based reasoning or open-mindedness or altruism or something. To stay a member of an EA suborg, you must contribute 1% of yourself every year. This can be some combination of 1% of your money, time, or effort (donating 0.5% of your income and volunteering 0.5% of your time would work).
Suborgs get to vote for the president every x years. The number of votes a suborg gets is the product of:
(1) time volunteered
(2) money donated by its members
(3) number of members
This ensures well-roundedness in votes, so you can't just pump votes with populism or a single billionaire (I'll get around to how we stop the Christians from getting all the votes). How the subgroup allocates its votes is of course up to its political system. Maybe one person chooses, maybe the members vote, maybe it is a smart contract.
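To make the product rule concrete, here is a toy sketch. The function name and all the example numbers are my own hypothetical choices, not anything specified above; the only thing taken from the proposal is that votes are the product of the three dimensions.

```python
# Hypothetical sketch of the vote-allocation rule: a suborg's votes
# are the product of time volunteered, money donated, and member count.
# All names and figures below are illustrative assumptions.

def suborg_votes(hours_volunteered: float, dollars_donated: float, members: int) -> float:
    """Votes = product of the three contribution dimensions."""
    return hours_volunteered * dollars_donated * members

# Neither pure populism nor a single billionaire dominates: a near-zero
# factor in any dimension drags the whole product down.
populist = suborg_votes(hours_volunteered=10_000, dollars_donated=1_000, members=5_000)
billionaire = suborg_votes(hours_volunteered=10, dollars_donated=1_000_000_000, members=1)
balanced = suborg_votes(hours_volunteered=5_000, dollars_donated=500_000, members=500)

assert balanced > populist and balanced > billionaire
```

The multiplicative form is what does the work here: a suborg maxing out one dimension while neglecting the others scores worse than a moderately well-rounded one.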
The president can appoint judges or accountants or whatever to some term; these people are in charge of auditing orgs' finances and claims of bad behavior. They also appoint the people who run EAG and perhaps also the heads of all the local groups.
Then there is a house of arts and a house of probabilistic facts.
The members of the house of arts are voted for by strict democracy; every member of the EA coalition gets one vote (or maybe artists get 3-5, idk). This is a 2-year residency in a house with like 10-20 people who make art about the movement.
The house of probabilistic facts is a group of technocratically appointed people (i.e. everyone gets 1 vote, college-educated get 2, PhDs 3, STEM 5, professors 10). This will be a group of 5-10; they will vote on important facts about the world by submitting their probabilities for the chance of said facts, and either the arithmetic or geometric mean of their guesses will go into a book called the fact book. They can call upon the house of justice to levy voting fines on groups whose facts are "sufficiently far" from their estimate.
i.e. if they average that there is a 1% chance of the Christian god being real, and Christians say, or are deemed by the house of justice to assign, a 100% chance of this, there will be some function that takes the absolute difference f(abs diff) and spits out a number in (0,1) that is multiplied by the group's upcoming vote share. This function should be relatively loose, such that moderate or even major disagreements result in almost no punishment. It should be tuned so that only blatant disregard of facts or belief in magic revokes voting power. The whole point of this is both to be on the record as a shared group about science, and also to be inclusive to religious and other groups if they want to be involved, while still acknowledging our priorities.
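The mechanics above can be sketched in a few lines. The pooling rule (arithmetic or geometric mean of the members' probabilities) is as stated; the particular penalty function, the slack, and the steepness constant are my own assumptions, since the proposal only says f(abs diff) should be loose for moderate disagreement and harsh for blatant disregard.

```python
import math
import statistics

# Hypothetical sketch of the "house of probabilistic facts" mechanics.
# Pooling follows the text (arithmetic or geometric mean); the penalty
# shape and its parameters are illustrative assumptions.

def fact_book_entry(member_probs: list, geometric: bool = False) -> float:
    """Pool the house members' probability estimates for one fact."""
    if geometric:
        return statistics.geometric_mean(member_probs)
    return statistics.mean(member_probs)

def vote_multiplier(abs_diff: float, slack: float = 0.5, steepness: float = 8.0) -> float:
    """Map |group probability - house estimate| to a multiplier in (0, 1].

    Differences below `slack` are free; only near-total disregard of
    the house estimate meaningfully erodes a group's vote share.
    """
    excess = max(0.0, abs_diff - slack)
    return math.exp(-steepness * excess ** 2)

# House pools to a ~1% chance; a group insisting on 100% loses most of
# its votes, while a group that says 40% is untouched.
house = fact_book_entry([0.005, 0.01, 0.02, 0.01, 0.005])
blatant = vote_multiplier(abs(1.00 - house))   # roughly 0.15
moderate = vote_multiplier(abs(0.40 - house))  # 1.0, within the slack
```

Tuning `slack` and `steepness` is exactly the "relatively loose" knob the post describes: widen the slack and only belief-in-magic-level divergence ever costs votes.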
I think the majority of people on this forum would join some form of utilitarian subgroup. Which sect of it exactly they would want to throw their weight behind (suffering-focused, average, etc.), which political forms people would want to throw their weight behind, and how much fracturing would occur are interesting open questions. But I think that in itself would make this whole ordeal much less of a clusterfuck to those trying to understand what EA exactly is.
Really like the idea. Also, I would say yes, you need to keep this to an extremely limited domain; otherwise I would assume the main crux will just be the LLM vs. human analysis of different cause areas' relative value. Agree with 2/3 though.
Seems like there are two different broad ideas at play here: how good the blog post is (holding the topic fixed), and how important the topic is. I suppose you can try to tackle both at once, but I feel like that might be biting off a lot at once?
A topic I personally like to think about, and for which I could gather 20 quite related posts, is the % chance of extinction risk and/or the relative value of s-risk vs. extinction risk reduction.
alien counterfactuals
Ok, if we are talking about that level of deepfake, I feel like that already exists and has even existed before AI tools. "Fake news" is not an AI-specific phenomenon, though AI makes it easier. Photoshop and captioning photos with lies do exist and go viral all the time. Boomers on Facebook are fed a diet of bullshit. It's possible OP doesn't use FB-type sites, and/or the sites have identified that they don't like BS, so that isn't what the algo feeds them.
Are you sure the technology is there yet? I don’t know if I have actually seen an extremely convincing deepfake.
Today was a pretty bad day for American democracy IMO. The guy below me got downvoted, and yea, his comment wasn't the greatest, but I directionally agree with him.
Pardons are out of control: Biden starts the day pardoning people he thinks might be caught in the political crossfire (Fauci, Milley, others) and more of his family members. Then Trump follows it up by pardoning close to all the Jan 6 defendants. The ship has sailed on whatever “constraints” pardons supposedly had, although you could argue Trump already made that true 4 years ago.
Ever More Executive Orders: Trump signed ~25 executive orders (and even more “executive actions”—don’t worry about the difference unless you like betting markets). This included withdrawing from WHO and ending birthright citizenship, though the latter is unlikely to stick since it’s probably unconstitutional. I haven’t had time to wade through all the EOs but like the pardons, this seems to be a cancerous growth of executive encroachment on the other branches with no clear end in sight.
Pre$idential $hitcoins: To be fair, the $TRUMP coin happened a few days ago, with $MELANIA following more recently. I’m not sure if people remember this, but it was a genuine scandal when Trump didn’t release his tax returns in 2016. Now, 8 years later, he is at best using his office to scam American citizens. Less charitably he has created a streamlined bribery pipeline. It was a blip in the news cycle.
Multiple reasons.
1. Your style of writing doesn't meet the standards of this forum. It's vague and memey. I'm inclined not to be that bothered, but it definitely is outside the expected decorum of the forum.
2. You aren't adding much to the conversation here; this is a pretty liberal forum, and we already know Trump acts like a buffoon and Elon is an anti-woke troll who has recently supported the AfD. (I generally think people here are too quick to downvote things that feel low-effort; it's probably directionally better if people spew a bit more garbage, if that would generate more discussion and make people second-guess commenting less. And I don't think it's so irrelevant that he did this, although there are lots of other, more concrete reasons to be concerned.)
3. People here are on net not super involved in politics and might feel offended that you imply it is important, since it hurts their ego. And since you write in a style that is not within the accepted standards, it's easy for them to express their discontent with your writing style, even if the impetus to downvote is partially that they think you are being reactionary and disagreeing with their priors.
But yea, I'd recommend writing something like ~"The Trump inauguration festivities have caused me to update towards thinking American politics is a more important cause area because x." It still probably won't be received particularly well here without being more quantitative/fleshing it out more, but you won't get 26 downvotes.
To rank interventions or causes as a whole (so not just making apples-to-apples comparisons of outputs), you need a moral framework. Unless you (1) believe there is an objectively correct moral framework and (2) trust that EA is good at both cost-benefit analysis and moral philosophy, I think you may be hoping for too much.
I'm pretty sure the personal benefits of getting the flu vaccine for a male in his 20s-30s are not much higher than the costs. Agree on the bike helmet thing though.
I don't know, unfortunately; I'm basically just going off trusting the leadership to be cost-effective, plus they are in a really good position to influence policy/executive orders.
I haven't quite finished donating because I'm waiting on a final input on whether RAND actually needs funding, but I expect my final donations to look more or less like the list below. I don't really believe in spreading this amount of funding on the object level, but it's more fun and allows me to tell my friends that I think these issues are all important. I gave about 10% of my income.
RAND, earmarked for emerging risks
Chicago Growth Project (YIMBY/good governance PAC)
ARI
Horizon
Rethink
ACE
GiveWell unrestricted
I probably won’t be donating again for at least a year or two because I left my trading job to start a startup—happy new year.
Yea, I have no idea if they actually need money, but if they still want to hire more people for the AI team, wouldn't it be better to give the money to RAND to hire those policymakers rather than, say, Americans for Responsible Innovation (which Open Phil currently recommends, but which is much less prestigious, and I'm not sure they are working side by side with legislators)? The fact that Open Phil gave grants but doesn't currently recommend them for individual donors makes me think you are right that they don't need money atm, but it would be nice to be sure.
Haven't seen anyone mention RAND as a possible best charity for AI stuff, and I guess I'd like to throw their hat in the ring, or at least invite people to tell me why I'm wrong. My core claims are approximately:
Influencing the US (federal) government is probably one of the most scalable cost-effective routes for AI safety.
Think tanks are one of the most cost-effective ways to influence the US government.
The prestige of the think tank matters for getting into the room/influencing change.
RAND is among the most prestigious think tanks doing AI safety work.
It’s also probably the most value-aligned, given Jason Matheny is in charge.
You can earmark donations to the catastrophic risks/emerging risks departments.
I’ll add I have no idea if they need/have asked for marginal funding.
The get-out-of-RSP-free card
If I'm reading this correctly, it seems quite vaguely written, so I expect them to pull this out literally whenever they want, but maybe I'm overly skeptical. Bush-invading-Iraq vibes.
do you feel confident about your moral philosophy?
Adding on that Whole Foods (https://www.wholefoodsmarket.com/quality-standards/statement-on-broiler-chicken-welfare) has made some commitments to switching breeds; we discussed this briefly at a Chicago EA meeting. I didn't get much info, but they said that going and protesting/spreading the word to Whole Foods managers to switch breeds showed some success.
No, I don't, but effective altruism should not be a small movement. I think about 1/3 of all people could get on board. Applied utilitarianism should be a small movement, and probably not democratic. I'll just write up a more coherent version of my vision and make a quick take or post, though. I would agree democracy is not great for a small movement, though I'm no expert.
It was such a token effort though. I'm literally giving that much away myself. How about every single person at an EA org steps down and we have an election for the new boards, or they can drop the EA name? (I'm only half joking.)
Yea, I think I can feel better about giving to Manifund, so that's a good shout. Functionally, giving money to them still feels like I'm contributing to the larger EA political oligopoly, though. I want to enrich a version of EA with real de jure democratic-republic institutions.
Left side is people/acts/vibes of altruism; right side is people/acts/vibes of science, an evidence-based mindset, and rationality; the middle is a combination of the two. Photo below is a loose style guide. Could come off as pretentious, but I think you can avoid that by just having all the people in the middle be historical examples and not from the current EA movement.