Data scientist working on AI governance at MIRI, previously forecasting at Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Robi Rahman🔸
US tax rules for donations changing next year
Can I take you up on the offer to do a video call and see if we can install it on Chrome OS? Will DM you
In the same way that two human superpowers can’t simply make a contract to guarantee world peace, two AI powers could not do so either.
That’s not true. AI can see (and share) its own code.
Just want to note that I think this comment has basically been vindicated in the three years since FTX.
I love this idea, and I think you’re on to something with
We don’t notice how much of EA’s “independent thinking” comes from people who can afford to do it.
(but I disagree-voted because I don’t think “EA should” do this; I doubt it’s cost-effective)
I got to the terminal but wasn’t able to access the download, and I gave up at that step because I assumed it would only install the app within the Linux development environment rather than for the rest of Chrome OS. I’ll try again, and email you if I can’t get it working.
Is it possible to use it on Chrome OS somehow? The download page auto-detects Chrome OS as Linux, but I don’t think it will work if I use the Linux installer. I’m pretty sure it could be installed as a browser add-on, but then I’m not sure whether it would work while you’re using other programs.
This isn’t deontology; it’s lexical-threshold negative utilitarianism.
https://reducing-suffering.org/three-types-of-negative-utilitarianism/
For me, it was a moderate update against “bycatch” amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)
Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.
Strongly agree with this post. I think my session at EAG Boston 2024 (audience forecasting, which was fairly group-brainstormy) was suboptimal for exactly the reasons you mentioned.
I think most of us should take direct-work jobs, and the earning-to-give (E2G) crowd should pursue high-EV careers (to the extent that they’re personally sustainable), even if they’re risky.
No, that wouldn’t prove moral realism at all. That would merely show that you and a bunch of aliens happen to have the same opinions.
Morality is Objective
There’s no evidence for this, and the burden of proof is on people who think it’s true. I’ve never even heard a coherent argument for this proposition that doesn’t assume a god exists.
This doesn’t answer the question for people who live in high-income countries and don’t feel envy. Should they abstain? Should they answer based on whether, if they were less advantaged, they would envy someone in their current position?
If you’re someone with an impressive background, you can answer this by asking yourself whether you feel you would be valued even without that background. Using myself as an example, I...
- went to a not-so-well-known public college
- worked an unimpressive job
- started participating in EA
- quit the unimpressive job and studied at a fancy university
- worked at high-status ingroup organizations
- posted on the forum and got upvotes
Was I warmly accepted into EA back when my resume was much weaker than it is now? Do I think I would have gotten the same upvotes if I had posted anonymously? Yes and yes. So on the question of whether I’m valued within EA regardless of my background, I voted agree.
EA Forum posts have been pretty effective in changing community direction in the past, so the downside risk seems low
But giving more voting power to people with lots of karma entrenches the influence of those who already rank highly under the community’s current direction, so it would be an obstacle to changing that direction through forum posts.
If you think it’s important for forum posts to be able to change community direction, you should be against vote power scaling with karma.
Vote power should scale with karma
@Ben Kuhn has a great presentation on this topic. Relatedly, nonprofits have worse names: see org name bingo
Hey! You might be interested in applying to the CTO opening at my org:
https://careers.epoch.ai/en/postings/f5f583f5-3b93-4de2-bf59-c471a6869a81
I’m not a lawyer but this sounds… questionably legal.