If you’re super focused on that issue, then it will definitely be better to spend your money on actual AI research, or on some kind of direct effort to push the government to consider the issue (if such an effort exists).
I am, and that’s what I’m wondering. The “definitely” isn’t so obvious to me. Another $20 million to MIRI vs. an increase in the probability of Yang’s presidency by, let’s say, 5%: I don’t think it’s clear-cut. (And I think MIRI is the best place to fund research.)
What about simply growing the EA movement? That clearly seems like a more efficient way to address x-risk, and something where funding could be used more readily.
That is plausible. But “definitely” definitely wouldn’t be called for when comparing Yang with Grow EA. How many EA people who could be sold on an AI PhD do you think could be recruited with $20 million?
I meant that it’s definitely more efficient to grow the EA movement than to grow Yang’s constituency. That’s how it seems to me, at least. It takes millions of people to nominate a candidate.
Well, there are >100 million people who have to join some constituency (i.e., pick a candidate), whereas potential EA recruits aren’t otherwise picking between a small set of ~~cults~~ philosophical movements. Also, AI-PhD-ready people are in much shorter supply than, e.g., Iowans, and they’d be giving up much, much more than someone just casting a vote for Andrew Yang.
There are numerous minor, subtle ways that EAs reduce AI risk. Small in comparison to a research career, but large in comparison to voting. (Voting can actually be one of them.)