Thanks, done.
The OpenAI Foundation has announced its first round of grants
Yes, I find it bizarre that many people seem to be simultaneously opposed to the War on Drugs and in favor of extending it to a new drug.
> A final way the tobacco industry is dodging the rules is with e-cigarettes, or “vapes.” Using marketing that illegally targets children, they peddle vapes as a “safer” way to smoke. This tactic has proved alarmingly successful.
SMA condemn the tobacco companies for claiming that vapes are safer, but don’t discuss whether this key claim is actually true. Yet as far as I can see it clearly is true. There is debate about exactly how much safer they are—e.g. how convincing we should find the NHS claim that vapes are 95% safer—but I haven’t seen any credible argument that vapes aren’t safer at all. It’s not ‘dodging’ safety rules to release a considerably safer product.
Further, I think vapes are also pretty good evidence against SMA’s defense of paternalism. If smoking cigarettes wasn’t really a choice, why has the availability of vapes and pouches been associated with a decline in cigarettes? The most natural explanation here is that previously people chose to smoke cigarettes, and then a superior product came along, so people started choosing that instead.
If you think it’s merely ‘arguable’ that OpenAI has had a significant negative impact through acceleration, then I think you are significantly more positive than the median EA.
Thanks for sharing! TruthSocial having more positive engagement is interesting.
I agree this post is within scope, but that is because it is about AI and policy. It’s not because StopAI has any nontrivial EA support.
> A StopAI organizer has posted here before, and received a mixed reaction from the community.
The post got −29 karma, which makes it an extreme outlier in how negatively it was received. Unless by “mixed” you mean ‘not literally everyone disliked it’, I think by any reasonable account the post received a decidedly negative response. ‘A guy who wrote a downvoted post has a co-organizer who did something bad’ is not enough to make something notable—if our standards were that low, almost anything would be within scope.
> To give one example of practical relevance, the post immediately above this one (on my current feed) considers financially supporting StopAI, although it expresses concerns about their tactics.
I think this is an unfair summary, making the post sound significantly more positive to StopAI than it is. Michael considered donating, having decided not to in the past, and then decided to continue not donating, as he had “become more confident in [his] skepticism”.
> so voter preferences cannot be opposite of what is best for human welfare, by definition.
This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare. And we know it doesn’t—hence the polling data.
This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals—neither of which groups can vote.
> the page directly addresses that question quite incisively, citing the bayesian regret figures.
No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
> bayesian regret figures by princeton math phd warren smith show that approval voting roughly doubles the human welfare impact of democracy.
Their result is that, in their model, outcomes more closely match voter preferences. But my example is one where voter preferences are opposite to what many EAs think is best for human welfare.
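For context on the method: Bayesian regret is typically estimated by Monte Carlo simulation, drawing random voter utilities, electing a winner under each rule, and measuring the social-utility shortfall versus the best candidate. Here is a minimal sketch in the spirit of Smith’s setup (my own illustration, not his code, with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_regret(rule, n_voters=99, n_cands=5, trials=10_000):
    """Monte Carlo Bayesian regret: expected social-utility shortfall of the
    elected candidate relative to the utilitarian-optimal candidate."""
    total = 0.0
    for _ in range(trials):
        u = rng.normal(size=(n_voters, n_cands))  # voter-by-candidate utilities
        social = u.sum(axis=0)                    # total utility of each candidate
        if rule == "plurality":
            votes = np.bincount(u.argmax(axis=1), minlength=n_cands)
        elif rule == "approval":
            # each voter approves every candidate above their own mean utility
            votes = (u > u.mean(axis=1, keepdims=True)).sum(axis=0)
        else:
            raise ValueError(rule)
        total += social.max() - social[votes.argmax()]
    return total / trials

print(bayesian_regret("plurality"), bayesian_regret("approval"))
```

Note that regret here is measured against the simulated voters’ own utilities, which is exactly the limitation at issue: if voter preferences diverge from overall welfare, as with foreign aid, lower Bayesian regret does not imply better welfare outcomes.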
> doing some ballpark math to see how many lives that would save:
> Suppose the USA, by adopting range voting and thus making better decisions, lowers the risk of a 2-billion population crash in 50 years, by 5%. I consider this a conservative estimate.
These numbers just seem totally made up. Why should we believe that approval voting has anything like such a large impact?
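To spell out the scale of the claim (reading the quoted ‘5%’ as a five-percentage-point absolute reduction in the probability of a 2-billion-death event, which is my assumption about the intended meaning):

$$0.05 \times 2{,}000{,}000{,}000 = 100{,}000{,}000 \text{ expected lives saved}$$

An intervention worth 100 million expected lives would dwarf nearly everything else EA funds, which is precisely why the 5% figure needs more support than an assertion that it is conservative.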
I think it’s better to try to keep the upvote/downvote and agree/disagree axes more distinct—to express normative vs positive evaluation. So in this case I would think that downvoting would be the natural way to express disapproval.
I don’t point this out very often, so it seems pretty plausible that you might always appreciate it; accordingly, I will agree-vote your comment!
> Fixing The Bottleneck Behind Every EA Cause: Our Broken Voting System
I don’t think you have really provided much evidence to think that voting systems are the bottleneck behind every EA cause, which is an extremely strong claim. In some cases, I suspect the opposite is true. For example, I think most EAs were supportive of the UK’s 0.7% GDP foreign aid commitment that the Conservative government made. But this was never popular with voters, and probably lasted as long as it did only because the government was not maximally accountable to the desires of voters. The “full and honest preferences” of the electorate would almost certainly have been to cut aid significantly.
Thanks for the update Aaron!
One of the oddities of the EA forum is people voting disagree on posts like this… I can understand downvoting if you thought it was a bad change (though this seems a bit mean-spirited to me), but it seems hard to imagine thinking that the core point of the post is false!
As a first approximation, the answer to all “why aren’t people talking about X” questions is the same: because there are a lot of potential topics to discuss, people are busy, and no-one, including you, has written a post about it yet. If you want to discuss X, it is generally better to just write a post about X, rather than a meta-post about why no-one is discussing X.
[Also EAs have discussed this a bunch! Just not on the forum.]
> This demographic [white, male, and tech-focused] has historically been disconnected from social impact
This seems extremely false to me. What evidence do you have for this being true?
I’m not aware of anyone working on this presently. There is a lot of previous discussion of similar ideas under ‘Certificates of Impact’.
> Effective altruism has become a think tank of endless debate—people arguing about which cause deserves the top spot while the world keeps burning.
This seems quite false to me? My impression is that most people are busy working on their specific cause areas. Relatively little time is spent arguing for one major cause area over another. (This post, of course, fits into that category).
> We need to align on one clear, shared goal—something tangible that unites the major cause areas and shows what coordinated altruism can actually do. Otherwise, this movement will slip into obscurity.
This also seems false to me. EA has not had one single object-level objective for the past 15 years, and that does not seem to have caused obscurity slippage thus far, so it’s not clear why we should expect this to change.
I feel like you switch back and forth a bit here between causal and evidential:
> failure to end factory farming is evidence that future steering efforts will go badly
vs
> failure to end factory farming will cause future steering efforts to go badly
You’re right, I had forgotten that retail customer deposits count as stable funding under the liquidity regulations.
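For anyone else checking this: under the Basel III Net Stable Funding Ratio, retail deposits receive high Available Stable Funding (ASF) weights, so a retail-funded bank scores much better than a wholesale-funded one. A minimal sketch follows; the factor values are my recollection of the standard and should be treated as assumptions:

```python
# Illustrative ASF factors under the Basel III NSFR (values from memory of
# the standard; treat the exact percentages as assumptions).
ASF_FACTORS = {
    "regulatory_capital": 1.00,
    "stable_retail_deposits": 0.95,        # insured, in transactional accounts
    "less_stable_retail_deposits": 0.90,
    "short_term_wholesale_funding": 0.00,  # <6-month funding from financials
}

def available_stable_funding(balances: dict) -> float:
    """Weight each funding source by its ASF factor and sum."""
    return sum(ASF_FACTORS[source] * amount for source, amount in balances.items())

# A bank funded mostly by retail deposits retains most of its funding as
# "stable" for NSFR purposes, hence the concession above.
retail_funded = {
    "regulatory_capital": 10,
    "stable_retail_deposits": 80,
    "short_term_wholesale_funding": 10,
}
print(available_stable_funding(retail_funded))  # 86.0
```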
Thanks, very strange. I definitely selected linkpost; the GUI appears to have forgotten this, possibly when I tried to edit the thumbnail. Fixed.