A similar position to David’s might be that bioethics institutions are bad for the world, while being agnostic about academia. I don’t know much about academic bioethicists and you might be right that their papers as a whole aren’t bad for the world. But bioethics think tanks and NGOs seem terrible to me: for example, here’s a recent report I found pretty appalling (short version, 300-page version).
Looks like a great idea, very glad someone is pursuing the roll-up-your-sleeves method here.
I think the best addition you could make here is a business plan: roughly how much it would cost to replicate a given number of studies, how you would choose studies for replication to maximize efficiency and impact, how much money and time it would take to be replicating 1% or 10% of top studies, etc. I’d also personally like to see a different version of “what has been achieved” that didn’t lean as much on collaborations / the work of collaborators, as I find these basically meaningless.
This seems like a really great thing to try at small scale first. Seems important to have a larger vision but make Little Bets to start, as Peter Sims or Cal Newport would say. You don’t want to start with 30+ people with serious expertise at 90% likelihood of conversion because you want to anneal into a good structure, not bake your early mistakes into the lasting organizational culture. (Maybe you had already planned on this but seems worth clarifying as one of the most common mistakes made by EAs.)
Single-issue lobbying group called “2%”, perhaps. Or 5% if NGDP.
Some other possible takeaways that I would lean toward:
- Try to fund groups which will pivot on their advocacy faster
- Fund advocacy of the opposite, now
- Go further and try funding or creating a think tank that is actually committed to targets instead of unidirectional force
I definitely agree re antitrust; it seems like a slam dunk. If I have time after this case, I was thinking of slowly reaching out to try to elicit an American version from someone, or finding out why that’s not on the table. I’ve been made quite aware of how much I don’t know about ongoing projects in this space.
I did email ~20 of them about drafting amicus briefs and didn’t get any takers; plausibly they would be willing to give some lesser form of help if you had ideas for what to ask for.
Good idea, I’ll forward this. I’m focusing on US/Western profs for now because A) many Indian institutes are already involved, Indian professors seem to know about the case, and Sci-Hub’s lawyers are much better connected there, and B) I think international/Western backing is an important source of clout diversification. Many Indian Supreme Court cases actually cite American amici as an important legal source.
I think backups of Sci-Hub would be a good idea if you can find any legal avenues to create them. I’m not sure that’s very tractable, and it doesn’t appear to be all that neglected (though existing backups are probably mostly in jurisdictions where they’re illegal).
Re scientific progress, I agree that it’s not obviously a good thing, but after thinking about this extensively with little resolution, my conclusion is roughly: given that we cannot reasonably learn enough to resolve this uncertainty, and we can’t coordinate on acting as if scientific progress is a negative thing, and it would hamstring us in many ways to act as such, I think we should basically treat “generally advancing science” as a fine/good thing. We can circumscribe areas like AI capabilities and gain-of-function as specifically bad, for better results and a more reasonable stance.
Sci-Hub sued in India
I don’t think the issue is that we don’t have any people willing to be radicals and lose credibility. I think the issue is that radicals on a given issue tend to also mar the reputations of their more level-headed counterparts. Weak men are superweapons, and groups like PETA, Greenpeace, and the Westboro Baptist Church seem to have attached lasting stigma to their causes because people’s pattern-matching minds associate the entire movement with its worst example.
Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don’t tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.
I really want to pull good insights out of this to improve the movement. However, the only thing I’m really getting is that we should think more about systemic change, which a) already seems to be the direction we’re moving in and b) doesn’t seem amenable to much more focus than we are already liable to give it, i.e., we should devote some resources but not very much. My first reaction was that maybe Doing Good Better should have spent a little time explaining why this is difficult, but it’s a book and had to make sacrifices when choosing what to focus on, so I don’t think that’s even a possible improvement. I think the best thing to come from this is your realization of potential coordination problems.
While I encourage well-thought-out criticism of the movement and different viewpoints for us to build on, I can’t help but echo kbog’s sentiment that this seems a bit too continental to learn from. The feeling I get is that this is one of many critiques I’ve encountered that are vaguely uncomfortable with our notions and then paint a gestalt that can be slowly and assiduously associated with various negatives. There’s a lot of interplay between forest and trees here, but it’s really difficult to communicate when one side is trying to work with concrete claims and the other with associations.
In summation, I think on most of these points (individualism, demandingness, systemic change, x-risk) we are pretty aware of the risky edges we walk along, and can’t really improve our safety margins much without violating our own tenets.
I think it’s very good that Matthews brought this point up, so the movement can make sure we remain tolerant and inclusive of people who are mostly on our side but differ on a few small points. Especially toward those focused on x-risk, if he finds them to be the most aggressive, but really I think it should apply to all of us.
That being said, I wish he had himself refrained from being divisive with allegations that x-risk is self-serving for those in CS. Your point about CS concentrators being “damned if you do, damned if you don’t” is great. Similarly, the point (you made on Facebook?) about many people converting from other areas into computer science as they realize the risk is a VERY strong counterargument to his. But more generally, it seems like he is applying asymmetric standards here. The x-risk crowd no more deserves his label of biased and self-serving than the animal rights crowd or the global poverty crowd does; many of the people in those groups also began there, so any rebuttal could label them self-serving for promoting their favored cause if we wanted. Ad hominem is a dangerous road to go down, and I wish he would refrain from critiquing the people and stick to critiquing the arguments (which actually promotes good discussion from people like you and Scott Alexander regarding his pseudo-probability calculation, even if we’ve been down this road before).
If big donors feel better and donate more, I’m not convinced that is a neutral thing. If running a matching donation drive doesn’t get more donations from the matchees but does pull more money from the matchers, that may have a fairly large effect. I have certainly thought about donating more money than I otherwise would have when I heard it could be used to run a matching fundraiser. If matches truly don’t attract more matchee funds, then I suppose it is epistemically unvirtuous to ask matchers to donate, since asking implies the match has an effect; but a mechanism like this for getting matchers to donate more seems not too different from the original deal (where it seems like the matchees are kind of being deluded into giving more anyway).
I find another motte-and-bailey situation more striking: the motte of “make your donations count by going to the most effective place” and the bailey of “also give all your money!”
I personally know a lot of people who have been turned off of effective altruism by the bailey here, and while some also disagree with the motte, they are legions fewer. In the discussion about how to present EA to those we know, I think in many circumstances I’d recommend sticking with the motte, especially until you know they are very much on board with it, and perhaps letting them come up with the bailey on their own.
Has anyone done an EA evaluation of the expected value of the Sentinel Mission (formerly B612)?
I also find that it’s frequently the most helpful to be only a little weird in public, but once you have someone’s confidence you can start being significantly more weird with them because they can’t just write you off. Most of the best of both worlds.
I’m a physics undergrad who is very interested in quantum computing. Interested to hear thoughts on it from someone who is a rationalist; if you would email me at Connor_Flexman AT brown DOT edu, it would be wildly helpful.
I’ve heard from several of my friends that EA is frequently introduced to them in a way that seems elitist and moralizing. I was wondering if there is any data on how many people learned about it through which sources. One possibility that came up was running TV/radio/internet ads for it (in a gentler, non-elitist manner), in the hope that the outreach and any recruited donors would more than pay back the original cost. Thoughts?
I really appreciate you looking into this topic. However, I think you want much, much bigger error bars on these. Interventions like this are known to have massive selection effects and difficulty establishing causality; giving point estimates sweeps under the rug the main thing I’m interested in when asking whether these interventions work.
For example, ACE had a similar problem when it was starting out. For one of the charities, it relied on survey data to look for an effect and gave estimates of how effective interventions were based on that data, but the whole interesting question was basically whether we should believe at all the kind of conclusion they drew from the surveys. In the end, of course, the answer was no.
I didn’t read the whole post, but the reasoning in the summary and early sections seemed to center on point estimates and taking data at face value. The kind of analysis that would convince me to change my actions here is reliability analysis: seeking to show any place within this domain that has extremely clear support for a real effect. By default this basically doesn’t exist for social interventions, in my experience, so the conclusions are unfortunately driven more by the vagaries of the input data than by the underlying reality.
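To make the point-estimate complaint concrete, here is a minimal toy sketch (all numbers and the bias model are my own hypothetical assumptions, not taken from the post) of how propagating uncertainty about selection effects and measurement reliability can turn a tidy point estimate into an interval too wide to act on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical headline result: the intervention "adds" 0.30 units of effect.
observed_effect = 0.30
n_draws = 100_000

# Assumption: an unknown fraction of the observed effect is selection bias
# (people who opt in differ from those who don't), so put a wide prior on it.
selection_bias_fraction = rng.uniform(0.0, 0.9, n_draws)

# Assumption: survey self-reports are noisy; model that as a reliability factor.
reliability = rng.uniform(0.4, 1.0, n_draws)

# Implied "true" effect under each draw of the nuisance parameters.
true_effect = observed_effect * (1 - selection_bias_fraction) * reliability

lo, hi = np.percentile(true_effect, [5, 95])
print(f"point estimate:                 {observed_effect:.2f}")
print(f"90% interval after propagation: [{lo:.2f}, {hi:.2f}]")
print(f"P(true effect < 0.05):          {np.mean(true_effect < 0.05):.2f}")
```

Under these made-up priors the interval runs from near zero to a large fraction of the original estimate, which is exactly the situation where a lone point estimate quietly decides the question the error bars should be asking.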