Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum team with various site issues. If something is broken on the site, there’s a good chance it’s my fault (Sorry!).
Habryka
- Plus #1: I assume that anything the animal industry doesn’t like would increase costs for raising chickens. I’d correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).
I think this feels like a very aggressive zero-sum mindset. I agree that sometimes you want to have an attitude like this, but at least at present I think that acting with the attitude of “let’s just make the animal industry as costly as possible” would understandably cause backlash, make it harder to come to agreements, and I think a reasonable justice system would punish people who do such things (even if they think they are morally in the right).
Wow, yeah, I was quite misled by the lead. Can anyone give a more independent assessment of what this actually means legally?
Does someone have a rough Fermi estimate of the tradeoffs here? On priors it seems like chickens bred to be bigger would overall cause less suffering, because each one replaces more than one chicken that isn’t bred to be as big, though I would expect those bigger chickens to suffer more individually. I can imagine it going either way, but I guess my prior is that it was broadly good for each individual chicken to weigh more.
I am a bit worried the advocacy here is based more on a purity/environmentalist perspective in which genetically modifying animals is bad, but I don’t give that perspective much weight. It could also be great from a more cost-effectiveness/suffering-minimization oriented perspective, and I would be curious to hear people’s takes.
(Molly was asked this question in a previous post two months ago, but as far as I can tell responded mostly with orthogonal claims that don’t really engage with the core ethical question, so I am curious to hear other people’s takes.)
Announcing the Q1 2025 Long-Term Future Fund grant round
I don’t think anyone uses “valuable” in that way. Saying “the most valuable cars are owned by Jeff Bezos” doesn’t mean that in-aggregate all of his cars are more valuable than other people’s cars. It means that the individual cars that Jeff Bezos owns are more valuable than other cars.
I agree that this is what the post is about, but the title and this[1] sentence do indeed not mean that, under any straightforward interpretation I can think of. I think bad post titles are quite costly (cf. lots of fallout from “politics is the mindkiller” being misapplied over the years), and good post titles are quite valuable.
- ^ “This points to an important conclusion: The most valuable dollars aren’t owned by us. They’re owned by people who currently either don’t donate at all, or who donate to charities that are orders of magnitude less effective than the ones we typically discuss here.”
The title and central claim of the post seem wrong, though my guess is you mean it poetically (but poetry that isn’t true is, I think, worse; though IDK, it’s fine sometimes, and maybe it makes more sense to other people).
Clearly the dollars you own are the most valuable. If you think someone else could do more with your dollars, you can just give them your dollars! This isn’t guaranteed to be true (you might not know who would ex-ante best use dollars, but still think you could learn about that ex-post and regret not giving them your money after the opportunity has passed), but I think it’s almost always true.
The correct title and argument would be “influencing other people’s donation decisions is often more valuable than improving your own”, but I think that is a very different claim from the title and central bolded sentence.
“Thinking someone else’s dollars are more valuable than your own” would IMO clearly imply that you would prefer the world where they had more money, and you had less money. But that’s not what the post is talking about (and is I think wrong in almost all cases). Or maybe alternatively that you would prefer having their dollars instead of your current dollars (though given that dollars are fungible, that seems kind of weird).
I agree that all-things-considered they say that, but I am objecting to “one of the things to consider”, and so IMO it makes sense to bracket that consideration when evaluating my claims here.
But I was first! I demand the moderators transfer all of the karma of Jeff’s comment to mine :P
Accolades for intellectual achievements traditionally go to the person who published first.
Clearly you believe that probabilities can reliably be less than 1%. Your probability of being struck by lightning today is not “0% or maybe 1%”, it’s on the order of 0.001%. Your probability of winning the lottery is not “0% or 1%”, it’s ~0.0000001%. I am confident you deal with probabilities that have much less than 1% error all the time, and feel comfortable using them.
It doesn’t make sense to think of humility as something absolute like “don’t give highly specific probabilities”. You frequently have justified beliefs in very specific probabilities (the probability that random.org’s random number generator will generate “2” when asked for a random number between 1 and 10 is exactly 10%, not 11%, not 9%, exactly 10%, with very little uncertainty about that number).
You can sort by “oldest” and “newest” in the comment-sort order, and see that mine shows up earlier in the “oldest” order, and later in the “newest” order.
I agree that this is an inference. I currently think the OP thinks that in the absence of frugality concerns this would be among the most cost-effective uses of money by Open Phil’s standards, but I might be wrong.
University group funding was historically considered extremely cost-effective when I talked to OP staff (beating out most other grants by a substantial margin). Possibly there was a big update here on cost-effectiveness excluding frugality-reputation concerns, but I currently think there hasn’t been (though I would update if someone from OP said otherwise, and then I would be interested in talking about that).
I was first! :P
Copying over the rationale for publication here, for convenience:
Rationale for Public Release
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
1. To prevent accidents and well-intentioned development
If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.[1] While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
2. To build guardrails that could reliably prevent misuse
There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term.
Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
I agree that things tend to get tricky and loopy around these kinds of reputation-considerations, but I think at least the approach I see you arguing for here is proving too much, and has a risk of collapsing into meaninglessness.
I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. “Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X’, then you will take better actions, so I am just going to claim they are X’, as long as both X and X’ include cost-effectiveness”.
In this case, it seems like the very people the club is trying to explain the concepts of EA to are also the people OP is worried about alienating by paying the organizers. What is going on here is that the goodness of the reputation-protecting choice is directly premised on the irrationality and ignorance of the very people you are trying to attract/inform/help. Explaining that isn’t impossible, but it does seem like a particularly bad way to start off a relationship, and so I expect the consequences to be bad.
“Yes, we would actually be paying people, but we expected you wouldn’t understand the principles of cost-effectiveness and so be alienated if you heard about it, despite us getting you to understand them being the very thing this club is trying to do”, is IMO a bad way to start off a relationship.
I also separately think that optimizing heavily for the perception of low-context observers in a way that does not reveal a set of underlying robust principles, is bad. I don’t think you should put “zero” weight on that (and nothing in my comment implied that), but I do think it’s something that many people put far too much weight on (going into detail of which wasn’t the point of my comment, but on which I have written plenty about in many other comments).
There is also another related point in my comment, which is that “cost-effectiveness” is of course a very close sister concept to “wasting money”. I think in many ways, thinking about cost-effectiveness is where you end up if you think carefully about how you can avoid wasting money, and is in some ways a more grown-up version of various frugality concerns.
When you increase the total cost of your operations (by, for example, reducing the cost-effectiveness of your university organizers, forcing you to spend more money somewhere else to do the same amount of good) in order to appear more frugal, I think you are almost always engaging in something that has at least the hint of deception.
Yes, you might ultimately be more cost-effective by getting people to not quite realize what happened, but when people are angry at me or others for not being frugal enough, I think it’s rarely appropriate to spend more to appease them, even if doing so would then save me enough money to make it worth it. While this isn’t happening as directly here as it was in other similar situations, like the question of whether the Wytham Abbey purchase was frugal enough, I think the same dynamics and arguments apply.
I think that if someone tries to think seriously and carefully through what it would mean to be properly frugal, they would not endorse you sacrificing the effectiveness of your operations, causing you to ultimately spend more to achieve the same amount of good. And if they learned that you did, and thought carefully about what this implies about your frugality, they would end up more angry, not less. That, I think, is a dynamic worth avoiding.
In survey work we’ve done of organizers we’ve funded, we’ve found that on average, stipend funding substantively increased organizers’ motivation, self-reported effectiveness, and hours spent on organizing work (and for some, made the difference between being able to organize and not organizing at all). The effect was not enormous, but it was substantive.
[...]
Overall, after weighing all of this evidence, we thought that the right move was to stick to funding group expenses and drop the stipends for individual organizers. One frame I used to think about this was that of “spending weirdness points wisely.” That is, it would be nice for student organizers, who are discussing often-unconventional ideas within effective altruism or AI safety, to not also have to discuss (or feel that they need to defend) stipends.

I think it’s a mistake to decide to make less cost-effective grants, out of a desire to be seen as more frugal (or to make that decision on behalf of group organizers to make them appear more frugal). At the end of the day making less cost-effective grants means you waste more money!
I feel like on a deeper level, organizers now have an even harder job explaining things. The reason why organizers get the level of support they are getting no longer has a straightforward answer (“because it’s cost-effective”) but a much more convoluted one (“yes, it would make sense to pay organizers based on the principles this club is about, but we decided to compromise on that because people kept saying it was weird, which, to be clear, we generally don’t think is a good reason for not engaging in an effective intervention, and indeed most effective interventions are weird and kind of low-status, but in this case that’s different”).
More broadly, I think the “weirdness points” metaphor has caused large mistakes in how people handle their own reputation. Controlling your own reputation intentionally while compromising on your core principles generally makes your reputation worse and makes you seem more shady. People respect others having consistent principles; it’s one of the core drivers of positive reputation.
My best guess is this decision will overall be more costly from a long-run respect and reputation perspective, though I expect it to reveal itself in different ways than the costs of paying group organizers, of course.
I donate more to Lightcone than my salary, so it doesn’t really make any sense for me to receive a salary, since that just means I pay more in taxes.
I of course donate to Lightcone because Lightcone doesn’t have enough money.
Lightspeed Grants and the S-Process paid $20k honorariums to 5 evaluators. In addition, running the round probably cost around 8-ish months of Lightcone staff time, with a substantial chunk of that being my own time, which as CEO is generally at a premium (I would value it organizationally at ~$700k/yr on the margin, with increasing marginal costs, though to be clear, my actual salary is currently $0), and it also had some large diffuse effects on organizational attention.
This makes me think it would be unsustainable for us to pick up running Lightspeed Grants rounds without something like ~$500k/yr of funding for it. We distributed around ~$10MM in the round we ran.
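For concreteness, here is a rough, purely illustrative back-of-envelope version of that cost estimate. The split of the ~8 staff-months between my own time and other staff time, and the loaded cost of the other staff time, are hypothetical numbers I’m making up for the sketch, not figures anyone has quoted:

```python
# Illustrative fermi on the cost of running a Lightspeed Grants round.
# The 3/5 month split and the $150k/yr rate for non-CEO staff time are
# hypothetical assumptions for this sketch only.

honorariums = 5 * 20_000            # $20k honorariums to 5 evaluators

ceo_months = 3                      # hypothetical: "a substantial chunk" of ~8 months
other_staff_months = 8 - ceo_months
ceo_rate_per_year = 700_000         # marginal organizational value of my time (from above)
other_rate_per_year = 150_000       # hypothetical loaded cost of other staff time

staff_time_cost = (ceo_months / 12) * ceo_rate_per_year \
                + (other_staff_months / 12) * other_rate_per_year

total = honorariums + staff_time_cost
distributed = 10_000_000            # ~$10MM distributed in the round

print(f"Honorariums:       ${honorariums:,.0f}")
print(f"Staff time:        ${staff_time_cost:,.0f}")
print(f"Rough total:       ${total:,.0f}")   # excludes diffuse organizational attention
print(f"As % of distributed: {total / distributed:.1%}")
```

Under those made-up assumptions this comes out to roughly $340k (about 3–4% of the money distributed), and with the diffuse costs to organizational attention on top, it lands in the same rough ballpark as the ~$500k/yr figure above.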
Some of my thoughts on Lightspeed Grants from what I remember:
- I don’t think it’s ever a good idea to name something after the key feature everyone else in the market is failing at. It leads to particularly high expectations and is really hard to get away from. (E.g. OpenAI)
- The S-process seemed like a strange thing to include for something intended to be fast. As far as I know the S-process has never been done quickly.
You seem to be misunderstanding both Lightspeed Grants and the S-Process. The S-Process and Lightspeed Grants both feature speculation/venture grants, which enable a large group of people to make fast unilateral grants. They are by far the fastest grant-decision mechanism that I know of, and they have been going strong for multiple years now. If you need funding quickly, an SFF speculation grant is by far the best bet, I think.
It ended up taking much longer than expected for decisions but still pretty quick overall.
I think we generally stayed within our communicated timelines, or only mildly extended them. We did also end up getting more money which caused us to reverse some rejections afterwards, but we did get back to everyone within 2 weeks on whether they would get a speculation grant, and communicated the round decisions at the deadline (or maybe a week or two later, I remember there was a small hiccup).
I’m interested to know why we haven’t seen Lightspeed Grants again?
Ironically, one of the big bottlenecks was funding. OpenPhil was one funder who told us they wouldn’t fund us for anything but our LW work (and even that funding disappeared soon after), and funding-coordination work doesn’t seem to pay well. Distributing millions of dollars also didn’t combine very well with being sued by FTX.
I am interested in picking it back up again, but it is also not clear to me how sustainable working on that is.
Huh, yeah, seems like a loss to me.
Correspondingly, while the OP does not engage in “literally lying”, sentences like “In light of this ruling, we believe that farmers are breaking the law if they continue to keep these chickens.” and “The judges have ruled in favour on our main argument—that the law says that animals should not be kept in the UK if it means they will suffer because of how they have been bred.” strike me as highly misleading, or at least willfully ignorant, based on your explanation here.