GWWC board member, software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
Jeff Kaufman 🔸
Perhaps delete this one (or move it to your drafts), since I think the other one went up first?
Therefore, veganism does not make sense for someone aiming to maximize their EU. ... The only other factor stopping me from eating meat was a deontological side-constraint.
It looks to me like you're not including some other pretty important ways veganism can increase expected utility beyond the direct impact of reducing the suffering caused by your diet. For example, it's a clear signal to other people that you think animals matter morally and are willing to make sacrifices for their benefit. And it helps build a norm of not harming animals for human benefit, reducing the risk of locking in speciesist values. I think there are many EAs who wouldn't make the sacrifice to be vegan in a hypothetical world where no one would ever know their dietary choices, but who think it's a very important thing to do in the world we do live in.
It looks like this posted twice? It seems to be a duplicate of My Problem with Veganism and Deontology
I think the main downside of setting up a sample collection box is that the samples would probably sit a lot longer before being processed, and RNA degrades quickly. I also suspect you wouldn't get very many samples.
(The process itself is super simple for participants: you just swab your nose.)
They both show up as 2:23 pm to me: is there a way to get second level precision?
Thanks for sharing this, Aaron!
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
- To prevent accidents and well-intentioned development: If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.[1] While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize D-glucose to allow growth in standard media.
- To build guardrails that could reliably prevent misuse: There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term. Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
When to work on risks in public vs private is a really tricky question, and it's nice to see this discussion of how this group handled it in this case.
Here's the actual answer, @Denkenberger🔸: https://www.jefftk.com/p/historical-net-worth
Without running the numbers, I think our net worth is decreasing as we pull from savings to donate, but much less than you'd guess from an analysis like this that excludes unrealized capital gains.
The biggest factor here is our highly leveraged purchase of our house, which has appreciated dramatically (despite needing a lot of money to resolve long-deferred maintenance).
I should calculate and share our mark-to-market net worth over time, though getting the historical data together may be challenging...
Detection of Asymptomatically Spreading Pathogens
It looks like they have one person in common: StopAI team ∩ PauseAI team is Guido Reichstadter. But he's listed on the former as "protestor" and on the latter as "volunteer", and I think "separate outfit" is right.
People who prioritize x-risk often disregard animal welfare (or the welfare of non-human beings, whatever shape those beings might take in the future). ... This isn't universally true: I know some people who care about animals but still prioritize x-risk.
For what it's worth this hasn't been my experience: most of the people I know personally who are working on x-risk (where I know their animal views) think animal welfare is quite important. And for the broader sample where I just know diet, the majority are at least vegetarian.
Thanks for trying this!
Reviewing its judgements:
- I think YIMBY is not very left or right. Here's how Claude put it:
JK: Where does the YIMBY movement fall on the left-right spectrum in the US?

Claude: The YIMBY (Yes In My Backyard) movement tends to fall on the center-left to center-right of the political spectrum in the US. YIMBYs generally support increasing housing supply and density to address housing affordability, which aligns with liberal/progressive goals. However, their support for market-based solutions and property rights puts them at odds with some further-left positions. Overall, YIMBY is considered a centrist or "third way" approach to housing and urban development issues.
- I don't know much about CHAI or ASG, but given that they were founded by politicians on the US left it seems reasonable to guess they're left of center. Like, I think if OP were recommending grants to equivalent international orgs founded by US-right politicians we'd count that the other way? Though I think "political think tank or organization within the United States" doesn't really apply.
- It seems like it thinks animal advocacy and global health are left-coded, which on one hand isn't totally wrong (I expect global health and animal advocates to be pretty left on average), but on the other isn't really what we're trying to get at here.
- Since the GPT-o1-preview response reads to me as "these grants don't look politically coded", I'd be curious whether you'd also get a similar response to:
Here is a spreadsheet of all of Open Philanthropy's grants since January 2024. Could you identify whether any of them might meaningfully constitute a grant to a "left of center" political think tank or organization within the United States?
I really appreciate you writing up the Voting Norms section! Making it clear when you see "tactical" participation as beneficial vs harmful is very helpful.
Consider funding the Nucleic Acid Observatory to Detect Stealth Pandemics
if somebody thinks Open Phil is underinvesting in longtermism compared to the ideal allocation, then they should give to longtermist charities; the opportunities available to Open Phil might be significantly stronger than the ones available to donors
"Topping up" OP grants does reasonably well in this scenario, no?
Personal AI Planning
While I think this piece is right in some sense, seeing it written out clearly it feels like there is something uncooperative and possibly destructive about it. To take the portfolio management case:
1. Why do the other fund managers prefer 100% stocks? Is this a thoughtful decision you are unthinkingly countering?
2. Each fund manager gets better outcomes if they keep their allocation secret from others.
I think I'm most worried about (2): it would be bad if OP made their grants secret or individuals lied about their funding allocation in EA surveys.
Tweaking the fund manager scenario to be a bit more stark:
- There are 100 fund managers.
- 50 of them prefer 100% stocks; 50 prefer an even split between stocks and bonds.
- If they each decide individually, you'd get an overall allocation of 75% stocks and 25% bonds.
- If instead they all fully follow the lessons of this post, the ones that prefer bonds go 100% bonds, and the overall allocation is 50% stocks and 50% bonds.
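A quick sketch to check the arithmetic in the two scenarios above (the manager counts and preferences come from the bullets; the function name is just for illustration):

```python
def aggregate_stock_share(allocations):
    """Average stock share across equally-sized funds."""
    return sum(allocations) / len(allocations)

# Scenario 1: each of the 100 managers picks their honest preference.
honest = [1.0] * 50 + [0.5] * 50  # 50 all-stocks managers, 50 even-split managers
print(aggregate_stock_share(honest))  # 0.75 -> 75% stocks, 25% bonds

# Scenario 2: the even-split managers compensate adversarially,
# going 100% bonds to pull the aggregate toward their preferred 50/50.
adversarial = [1.0] * 50 + [0.0] * 50
print(aggregate_stock_share(adversarial))  # 0.5 -> 50% stocks, 50% bonds
```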
It feels to me that the 75-25 outcome is essentially the right one, if the two groups are equally likely to be correct. On the other hand, the adversarial 50-50 outcome is one group getting everything they want.
Note that I don't think this is an issue with other groups covering the gaps left by the recent OP shift away from some areas. It's not that OP thought those areas should receive less funding, but that GV wanted to pick their battles. In that case, it seems fine and good for external groups that do accept the case for funding to respond by supporting work in these areas. Which Moskovitz confirms: "I'm explicitly pro-funding by others." And: "I'd much prefer to just see someone who actually feels strongly about that take the wheel."
(This also reminds me of the perpetual debate about whether you should vote things on the Forum up/down directionally vs based on how close the vote total currently is to where you think it should be.)
Sorry! I've edited my comment to make it clearer that I'm trying to say that suffering caused by eating meat is not the only factor you should weigh in estimating expected utility.
(For what it's worth, I do still think it's likely that, taking these other benefits into account and assuming you think society is seriously undervaluing the moral worth of animals, veganism still doesn't make sense as a matter of maximizing utility.)