GWWC board member, software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
Jeff Kaufman 🔸
Looking at the two comments, I see:
-
Your comment on a comment on a quick take, suggesting suing OpenAI for violating their charter and including an argument for why. Voted to +4.
-
Aaron's quick take, suggesting suing OpenAI for their for-profit conversion. No argument included. Voted to +173.
I don't see anything weird here. With the design of the site, a quick take is likely to get much more attention than a nested comment on a quick take, and then when people start voting one up this snowballs because the site makes it more visible.
But even if you'd posted your comment as your own quick take I think it probably wouldn't have taken off: it doesn't give enough context for someone seeing it out of nowhere to figure out whether they think it's worth paying attention to, or enough of an explanation of what a suit would look like. You can gloss this as packaging/rigor, I guess, but I think it's serving a useful purpose.
(I think neither posting is amazing: a few minutes with an LLM asking about the rules for converting 501(c)(3)s into for-profits would have helped both a lot. I'd hold that against them if they were regular posts, but that's not a standard we do, or should, hold quick takes or comments to.)
I post a fair number of offbeat ideas like this, and they don't generally receive much attention, which leaves me feeling demoralized
In general, if you want ideas to receive attention you should expect to put in some work preparing them for other people's attention: gather the information that will help others evaluate them, and make an argument for why the ideas are important. If you do that work, and then post as a quick take or (better, but requiring more investment) a top-level post, I do think you'll get attention. That's no guarantee of a positive reaction (people may disagree that you've made your case), but I don't think it's a process that selects against weird ideas.
There's a reason people use "low-effort" as a negative term: you pay with your own effort in a bid on other people's attention.
I got downvoted/disagreevoted for asking if there's a better place to post offbeat ideas
Your comment starts with claims about what people want on the forum and a thesis about how to gain karma, and only gets to asking about where to post weird ideas in the last paragraph. I interpret the downvoting and disagree voting as being primarily about the first two paragraphs.
basically acknowledges that this is a hypothetical, and new ideas mostly don't get posted here
I wasn't trying to make a claim either way on this in my comment. Instead, I was adding a caveat that I was going by my impression of the site rather than taking the time to look for specific examples that would support or counter my claim, and so people should put less weight on it.
Thinking now, some example ideas that were new/weird in the sense that they were pretty different from the lines of thought I'd seen here before, but that still got attention (or at least comments / votes):
-
Top level post: Let's think about slowing down AI
-
Quick take: EA Awards
-
Copying Chandler's response from the comments of the open thread:
Hi Arnold,
Thanks for your question! You are correct that our funds raised for metrics year 2023, $355 million, were below our 10th-percentile estimate from our April 2023 blog post. We knew our forecasts were quite uncertain (80% confidence interval), and, looking back, we see two primary reasons that our forecasts were incorrect.
First, we were optimistic about the growth of non-Open Philanthropy funding. Our funds raised in 2023 from sources other than Open Philanthropy were $255 million, which is about at our 10th-percentile estimate and similar to the $253 million we raised from those sources in 2022 (see the bottom chart in the blog post). We've continued to expand our outreach team, with a focus on retaining our existing donors and bringing in new donors, and we believe these investments will produce results over the longer term.
Second, Open Philanthropy committed $300 million in October 2023 and gave us flexibility to spend it over three years. We chose to allocate $100 million to each of 2023, 2024, and 2025, which is less than the $250 million we had forecast for 2023.
We discuss our current funding situation in a recent blog post about our approach to grant deployment timelines. We remain funding constrained at our current cost-effectiveness bar. Raising more money remains our single most important lever for maximizing impact: if we have more funding, we'll be able to make more grants to cost-effective programs that save and improve lives.
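The figures in Chandler's reply reconcile as a quick sanity check (my own arithmetic, not GiveWell's; it assumes the $355M total is simply the non-OP funds plus the 2023 slice of the OP commitment):

```python
# Reconciling the 2023 fundraising figures quoted above.
# Assumption: the $355M total = non-OP funds + the portion of the
# Open Philanthropy commitment allocated to 2023.
non_op_2023 = 255            # $M raised from sources other than Open Philanthropy
op_commitment = 300          # $M committed by Open Philanthropy in October 2023
op_allocated_2023 = op_commitment / 3   # spread evenly over 2023-2025

total_2023 = non_op_2023 + op_allocated_2023
print(total_2023)            # 355.0, matching the reported total

# The OP allocation came in $150M under the $250M forecast for 2023,
# which accounts for most of the gap against the original projection.
forecast_gap = 250 - op_allocated_2023
print(forecast_gap)          # 150.0
```

So the shortfall against the forecast is driven mainly by the OP allocation choice, not by a drop in non-OP giving, which was roughly flat year over year ($253M in 2022 vs. $255M in 2023).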
I don't think this is the strongest case for abortion, taking the worldview of the protesters as a given. If you presented this BOTEC to them, I think it's very likely that they would tell you that they care much more about humans than chickens.
I would guess that weird EA ideas that were appropriately caveated would do reasonably well here, and the main negative reaction is to weird ideas that are presented overconfidently? But this is just my impression of the Forum, not a result of looking over how various posts have done.
I do list this on my donations page, but I'm trying to be pretty conservative in what I count as my donations: only the actual money I actually donate. So I don't count it towards my 50%, and I put it in grey italics like my employer donation matches, donations in exchange for work, the PayPal 1% match, and other counterfactual money moved that I don't fully include.
I think it's fine (and probably good) if others are less strict about this, though!
Voluntary Salary Reduction
NAO Updates, January 2025
Thanks!
I think it's some combination of temperament (I just really like writing!) and practice (I've been writing posts multiple times a week for over a decade)?
I think you're probably also only seeing my better posts, since I don't cross-post most things to the Forum?
Wow!
If you'd like me to review it for accuracy before you publish it, I'd be happy to!
How Much to Give is a Pragmatic Question
Sorry! I've edited my comment to make it clearer that I'm trying to say that the suffering caused by eating meat is not the only factor you should weigh in estimating expected utility.
(For what it's worth, I do still think it's likely that, taking these other benefits into account and assuming you think society is seriously undervaluing the moral worth of animals, veganism still doesn't make sense as a matter of maximizing utility.)
Perhaps delete this one (or move it to your drafts), since I think the other one went up first?
Therefore, veganism does not make sense for someone aiming to maximize their EU. … The only other factor stopping me from eating meat was a deontological side-constraint.
It looks to me like you're not including some other pretty important ways veganism can increase expected utility beyond the direct impact of reducing the suffering caused by your diet. For example, it's a clear signal to other people that you think animals matter morally and that you're willing to make sacrifices for their benefit. And it helps build a norm of not harming animals for human benefit, reducing the risk of locking in speciesist values. I think there are many EAs who wouldn't make the sacrifice to be vegan in a hypothetical world where no one would ever know their dietary choices, but who think it's a very important thing to do in the world we do live in.
It looks like this posted twice? It seems to be a duplicate of My Problem with Veganism and Deontology
I think the main downside of setting up a sample collection box is that the samples would probably sit a lot longer before being processed, and RNA degrades quickly. I also suspect you wouldn't get very many samples.
(The process itself is super simple for participants: you just swab your nose.)
They both show up as 2:23 pm to me: is there a way to get second-level precision?
Thanks for sharing this, Aaron!
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
-
To prevent accidents and well-intentioned development: If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.[1] While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
-
To build guardrails that could reliably prevent misuse: There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term. Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
When to work on risks in public vs. in private is a really tricky question, and it's nice to see this discussion of how this group handled it in this case.
-
I would expect most donations to be in giving season, though, which in 2022 would be after FTX collapsed.