Thomas Kwa (AI safety researcher)
Isn’t particulate what we care about? The purpose of the filters is to get particulate out of the air, and the controlled experiment Jeff did basically measures that. If air mixing is the concern, ceiling fans can mix air far more than required, and you can just measure particulate in several locations anyway.
A pair of CR boxes can also hit 350 CFM CADR at the same noise level, for lower materials cost than either this or the ceiling fan, and with much lower installation cost. E.g. two of this CleanAirKits model on half speed would probably cost <$250 if it were mass-produced. This is the setup in my group house living room and it works great! DIY CR boxes can get to $250/350 CFM right now.
The key is having enough filter area to keep static pressure, and thus power and noise, minimal. The scaling works out such that every doubling of filter area at a given CADR decreases noise by about 4.5 dB, assuming noise is proportional to power and pressure goes as (face velocity)^1.5, which are common rules of thumb. I’d guess that the pair of CR boxes has 5x more filter area, so an 11 dB advantage for the closet sound isolation to make up. MERV filters also get slightly higher efficiency when the face velocity is slower.
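Here's a minimal sketch of that scaling, under the same assumptions (noise tracks 10·log10(power), power = pressure × airflow, pressure ∝ face velocity^1.5, and face velocity = CADR / filter area):

```python
import math

def noise_delta_db(area_ratio: float) -> float:
    """Relative noise change (dB) from scaling filter area at constant CADR.

    Assumes noise tracks 10*log10(power), power = pressure * airflow,
    pressure ~ (face velocity)^1.5, and face velocity = CADR / filter area,
    so power scales as area^-1.5 when CADR is held fixed.
    """
    return 10 * math.log10(area_ratio ** -1.5)

print(noise_delta_db(2))  # doubling filter area: about -4.5 dB
print(noise_delta_db(5))  # 5x the filter area: about -10.5 dB, i.e. ~11 dB quieter
```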
I have used inline fans for other purposes and even the air passing through a 6″ duct generates some noise and adds static pressure. With a CR box you’re doing the minimal work necessary to filter air.
Standard HVAC parts do have many advantages though. The aesthetics are unmatched, parts are likely to stay available, and they’re very durable.
I’m a big fan of this. Imagine if this became the primary way billionaires are ranked for prestige.
Efficiency can decrease too, especially when there are lots of very small particles like smoke. See this reddit thread: https://www.reddit.com/r/crboxes/comments/1fznar2/comment/lr2j404/.
My understanding is that the small particles can basically cover the surface area of the fibers and block their electric field. Here’s an image from one of the linked studies showing filters that are (a) clean, (b) after one test, and (c) after absorbing 2 grams/m^2 of smoke, at which point efficiency dropped from 92% to 33%.
There are at least three common justifications for not donating, each of which can be quite reasonable:
1. A high standard of living and saving up money are important selfish wants for EAs in AI, just as they are in broader society.
2. EAs in AI have needs (either career or personal) that require lots of money.
3. Donations are much lower impact than one’s career.
I don’t donate to charity other than animal product offsets; this is mainly due to 1 and 2. As for 1, I’m still early enough in my career that immediate financial stability is a concern. Also, for me, forgoing luxuries like restaurant food and travel makes me demotivated enough that I have difficulty working. I have tried to solve this in the past but have basically given up, and now treat these luxuries as partially needs rather than wants.
For people just above the top-1% income threshold of $65,000, 3 and 2 are very likely to apply. $65,000 is roughly the rate paid to marginal AI safety researchers, so donating 20% buys only about 20% of someone’s career impact even if grantmakers find an opportunity as good as they are. If they also live in a high-cost-of-living area, 2 is very likely: in San Francisco the average rent for a one-bedroom is $2,962/month, and an individual making less than $104,000 qualifies for public housing assistance!
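A rough sketch of that arithmetic (the salary figure is from above; the donation fraction is just the 20% used in the example):

```python
salary = 65_000            # roughly the rate paid to a marginal AI safety researcher
donation_fraction = 0.20

donation = salary * donation_fraction
# If grantmakers can fund an equally good researcher at the same rate,
# the donation buys roughly this fraction of one extra researcher-year:
extra_researcher_years = donation / salary
print(f"${donation:,.0f} buys ~{extra_researcher_years:.0%} of a marginal researcher-year")
# -> $13,000 buys ~20% of a marginal researcher-year
```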
But shouldn’t I have more dedication to the cause and donate anyway? I would prefer to instead spend more effort on getting better at my job (since I’m nowhere near the extremely high skill cap of AI safety research) and working more hours (possibly in ways that funge with donations, e.g. by helping out grantmakers). I actually do care about saving for retirement, and finding a higher-paying job on a lab safety team just so I can donate is probably counterproductive, because trying to split one’s effort between two theories of change while compromising on both is generally bad (see the multipliers post). If I happened to get an equally impactful job that paid double, I would probably start donating after about a year, or sooner if donations were urgent and I expected high job security.
If you’re not yet ready to commit to the 💸11% Pledge, consider taking the 🥤Trial Pledge, which obligates you to spend 5.5% of your income on increasing your productivity but offsets the cost by replacing all your food with Huel.
Did you assume the axiom of choice? That’s a reasonable modeling decision—our estimate used an uninformative prior over whether it’s true, false, or meaningless.
Introducing The Spending What We Must Pledge
It was mentioned at the Constellation office that maybe animal welfare people who are predisposed to this kind of weird intervention are working on AI safety instead. I think this is >10% correct but a bit cynical; the WAW people are clearly not afraid of ideas like giving rodents contraceptives and vaccines. My guess is animal welfare is poorly understood and there are various practical problems like preventing animals that don’t feel pain from accidentally injuring themselves constantly. Not that this means we shouldn’t be trying.
The majority of online articles about effective altruism have always been negative (it used to be 80%+). In the past, EAs were coached not to talk to journalists, and perhaps the fact that people are finally reversing this is why things are getting better, so I appreciate anyone who talks to them.
Of course there is FTX, but that doesn’t explain everything—many recent articles including this are mostly not about FTX. At the risk of being obvious, for an intelligent journalist (as many are) to write a bad critique despite talking to thoughtful people, it has to be that a negative portrayal of EA serves their agenda far better than a neutral or positive one. Maybe that agenda is advocating for particular causes, a progressive politics that unfortunately aligns with Torres’ personal vendetta, or just a deep belief that charity cannot or should not be quantified or optimized. In these cases maybe there is nothing we can do except promote the ideas of beneficentrism, triage, and scope sensitivity, continue talking to journalists, and fix both the genuine problems and perceived problems created by FTX, until bad critiques are no longer popular enough to succeed.
The Pulse survey has now basically allayed all of my concerns.
Thanks, I’ve started donating $33/month to the FarmKind bonus fund, which is double the calculator estimate for my diet. [1] I will probably donate ~$10k of stocks in 2025 to offset my lifetime diet impact—is there any reason not to do this? I’ve already looked at the non-counterfactual matching argument, which I don’t find convincing.
[1] I basically never eat chicken, substituting it with other meats, so I reduced the poultry category by 2⁄3 and allocated that proportionally between the beef and pork categories.
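For concreteness, a minimal sketch of that adjustment; the per-category dollar figures are hypothetical placeholders, not the calculator's actual outputs:

```python
# Hypothetical per-category offset estimates ($/month) from a diet calculator;
# these numbers are placeholders, not the calculator's real output.
weights = {"poultry": 10.0, "beef": 3.0, "pork": 2.0, "fish": 1.5}

# Remove 2/3 of the poultry estimate (I rarely eat chicken)...
removed = weights["poultry"] * 2 / 3
weights["poultry"] -= removed

# ...and reallocate it between beef and pork in proportion to their existing shares.
beef_pork_total = weights["beef"] + weights["pork"]
weights["beef"] += removed * weights["beef"] / beef_pork_total
weights["pork"] += removed * weights["pork"] / beef_pork_total

print(weights)  # poultry: 3.33, beef: 7.0, pork: 4.67, fish: 1.5
```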
I disagree with a few points, especially paragraph 1. Are you saying that people were worried about abolition slowing down economic growth and lowering standards of living? I haven’t heard this as a significant concern—free labor was perfectly capable of producing cotton at a small premium, and there were significant British boycotts of slave-produced products like cotton and sugar.
As for utilitarian arguments, that’s not the main way I imagine EAs would help. EA pragmatists would prioritize the cause for utilitarian reasons and do whatever is best to achieve their policy goals, much as we are already doing for animal welfare. The success of EAs in animal welfare, or indeed anywhere other than x-risk, is in implementation of things like corporate campaigns rather than mass spreading of arguments. Even in x-risk, an alliance with natsec people has effected concrete policy outcomes like compute export controls.
To paragraph 2, the number of philosophers is pretty low in contemporary EA. We just hear about them more. And while abolition might have been relatively intractable in the US, my guess is the UK could have been sped up.
I basically agree with paragraph 3, though I would hope if it came to it we would find something more economical than directly freeing slaves.
Overall thanks for the thoughtful response! I wouldn’t mind discussing this more.
I was imagining a split similar to the present, in which over half of EAs were American or British.
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that’s
- easy to set up—ideally a single monthly donation equivalent to the animal product consumption of the average American, which I can scale up a bit to make sure I’m net positive
- based on well-founded impact estimates
- affects a wide variety of animals reflecting my actual diet—at a minimum my donation would be split among separate nonprofits improving the welfare of mammals, birds, fish, and invertebrates, and ideally it would closely track the suffering created by each animal product within that category
- includes all animal products, not just meat.
I know I could potentially have higher impact just betting on saving 10 million shrimp or whatever, but I have enough moral uncertainty that I would highly value this kind of offset package. My guess is there are lots of people for whom going vegan is not possible or desirable, who would be in the same boat.
Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838 and 1843 in the British Empire and in 1865 in the US.
Assumptions:
- Not sure how to define “EA community”, but some groups that should definitely be included are the entire staff of OpenPhil and CEA, anyone who makes career choices or donates more than 10% of their income along EA principles, and anyone with >5k EA Forum karma.
- EAs make up the same proportion of the population as they do now, with the same relative levels of wealth, political power, intelligence, and drive.
- EAs forget all our post-1776 historical knowledge, including the historical paths to abolition.
- EA attention is split among other top causes of the day, like infectious disease and crop yields. I can’t think of a reason why antislavery would be totally ignored by EAs though, as it seems huge in scope and highly morally salient to people like Bentham.
I’m also interested in speculating on other causes; I’ve just been thinking about abolition recently due to the 80k podcast with Prof. Christopher Brown.
Note that (according to ChatGPT) Quakers were more dedicated to abolition than EAs are to animal advocacy, had a much larger population, and deserve lots of moral credit for abolition in real life. But my guess would be that EAs could find some angles the Quakers wouldn’t, due to the consequentialist principles of EA. Maybe more evangelism and growth (Quaker population declined in the early 1800s), pragmatism about compensating slaveholders in the US as was done in the UK, or direct political action. Could EAs have gotten the Fugitive Slave Clause out of the Constitution?
It is not clear to me whether EA branding is net positive for the movement overall, or whether it’s been tarnished beyond repair by various scandals. Like, it might be that people should make a small personal sacrifice to be publicly EA, but it might also be that the pragmatic collective action is to completely rebrand and/or hope that EA provides a positive radical flank effect.
The reputation of EA at least in the news and on Twitter is pretty bad; something like 90% of the news articles mentioning EA are negative. I do not think it inherently compromises integrity to not publicly associate with EA even if you agree with most EA beliefs, because people who read opinion pieces will assume you agree with everything FTX did, or are a Luddite, or have some other strawman beliefs. I don’t know whether EAF readers calling themselves EAs would make others’ beliefs about their moral stances more or less accurate.
I don’t think this is currently true, but if the rate of scandals continues, anyone holding on to the EA label will be suffering from the toxoplasma of rage, where the EA meme survives by sounding slightly good to the ingroup but extremely negative to everyone else. Therefore, as someone who is disillusioned with the EA community but not with various of its principles, I need to see some data before owning any sort of EA affiliation, to know I’m not making some anti-useful sacrifice.
Given the Guardian piece, inviting Hanania to Manifest seems like an unforced error on the part of Manifold and possibly Lightcone. This does not change because the article was a hit piece with many inaccuracies. I might have more to say later.
I want to slightly push back against this post in two ways:
1. I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism, and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy; I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than 99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many ordinary (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
2. Longtermists make tradeoffs between helping vast future populations and other common values, tradeoffs that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of “doing a lot more good matters a lot more” is really important, but it is still trading off against other values:
- Helping people closer to you / in your community: many people think this has inherent value.
- Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
- Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they place inherent value on justice. Both longtermists and GiveWell think they’re similarly good modulo secondary consequences and decision theory.
- Discount rate, risk aversion, etc.: there is no reason that a 10% chance of saving 100 lives in 6,000 years is better than a 40% chance of saving 5 lives tomorrow, if you don’t already believe in zero-discount expected value as the metric to optimize. The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring doing the work here, because both can be very caring acts; it is your belief in the thought experiment that connects your caring to the expected value.
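A toy calculation, using the numbers from the example above, shows how much the zero-discount assumption is doing (the 1%/year rate is an arbitrary illustrative choice):

```python
def discounted_expected_lives(prob: float, lives: float, years_from_now: float,
                              annual_discount: float) -> float:
    """Expected lives saved, discounted at a constant annual rate."""
    return prob * lives / (1 + annual_discount) ** years_from_now

# With zero discount, the far-future option wins on expected value.
print(discounted_expected_lives(0.10, 100, 6000, 0.00))  # 10.0
print(discounted_expected_lives(0.40, 5, 0, 0.00))       # 2.0

# With even a 1%/year discount, the near-term option wins by a huge margin.
print(discounted_expected_lives(0.10, 100, 6000, 0.01))  # ~1e-25
print(discounted_expected_lives(0.40, 5, 0, 0.01))       # 2.0
```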
In conclusion, I think that while care and empathy can be important motivators for longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are far more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me or a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically-inclined population, I think a greater focus on care and empathy conflicts with longtermist values more than it contributes to them.
[1] More important for me are: feeling a moral obligation to make others’ lives better rather than worse, wanting to do my best when it matters, and wanting future glory and social status for producing so much utility.
That’s a box-fan CR box; the better design (and the one linked) uses PC fans, which are better optimized for noise. I don’t have much first-hand experience with this, but physics suggests that noise from the fan will be proportional to power usage, which is pressure * airflow if efficiency is constant, and this is roughly consistent with various tests I’ve found online.
Both further upsizing and better sound isolation would be great. What’s the best way to reduce duct noise in practice? Is an 8″ flexible duct quieter than a 6″ rigid duct, or will most of the noise improvement come from oversizing the termination, removing tight bends, or installing some kind of silencer? I might suggest this to a relative.