GWWC board member, software engineer in Boston, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
Jeff Kaufman
After reading through FarmKind's materials, I do think my criticism is accurate. FarmKind has the same issues I previously documented with GivingMultiplier. The "bonus" is presented to users as if it (a) will go in part to their favorite charity and (b) is money that would not otherwise be going to help animals, but neither of these are true. The system is fully explained on other pages, as you say, but the typical person going through the site is donating under a false impression of their effect.
I don't think the problem is that it's difficult to explain things transparently without putting people off with verbosity, but instead that the core thing that makes the site work (convincing people to give more by giving the impression of greater counterfactual impact) is misleading.
I appreciate that you're trying to help people give more effectively and improve conditions for animals, but this kind of fundraising is corrosive and does not belong in the EA movement.
- Aug 13, 2024, 3:02 PM; 121 points; comment on FarmKind's Illusory Offer
I agree that this is misleading. If you are telling these new donors that this is a counterfactual match, that is another way of saying that if these new donors do not put up money then the funds in the matching pool will not go to this good cause. Which is not the case: if there were a real danger that these matching funds would go unallocated, then the announcement wouldn't be asking us not to donate!
Would you say that since switching careers, your engagement measured by these kinds of metrics (books read, events attended, number of EA friends, etc.) has gone up, gone down, or stayed the same?
I think it's up, but a lot of that is pretty confounded by other things going on in the community. For example, my five most-upvoted EA Forum posts are since switching careers, but several are about controversial community issues, and a lot of the recency effect goes away when looking at inflation-adjusted voting. I did attend EAG in 2023 for the first time since 2016, though, which was driven by wanting to talk to people about biosecurity.
The second cohort ("logical ceiling") are people who basically already run their entire lives around EA principles (and I met several at EAGx). They've taken the 10% pledge, they work at EA orgs, they are vegan, they attend every EA event they reasonably can, they volunteer, they are active online, etc. It's hard to imagine how people this committed could meaningfully increase their engagement with EA.
I think "engagement" can be a misleading way to think about this: you can be fully engaged, but still increase your impact by changing how you spend your efforts.
Thinking back over my personal experience, three years ago I think I would probably be counted in this "fully engaged" cohort: I was donating 50%, writing publicly about EA, co-hosting our local EA group, had volunteered for EA organizations and at EA conferences, and was pretty active on the EA Forum. But since then I've switched careers from earning to give to direct work in biosecurity and am now leading a team at the NAO. I think my impact is significantly higher now (ex: I would likely reject an offer to resume earning to give at 5x my previous donation level), but the change here isn't that I'm putting more of my time into EA-motivated work, but instead that (prompted by discussion with other EAs, and downstream from EA cause prioritization work) my EA-motivated work time is going into doing different things.
I don't think "responsible" is the right word, but the consequences to the effective altruism project of not catching on earlier were enormous, far larger than to other economic actors exposed to FTX. And I do think we ought to have realized how unusual our situation was with respect to FTX.
I think it depends what sort of risks we are talking about. The more likely Dustin is to turn out to be perpetrating a fraud (which I think is very unlikely!) the more the marginal person should be earning to give. And the more projects should be taking approaches that conserve runway at the cost of making slower progress toward their goals.
Are the high numbers of deaths in the 1500s from Old World diseases spreading in the New World? If so, that seems to overestimate natural risk: the world's current population isn't separated from a larger population that has lots of highly human-adapted diseases.
In the other direction, this kind of analysis doesn't capture what I personally see as a larger worry: human-created pandemics. I know you're extrapolating from the past, and it's only very recently that these would even have been possible, but this seems at least worth noting.
other cities across the U.S. (like Boston) … regularly build subway lines for less than $360 million per kilometer
Huh? Boston hasn't built a subway line in decades, let alone regularly builds them.
It did recently finish a light rail extension in an existing right of way, expanding a trench with retaining walls, but (a) that's naturally much cheaper than digging a subway and (b) it took 12y longer than planned.
The NAO ran a pilot where we worked with the CDC and Ginkgo to collect and sequence pooled airplane toilet waste. We haven't sequenced these samples as deeply as we would like to yet, but initial results look very promising.
Militaries are generally interested in this kind of thing, but primarily as biodefense: protecting the population and service members.
As I tried to communicate in my previous comment, I'm not convinced there is anyone who "will have their plans changed for the better by seeing OpenAI safety positions on 80k's board", and am not arguing for including them on the board.
EDIT: after a bit of offline messaging I realize I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!
I think the board should generally list jobs that, under some combinations of values and world models that the job board runners think are plausible, are plausibly one of the highest impact opportunities for the right person. I think in cases like working in OpenAI's safety roles, where anyone who is the "right person" almost certainly already knows about the role, there's not much value in listing it but also not much harm.
I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I'd change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don't pay much attention to disclaimers and interpret listing a role as saying "someone who takes this role will have a large positive impact in expectation".
If there did turn out to be a lot of people in that category, I'd recommend splitting the board into a visible-by-default section with jobs where, conditional on getting the role, you'll have high positive impact in expectation (I'd biasedly put the NAO's current openings in this category) and a you-need-to-click-show-more section with jobs where you need to think carefully about whether the combination of you and the role is a good one.
Possibly! That would certainly be a convenient finding (from my perspective) if it did end up working out that way.
[I] am slightly confused what this post is trying to get out. I think your question is: will NYC hit 1% cumulative incidence after global 1% cumulative incidence?
That's one of the main questions, yes.
The core idea is that our efficacy simulations are in terms of cumulative incidence in a monitored population, but what people generally care about is cumulative incidence in the global (or a specific country's) population.
online tool
Thanks! The tool is neat, and it's close to the approach I'd want to see.
I think this is almost never … would surprise me
I don't see how you can say both that it will "almost never" be the case that NYC will "hit 1% cumulative incidence after global 1% cumulative incidence" but also that it would surprise you if you can get to where your monitored cities lead global prevalence?
I haven't done or seen any modeling on this, but intuitively I would expect the variance due to superspreading to have most of its impact in the very early days, when single superspreading events can meaningfully accelerate the progress of the pandemic in a specific location, and to be minimal by the time you get to ~1% cumulative incidence?
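To illustrate that intuition (this is just a toy branching-process sketch I'm making up for this comment, with assumed parameters, not anything the NAO has modeled), you could compare how much run-to-run variation there is in the time to reach a small case count versus the additional time to go from there to ~1% of a city:

```python
import numpy as np

def generations_to(targets, r=1.5, k=0.1, rng=None):
    """Toy branching process (illustrative assumptions, not a fitted model):
    each case draws an individual reproduction number from a Gamma with mean
    R and dispersion k, then infects a Poisson number of people, so most
    transmission comes from rare superspreading events. Returns the generation
    at which cumulative cases first reach each target (targets ascending), or
    None if the outbreak goes extinct first."""
    rng = rng or np.random.default_rng()
    cases, cumulative, generation = 1, 1, 0
    hits = []
    for target in targets:
        while cumulative < target:
            if cases == 0:
                return None  # stochastic extinction before reaching the target
            individual_r = rng.gamma(shape=k, scale=r / k, size=cases)
            cases = rng.poisson(individual_r).sum()
            cumulative += cases
            generation += 1
        hits.append(generation)
    return hits

# Variation in (a) time to the first 1,000 cases vs (b) the additional time
# from 1,000 cases to ~1% of an 8M-person city (80,000 cases).
runs = [g for g in (generations_to([1_000, 80_000]) for _ in range(1_000)) if g]
early = np.array([a for a, b in runs])
late = np.array([b - a for a, b in runs])
print(f"generations to 1,000 cases: mean {early.mean():.1f}, std {early.std():.1f}")
print(f"then on to 80,000 cases:    mean {late.mean():.1f}, std {late.std():.1f}")
```

In this toy version nearly all of the run-to-run spread in timing comes from the early phase, which matches the intuition above, but I'd want real modeling before leaning on it.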
I think this is probably far along you're fine
I'm not sure what you mean by this?
(Yes, 1% cumulative incidence is high; I wish the NAO were funded to the point that we could be talking about whether 0.01% or 0.001% was achievable.)
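For a rough sense of what those lower thresholds would buy, here's a back-of-the-envelope sketch only, assuming simple exponential growth with a fixed doubling time rather than anything from our actual simulations:

```python
import math

def extra_warning_days(threshold, reference=0.01, doubling_time_days=3.0):
    """Under assumed exponential growth with a fixed doubling time, how many
    days earlier do you cross `threshold` cumulative incidence than you cross
    `reference` (default 1%)?"""
    doublings_earlier = math.log2(reference / threshold)
    return doublings_earlier * doubling_time_days

for t in (0.001, 0.0001, 0.00001):  # 0.1%, 0.01%, 0.001%
    print(f"detect at {t:.3%}: {extra_warning_days(t):.0f} days before 1%")
```

Under these assumptions that's roughly ten extra days of warning for every factor of ten you lower the detection threshold, which is why the difference matters so much.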
I don't object to dropping OpenAI safety positions from the 80k job board on the grounds that the people who would be highly impactful in those roles don't need the job board to learn about them, especially when combined with the other factors we've been discussing.
In this subthread I'm pushing back on your broader "I think a job board shouldn't host companies that have taken already-earned compensation hostage".
the bigger issue is that OpenAI can't be trusted to hold to any deal
I agree that's a big issue and it's definitely a mark against it, but I don't think that should firmly rule out working there or listing it as a place EAs might consider working.
Thanks!
Expanded (b) into a full post: Sample Prevalence vs Global Prevalence
Thanks for the response!
I agree that the site gives the impression that part of the bonus goes to the favorite charity, but that isn't usefully true. I explain this in detail in my GivingMultiplier post, and summarize it as "A simpler way to describe this is that it matches whatever portion you choose to give to the effective charity at 50%, and doesn't match your donation to the other charity at all". [EDIT: the FarmKind function is a bit more complex here, sometimes above 50% and sometimes below, but the matched portion to the non-effective charity remains 0%] To take the screenshot case, I think it would be much clearer to describe it to donors as the donor putting in $150, of which $90 goes to the favorite charity and $60 to the effective charity, and then FarmKind contributing $30 to the effective charity.
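To make that framing concrete, here's a minimal sketch (my own illustration, using the simplified flat-50%-match description above rather than FarmKind's exact formula):

```python
def describe_donation(to_favorite, to_effective, match_rate=0.5):
    """Restate a split donation in the framing I'd prefer: the donor's money
    goes where they directed it, and the bonus is a match (assumed here to be
    a flat 50%; FarmKind's actual formula varies) on the portion going to the
    effective charity. The favorite-charity portion is not matched at all."""
    bonus = match_rate * to_effective
    return {
        "donor to favorite charity": to_favorite,
        "donor to effective charity": to_effective,
        "bonus pool to effective charity": bonus,
        "effective charity receives in total": to_effective + bonus,
    }

# The screenshot case: the donor puts in $150, split $90 / $60.
print(describe_donation(90, 60))
# {'donor to favorite charity': 90, 'donor to effective charity': 60,
#  'bonus pool to effective charity': 30.0, 'effective charity receives in total': 90.0}
```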
First, the post we're discussing this on is a counterexample. FarmKind is fundraising from EAs, and writes "If you're reading this post, our platform isn't aimed at you. … you may be interested in giving to them via our bonus fund". That is, FarmKind does not want EAs (or other people who already plan to give to help animals effectively) to give through the matching platform, but does want them to give to the bonus pool.
Second, I was trying to make a more limited claim, about what happens to the money that's already been put in the bonus pool. I think we can agree that every dollar donated to the bonus pool is one that will be paid out to effective animal charities regardless of how other donors behave?