Strategy Fellow — cFactual
Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and served as the COO of Rethink Priorities from 2020 to 2024.
I didn’t know the answer, so I asked one of the authors. The short answer is that these ion channels are significantly harder to detect using visualization techniques: sequencing the genome is just a lot easier, and requires far fewer insects.
The longer answer is that the way we’d “see” these ion channels in insects is basically by creating a fluorescent dye with a molecule that binds only to the specific ion channel you’re trying to measure (e.g. cold detection might be different from mechanical injury). Then, you’d take many cross sections of the insect’s body and hope that one of them intersected with the dyed molecule in the right position in the nociceptor. You’d have to come up with individual molecules that bind to each kind of nociceptor you were trying to detect, and not to other ones. Also, there is a strong chance this doesn’t work, so you’d have to do it on many different insects and hope that one produces a good result.
It sounds like for smaller insects, there are some other techniques that allow you to more directly look at these proteins, but mantid bodies are too large for them to work. And, even if looking at them directly, you’re looking at them in a densely clustered surface with moving parts (animal tissue), and hundreds or thousands of other proteins, etc, so it wouldn’t necessarily be easy to differentiate them.
But, sequencing and assembling a genome of an insect is fairly easy—you theoretically only need one individual (though in practice it might be more), and the rest of the process is fairly straightforward and reliable.
The paper wasn’t trying to assess insect sentience, but was evaluating welfare considerations for crickets due to the potential risk of cricket sentience from a precautionary principle perspective. So it doesn’t go into detail on cricket sentience, and primarily refers to this paper as a primer on why we might take insect pain as a potential reality.
For a more thorough background on insect sentience, I recommend Rethink Priorities’ Invertebrate Sentience series and its Moral Weight Project (though neither looked at crickets specifically).
Removed
Edited to remove my comment since it is off topic. I’m happy to talk about this though if people want to in other contexts! I definitely think this is a pretty important question, and looking into how fiscal sponsorship arrangements are working in reality is important, as I imagine there is high variance in how effective oversight mechanisms are (though I think RP has done this well).
Hi,
(writing as the COO of Rethink Priorities).
Nonlinear is not, and has never been fiscally sponsored by Rethink Priorities. RP has never had a legal or financial connection to Nonlinear.
In the grant round you cite, it looks like the receiving charity is listed as Rethink Charity. RP was fiscally sponsored by RC until 2020, but RC is a separate legal entity with a separate board, and RP has had no legal or financial connection to RC since then.
Not Peter, but looking at the last ~20 roles I’ve hired for, I’d guess that maybe 15 or so had an alternative candidate who seemed worth hiring (though who perhaps did worse in some scoring system). These were all operations roles within an EA organization. For the 2 more senior roles I hired for during that time, there appeared to be suitable alternatives; for other, less senior roles there weren’t (though I think the opposite is generally more common).
I do think one consideration here is that we are talking about who looked best during hiring. That’s different from who would be the better employee. We’re assuming our hiring process does a good job of assessing people’s fit, job performance, etc., but we know that the best predictors during hiring are only moderately correlated with later job performance. So it’s plausible that we often think there is a big gap between two candidates when they’d actually perform equally well (or that someone who seems like the best candidate isn’t).

Hiring is just a highly uncertain business, and predicting long-term job performance from like, 10 hours of sample work and interviews is pretty hard. I’m somewhat skeptical that looking at hiring data is even the right approach, because you’d also want to control for things like whether those employees meet performance expectations in the future, and you never actually get counterfactual data on how good the person you didn’t hire was. I’m certain that many EA organizations have hired someone who appeared to be better than the alternative by a wide margin, and easily cleared a hiring bar, but who later turned out to have major performance issues, even if the organization was doing a really good job evaluating people.
The main out of context bit is that Elizabeth’s comment seemed to interpret Marcus as only referring to salary, when the full comment makes it very clear that it wasn’t just about that, which seemed like a strong misreading to me, even if the 10x factor was incorrect.
I suspect the actual “theoretically maximally frugal core EA organization with the same number of staff” is something like 2x-3x cheaper than current costs, if salaries moved to the $50k-$70k range.
That doesn’t seem quite right to me. Longview and EG don’t strike me as earning-to-give outreach, though they definitely bring funds into the community, and Founders Pledge is clearly only targeting very large donors. To be more specific: nothing like massive, multi-million dollar E2G outreach has been tried for mid-sized / everyday earning to give, though you’re definitely right that effort has gone into bringing in large donors.
I think that one thing I reflect on is how much money has been spent on EA community building over the last 5 years or so. I’m guessing it is several 10s of millions of dollars. My impression (which might not be totally right) is that little of that went to promoting earning to give. So it seems possible that in a different world, where a much larger fraction was used on E2G outreach, we could have seen a substantial increase. The GWWC numbers are hard to interpret, because I don’t think anything like massive, multi-million dollar E2G outreach has been tried, at least as far as I know.
I think, broadly, it would be healthy for any organization of RP’s size not to have a single funder giving over 40% of its funding, and ideally less. I assume the realistic version of this is something like a $10M org having $4M from one funder, maybe a couple of $500k-$1M donors, a few more $20k-$500k donors, and a pretty wide base of <$20k donors. So in that world, I’m guessing an organization would want to be generating something like 15%-20% of its revenue from these mid-size donors? So definitely still a huge lift.
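To make the mix above concrete, here is a rough back-of-the-envelope sketch in Python. The specific donor counts are illustrative guesses consistent with the ranges in the comment, not real figures:

```python
budget = 10_000_000

largest_funder = 4_000_000   # 40% from a single funder
large_donors = 2 * 750_000   # a couple of $500k-$1M donors (illustrative count)
mid_donors = 15 * 100_000    # some $20k-$500k donors (illustrative count)
small_base = budget - largest_funder - large_donors - mid_donors

mid_share = mid_donors / budget  # share of revenue from mid-size donors
print(f"mid-size donor share: {mid_share:.0%}, small-donor base: ${small_base:,}")
```

With these made-up counts, mid-size donors cover 15% of revenue, landing at the bottom of the 15%-20% range, and the remaining $3M has to come from the wide base of small donors.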
But, I think one thing worth considering is that while ideally there might be tens of thousands of $20k donors in EA, doing the outreach to get tens of thousands of $20k donors, if successful, will probably also bring in hundreds of <$100k donors, and maybe some handful of <$1M donors. This might not meet the ideal situation I laid out above, but on the margin seems very good.
Yeah, I think there is an open question of whether or not this would cause a decline in the impact of what’s funded, and this is one of the stronger reasons to think it might.
I think one potential middle-ground solution to this is having like, 5x as many EA Fund type vehicles, with more grant makers representing more perspectives / approaches, etc., and those funds funded by a more diverse donor base, so that you still have high quality vetting of opportunities, but also grantmaking bodies who are responsive to the community, and some level of donor diversity possible for organizations.
Minor downvote, because this comment seems to take Marcus’s comment out of context / misread it:
Catered lunches, generous expense policies, large benefits packages and ample + flexible + paid time off become a pot luck once a week, basic healthcare coverage and 2 weeks of vacation. All of a sudden, running a 10 person organization takes $1M instead of $10M and it becomes much more feasible to get 30 x $10-30k with a couple of 50-100k donations to cover the cost of the organization.
I don’t think the numbers are likely exactly right, but I think the broad point is correct. I think that likely an organization starting with say 70% market rate salaries in the longtermist space could, if it pursued fairly aggressive cost savings, reduce their budget by much more than 30%.
As an example, I was once quoted office space in the community at around $14k USD/month for a four-person office, including lunch every day. Scaled to a 10-person organization, that is around $420k/year for office and food. Switching to a $300/person/mo office (fairly easy to find, including in large cities, though it won’t be Class A office space) and dropping the lunch perk would save $384k/year, which is like, 4 additional staff at $70k/year, if that’s our benchmark.
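A quick sanity check of the arithmetic above (all figures are the ones quoted in the comment):

```python
# Quoted office: ~$14k/month for a four-person office, lunch included
per_person_monthly = 14_000 / 4                 # $3,500 per person per month
current_annual = per_person_monthly * 10 * 12   # ~$420k/year for 10 people

# Cheaper option: $300/person/month, no lunch perk
cheap_annual = 300 * 10 * 12                    # $36k/year

savings = current_annual - cheap_annual         # $384k/year saved
print(f"${savings:,.0f}/year saved")
```

The raw savings cover roughly four to five salaries at the $70k benchmark, depending on how much overhead you count on top of salary.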
FWIW, I feel uncertain about frugality of this sort being desirable — but I definitely believe there are major cost savings on the table.
FWIW, my experience (hiring mostly operations roles) is often the opposite—I find for non-senior roles that I usually reach the end of a hiring process, and am making a pretty arbitrary choice between multiple candidates who all seem quite good on the (relatively weak) evidence from the hiring round. But, I also think RP filters a lot less heavily on culture fit / value alignment for ops roles than CEA does, which might be the relevant factor making this difference.
Yeah definitely—that’s a more elegant way.
FWIW, I mildly disagree with this, because a major part of the appeal of donation elections stuff (if done well) is that the results more closely model a community consensus than other giving mechanisms, and being able to donate votes would distort that in some sense. I think I don’t see the appeal of being able to donate votes in this context over just telling Jenifer + Alan that they can control where one donates to some extent, or donating to a fund. Or, if not donating to the election fund, just asking Jenifer + Alan for their opinion and changing your own mind accordingly.
I think since there can be multiple winners, letting people vote on their ideal distribution and then averaging those distributions would be better than direct voting, since it most directly represents “how voters think the funds should be split on average” or similar, which seems like what you want to capture? And it’s still very understandable, I hope.
E.g. if I think 75% of the pool should go to LTFF and 20% to GiveWell, and 5% to the EA AWF, 0% to all the rest, I vote 75%/20%/5%/0%/0%/0% etc. Then, you take the average of those distributions across all voters. I guess it gets tricky if you are only paying out to the top three, but maybe you can just scale their percentage splits? IDK.
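A minimal sketch of the averaging scheme described above, including the “only pay out to the top few, then rescale” variant. The fund names come from the example; the ballots and function names are made up for illustration:

```python
def average_distribution(ballots):
    """Average each candidate's share across all ballots.
    Each ballot maps candidate -> fraction of the pool, summing to 1."""
    candidates = {c for ballot in ballots for c in ballot}
    n = len(ballots)
    return {c: sum(b.get(c, 0.0) for b in ballots) / n for c in candidates}

def top_k_rescaled(avg, k=3):
    """Keep only the k largest shares and rescale them to sum to 1,
    for the 'only pay out to the top three' variant."""
    top = sorted(avg.items(), key=lambda item: item[1], reverse=True)[:k]
    total = sum(share for _, share in top)
    return {c: share / total for c, share in top}

# Two hypothetical ballots (votes made up)
ballots = [
    {"LTFF": 0.75, "GiveWell": 0.20, "EA AWF": 0.05},
    {"GiveWell": 0.50, "EA AWF": 0.50},
]
avg = average_distribution(ballots)
# avg == {"LTFF": 0.375, "GiveWell": 0.35, "EA AWF": 0.275}
payout = top_k_rescaled(avg, k=2)  # only the top two get paid, rescaled
```

Missing candidates on a ballot are treated as 0%, which matches the “0% to all the rest” reading.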
If not that or if it is annoying to implement, IMO approval voting or quadratic are probably best, but am not really sure. Ranked choice feels like it is so explicitly designed for single winner elections that it is harder to apply here.
Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I’d guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn’t the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
My impression, from what I think is a pretty large sample of EA funders and grants, is also that EA Funds has the fastest turnaround time on average compared to the list you mention (with exceptions in some cases, in both directions, for EA Funds and other funders).
Thanks for adding this feature!
I am also interested in how this is structured from a licensing perspective—this is relevant for content posted with permission, but not owned by the original poster (which is relevant in some cases I’m looking into), and also for people’s old content generally. Would the Forum team be able to clarify who owns the audio versions, and what the licensing on pieces posted prior to the current terms of use was? My impression was that they were owned by the authors, but I can’t find any records of the terms of use prior to this version: https://forum.effectivealtruism.org/termsOfUse. But, I also have old content I posted that now has a recording, and didn’t get asked about this use of it, so am assuming I actually had granted permission to EV to use it at some point? (and I am happy for it to be used this way). If you happen to have a copy of the terms of use from prior periods, I’d be interested in seeing them.
Also, this is minor and somewhat unrelated, but I noticed it while looking into this — in Ben’s announcement post, it says that the CC-BY license kicks in on December 01, 2022, but the current terms of use suggests that it is in place as of October 31, 2022. I’d also be interested in knowing what license governs pieces posted between October 31 and December 1 right now.
(BTW, I’m excited about having the audio recordings, but just trying to work out how the licensing on this works).
I’m curious why people downvoted this comment! (when I posted this, it was at 0 with four votes, and I strongly upvoted it to 7). I think it is an important question and is currently unanswered. For reference on its importance — it’s directly relevant to me in a context related to doing my work for an EA organization, and in particular trying to catalogue historic IP.
I’m not sure about the academic literature, but will add anecdotally that my impression is that the PTC hypothesis is extremely widespread within the advocacy space—people talk about it a ton.
I’ll also add that the “necessary but not sufficient” line feels hard to interpret without more clarification (and a bit meaningless on its own because of this). It would be helpful if people pushing this position could clarify how much of the work toward sufficiency PTC is doing. E.g. if one thinks that reaching PTC parity does like 90% of the work to cause widespread adoption, they’re basically agreeing with the positive-PTC hypothesis. But if PTC parity is required yet is only like 5% of what’s needed, that’s a very different claim.
Finally, the podcast referenced is very much positive-PTC. It feels really misleading to me to claim that podcast is negative-PTC; to the extent it isn’t purely positive-PTC, it’s strongly in the “PTC is doing 90%+ of the work” direction. E.g., to quote it directly:
...I was on the panel at a Future Food-Tech conference in San Francisco maybe six or eight months ago with Mary Kay James who runs Tyson New Ventures with Tyson Venture Capital Fund and she said, “We are absolutely looking at clean meat,” which she called it clean meat, “as one of the things that we want to invest in.” And she said, “For us, it’s all about choice. We will provide the meat that consumers want.” Well, price, taste, convenience. When clean meat is price and taste competitive, Tyson, Perdue, Hormel, everybody just moves in that direction.
...
when we’re thinking about what it is that we want to eat, every single one of us thinks about the price of the food, we think about how it’s going to taste. We may not be thinking about convenience but convenience is going to be a central factor. If it’s not there, we’re not going to consume it....
The main thing that The Good Food Institute works on, when we say alternatives to the products of conventional animal agriculture, basically what we’re looking for is products that will compete on the basis of price, taste, and convenience as I mentioned.
This isn’t FDIC insured, but the money market fund linked is just in US treasuries, so the risk is presumably negligible.
There are some multi-institution accounts called Insured Cash Sweep you can find to get higher FDIC insurance limits, though I think they generally have lower interest rates. This one from Mercury is an example.