Software engineer in Boston, parent, musician. Recently switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise.
Full list of EA posts: jefftk.com/news/ea
I’m sure this varies a lot by person: I was earning to give for ~16y and probably more motivated at the end than when I started. The longer I worked in my field the better I understood it, the more I got to be deciding what I (and later my team) worked on, and the more (non-altruistic) impact I could have.
And if you’re in Boston (Somerville area) and are interested in something similar let me know! (1.5y, 6y, 8y).
I think a lot of people moved from “I agree others matter regardless of where, or when, they are but figuring out how to help people in the future isn’t very tractable” to “ok, now I see some ways to do this, and it’s important enough that we really need to try”.
Or maybe this was just my trajectory (2011, 2018, 2022) and I’m projecting a bit...
early EA summits were pretty important
The first EA summit was the one you linked in summer 2013, so it just wasn’t early enough.
(You could argue that it was important for the movement’s growth)
a lot of prominent public-facing EAs
I wonder if this is that we’re looking at the same numbers and seeing them differently, or whether we think the numbers are different?
If I think of the ten most well known EAs (not sharing the list because I don’t want to be ranking people), 5 are parents. Looking through Wikipedia:People_associated_with_effective_altruism I count 32 people, of which I recognize 8 as parents (and others may be). Of the top 50 posters by karma I recognize (a mostly different) [EDIT: nine] as parents, but there are a lot I don’t know the parental status of.
EA Forum, EA Global meetings, 80k Hours podcasts, etc seem to be relatively childless
I’m not sure what you mean by ‘childless’ here? I agree there aren’t many children participating in these spaces, but that’s also normal in the broader world. Do you mean that we don’t talk about children much? That it’s common for people to assume the audience doesn’t have kids?
AI advocates brushing aside all concerns about ‘technological unemployment’
I see people digging into this and comparing it to other risks, not brushing it aside. For example, here’s Holden (a parent!) making the case that by the time you get much technological unemployment you probably have much larger disruptions.
On your last point: since there are now quite a lot of EAs who are parents, disproportionately senior EAs, I would think we are well into diminishing returns in terms of having advocates whom parents with that perspective would take seriously?
one person, with narcissistic personality disorder, overbearingly dominated discourse on LessWrong and appropriated EA identity elsewhere. Only recently, when it has become egregiously clear that this person has negative value, has the community done much to counter him.
!?
I don’t feel entitled to contact funders to ask them about their process (perhaps I should feel free to? I’m not sure)
I do think you should feel free to do this! Open Phil, EA Funds, GiveWell, and ACE all have contact pages, and “I’m thinking about how EA funding could be better and I wanted to understand more how you work,” followed by specific questions that aren’t too much work to answer, is something I’d expect to be well received.
On the other hand, I don’t think you have to do this. Your post, as is, is still helpful in describing how the NIH and NSF make funding decisions. Personally, though, I would want to at least talk to existing funders and learn a bit about their current process (or learn that they don’t want to talk about it) before including proposals about what I think they should do differently ;)
I think without the “now and in the years to come” part people might think that it’s only about improving the lives of others who currently exist?
For about a year now I’ve had a post kicking around in my head that there’s no EA interest in putting numerical bounds on the value of, for example, a strong tenant movement, the end of mass incarceration, a strong labor movement, the end of the drug war, the end of war in general.
If you do get to writing the post you probably want to include that mass incarceration was something Open Phil looked into in detail and spent $130M on before deciding in 2021 that money to GiveWell top charities went farther. I’d be very interested to read the post!
power-flattering answers that want you to make a lot of money and donate it to EA,
Making money to donate hasn’t been a top recommendation within EA for about five years: it still makes sense for some people, but not most.
When you say “donating to EA” that’s ambiguous between “donating it to building the EA movement” and “donating it to charities that EAs think are doing a lot of good”. If you mean the latter I agree with you (ex: see what donation opportunities GWWC marks as “top rated”).
while making 0 effort to convince your friends or society
When people go into this full time we tend to say they work in community building. But that implies more of an “our goal is to get people to become EAs” than is quite right—things like the 80k podcast are often more about spreading ideas than about growing the movement. And a lot of EAs do this individually as well: I’ve written hundreds of posts on EA that are mostly read by my friends, and had a lot of in-person conversations about the ideas.
Let’s say you had a billion dollars to address “pandemic risk” in the world. Could you actually meaningfully reduce pandemic risk? … This is a class issue, like it or not, and dumping a billion dollars into it won’t solve class.
Effectively addressing risk from future pandemics wouldn’t look like “spend a lot more money on the things we are already doing”. Instead it would be things like the projects listed in Concrete Biosecurity Projects (some of which could be big) or Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics. (Disclosure: I work for a project that’s on both those lists).
Does emotion not guide deworming initiatives? Or are EAs just happy to make a number go up? I can’t tell
Personally my donations to deworming haven’t been guided by my emotional reaction to parasites. My emotions are just not a very good guide to what most needs doing! I’m emotionally similarly affected by harm to a thousand people as a million: my emotions just aren’t able to handle scale. Emotions matter for motivation, but they’re not much help (and can hurt) in prioritization.
You also write both:
EA should’ve been happy to take his money but assumed it was going to collapse.
And then later on:
I like the callout of “theories of change” and “Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality”. Poisonous funders are, IMO: …99.9% of crypto
These seem like they’re in conflict?
Copying the text here in case the archive.is copy is lost:
Whether you have $100 or $100 million that you can apply to improving humanity’s condition, there are more effective and less effective ways to use that money. For example, say you want to donate $1,000 to an AIDS foundation, possibly one that attempts to slow the spread of HIV/AIDS in Africa, or one that works on figuring out how to control HIV itself, so that people can live with the virus. Giving to such a foundation is an altruistic act, a good deed, and thus by itself wonderful and commendable. It’s certainly more altruistic to give $1000 to an AIDS foundation than to buy someone you care for $1000 in material possessions; possessions they don’t need in any important sense. However, if the person with $1000 was aware, or became aware, of a way that they could use their money to help achieve a much greater and longer lasting altruistic impact on humanity, even though that way was less emotionally satisfying than helping AIDS victims, then it would be more helpful, more altruistic, and achieve greater lasting good if they used their money in the second way.
Everything that I know about trying to do what’s good, which includes trying to figure out how to do what’s good, tells me that supporting the second way is preferable to supporting the AIDS foundation (that’s not to say that supporting such a foundation isn’t worthwhile or important!), and certainly more preferable than using the money on jewelry. To my knowledge, the best known examples of the second way are SingularityExplicitAndImplicitWork; the most effectively altruistic forms of work that one can presently do. —Anand
It is also worth noting that “IntuitiveSelfishness” is not the same thing as “RationalSelfishness” or “EffectiveSelfishness”. —observer
Last edited October 4, 2003 1:19 am by Observer
“women were about 30% of the forecaster pool” and “80% of research analyst applicants were male” (i.e., ~20% women) aren’t very far apart, especially since the former is from memory and the latter is a small sample.
(I did a bit of looking trying to find the gender breakdown of Good Judgement Project volunteers, without success)
Agreed. For what it’s worth GWWC also uses a higher threshold than the 1x cash this post advocates.
Here’s the letter: image from Expo.
It doesn’t mention “subject to due diligence”. It says they’re waiting for the foundation to complete registration.
The FLI did nothing wrong.
I don’t completely agree: grantmaking organizations shouldn’t issue grant intent letters which imply this level of certainty before completing their evaluation. I expect one outcome here will be that FLI changes how they phrase letters they send at this stage to be clearer about what they actually represent, and this will be a good thing on its own, since it would help grantees better understand where they are in the process and how confident to be about incoming funds.
I’m also not convinced that the stage at which this was caught is the stage at which their process was intended to catch it, but that wouldn’t rise to the level of doing something wrong—it would have been a small internal mistake if it hadn’t been for the misleading letter.
An advertiser can choose to pay per conversion instead of per click, but whether to send the conversion ping is always up to the advertiser. They don’t need to use something like Smile to get an excuse for not sending the ping: they can just not send the ping.
(Why send pings at all? The reason to pay per conversion is to let Google optimize for sending you the cheapest traffic that converts. Your bids still end up in auctions against others, though, and if Google’s estimate of how likely this traffic is to convert on your site is lower, the bids they’ll put in on your behalf are lower, and you’ll lose lots of auctions you’d have preferred to win.)
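The auction dynamic above can be sketched with a toy expected-value model. This is illustrative only, not Google’s actual algorithm; the function name, cost figures, and conversion-rate estimates are all hypothetical. The point it shows: if withheld conversion pings lower the platform’s conversion estimate for your site, the bids placed on your behalf shrink proportionally and you lose auctions you’d have preferred to win.

```python
def effective_bid(cost_per_conversion: float, estimated_p_conversion: float) -> float:
    """Bid the platform might place on the advertiser's behalf: the
    per-conversion payment scaled by the estimated conversion probability."""
    return cost_per_conversion * estimated_p_conversion

# An advertiser paying $50 per conversion:
honest_bid = effective_bid(50.0, 0.04)    # platform estimates a 4% conversion rate
starved_bid = effective_bid(50.0, 0.01)   # estimate falls to 1% after withheld pings

# The starved bid is a quarter of the honest one, so this advertiser now
# loses any auction where a competitor bids between the two values.
print(honest_bid, starved_bid)
```

Under this simple model, under-reporting conversions doesn’t save money per conversion; it just makes the optimizer value your traffic less.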
I wonder whether Larissa MacFarquhar would be interested? She wrote about the early EA community in her 2015 book Strangers Drowning (chapters “At Once Rational and Ardent” and “From the Point of View of the Universe”) and also wrote a 2011 profile of Derek Parfit.