Director of Research at PAISRI
I don’t know if someone has posted this before, but it would be good to compare this to the idea of running for other political offices. For example, maybe a lot could be achieved as a senator or representative rather than as president, and those seem like easier jobs to get.
Since I originally wrote this post I’ve only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.
In this post I use the idea of “legibility” to talk about impact that can be easily measured. I’m now less sure that was the right move, since legibility is a bit of jargon that, while it’s taken off in some circles, hasn’t caught on more broadly. Although the post addresses this, a better version might avoid talking about legibility altogether and instead use more familiar language about measurement that people already know. There’s nothing in here that I think hinges on the idea of legibility, though it’s certainly helpful for framing the point, so if there were interest I think I’d be willing to revisit this post and see if I can make a shorter version that doesn’t require teaching extra jargon on top of all the other necessary jargon.
I think I’d also highlight the Goodharting part more, since that’s really what the problem is: more time on Goodharting and why this is a consequence of it, less time circling the topic.
I don’t think I ever heard anyone use the phrase “hard-core EAs” or if I did it just passed by without note, but now that I bother to think about it I actually think it’s really apt!
The etymology of hardcore has been a bit lost over the years. Here’s what etymonline says:
also hard-core; 1936 (n.); 1951 (adj.); from hard (adj.) + core (n.). Original use seems to be among economists and sociologists, in reference to unemployables. Extension to pornography is attested by 1966. Also the name of a surfacing material.
Merriam-Webster seems to think it’s a bit older, dating back at least to 1841:
So the earliest sense in which hard core was used was in reference to a sort of foundation on which something substantial was built. In the early 20th century the word broadened to refer to serving as the foundation, or central element, of things aside from man-made structures, such as groups or organizations.
And in its perhaps better-known application to pornography, it carried the idea of a hard core that was irredeemable by virtue of how committed it was to immorality (or at least to what counted as immorality at the time).
So actually I really like the idea of hardcore EAs. They’re the bedrock, the foundation, the EAs who are still going to be there if EA becomes uncool or gets canceled or whatever. It makes me think of people like Peter Singer who would just keep on being an EA even if no one had come up with the label or built a movement. It has the metaphor of being so EA that even if someone brought in a jackhammer you wouldn’t crack.
I don’t know if I am or want to be a hardcore EA, but I’m sure as hell glad they exist!
I can only speak for myself, but assuming my experience generalizes, this means lots of people will miss out on what you have to say. Since I don’t have a prior belief that posts by you are worth reading, and this post has a vague title that could be about any number of things, it’s hard to consider it worth the time to read. So purely from a pragmatic point of view, I estimate a summary would help get more people to read.
The irony is that EdoArad and I have probably now spent enough time engaging with comments on this post that we could have read it, but I know I still haven’t. The comments feel valuable (chatting with a fellow forum member about possible ways to make a post better) while reading the post itself doesn’t (since there’s not even much of a teaser to pull me in, I’m just not developing any motivation to read).
Friendly suggestion: a summary might help. I briefly skimmed this but was really hoping for one. Summaries often help readers like me decide whether or not to invest time in a post.
I think what’s great about Free Guy is that the AI part is not the center of the plot most of the time. Rather it’s a story about some characters who find themselves in some unusual circumstances. That might not seem much different, but compare typical AI films that spend a lot of time being about AI rather than the characters. By being character-focused, I think it delivers on ideas better than most idea movies that get so caught up in the ideas they forget to tell a good story.
As you’ve noticed, the root of good and bad lies with individual preferences and values. What is good is “merely” that which satisfies our desires at the lowest levels (perhaps what is good is what is least surprising to us, if you buy the predictive processing model of the brain). I put “merely” in scare quotes, though, because it’s not so mere as it seems. This is in fact the root of all that matters to us in the world.
It’s normal, when first noticing that good and bad rest on something as subjective as what individuals like, to feel a sense of unease, because you’ve likely been carrying around a strong expectation that meaning is external and objective in the universe. Realizing that humans create meaning for themselves through their existence, rather than finding it out in the universe, can feel like the ground has fallen away.
But it always was this way, and what was already true cannot destroy us simply because we have realized it.
Now, we can say a bit more about good and bad. Because all humans are quite similar, we care about substantially similar things and a supermajority of us share common ideas about what is good and bad, even if we tend to focus a lot on the ways in which we differ in our values among each other. If we expand our moral circle to include other animals, we find that there’s still a lot of commonality. Thus, people often choose to equate good with some fundamental thing common to all living beings, like preference satisfaction or not suffering. This is basically how various flavors of utilitarianism are grounded.
As to why humans are important: well, humans are important to us because we’re humans, so it’s reasonable that we value them. The only confusion comes if we previously thought our value was given to us by the universe rather than created by our caring about ourselves; we’re well entitled to care about things that benefit humanity. Although, while we’re here, maybe we could expand the circle a bit to include all living things? The choice is really up to us!
There’s lots more to explore here, but hopefully that gives you a start!
I like this idea a lot. I spent O($1k) on gift cards this year from Tisbest instead of giving more traditional gifts. This is nice in multiple ways: it’s way more than I would have spent on regular gifts, and each person gets the chance to give to something they care about. And selfishly I get a tax deduction (although I would have gotten it anyway, since most of this money would have been donated regardless) and get to push my agenda on family that giving money is good (this doesn’t seem like the worst thing in the world, and I’ll take it for what it is: something I hope will cause them to make marginally more altruistic choices).
There’s not an easy way for me to make this about EA, though, other than if they ask for advice or something like that, since it ruins the gift a bit if I push them in some direction. But if the gift card mechanism could somehow nudge them towards effective charities, that would be awesome.
Note: Sorry for not creating this as an event post, but I can’t do that yet, and this is time sensitive so I created it as a regular post.
Fund weird things: A decent litmus test is “would it be really embarrassing for my parents, friends or employer to find out about this?” and if the answer is yes, more strongly consider making the grant.
Things don’t even have to be that weird to be things that let you have outsized impact with small funding.
A couple examples come to mind of things I’ve either helped fund or encouraged others to fund that for one reason or another got passed over for grants. Typically the reason wasn’t that the idea was in principle bad, but that there were trust issues with the principals: maybe the granters had a bad interaction with the principals, maybe they just didn’t know them that well or know anyone who did, or maybe they just didn’t pass a smell test for one reason or another. But, if I know and trust the principals and think the idea is good, then I can fund it when no one else would.
Basically this is a way of exploiting information asymmetries to make donations. It doesn’t scale indefinitely, but if you’re a small-time funder with plenty of social connections in the community, there’s probably work you could fund that would get passed over for being weird in the sense I describe above.
This is basically my own experience. I worked a bunch on AI independent research, but now I don’t really because it just doesn’t make sense: I have way more opportunity to make money to do more good than any direct work I could do, in my estimation, so I just double down on that.
(For context I’m on the higher end of technical talent now: 12 years of work experience, L7-equivalent, in a group tech lead role, and if I can crank up to L8 the potential gains are quite large in terms of comp that I can then donate.)
I also really like the platform this uses, Tisbest. This year I decided to do all my Xmas giving by giving Tisbest cards to folks so they can make donations to places of their choosing. I think it’s a nice way to spread the spirit of giving with folks, and it’s a great chance to talk about EA if anyone asks “what should I donate it to?”.
I don’t want this to seem like it’s directed at this post in particular; it’s more about a general class of things I see on the EA Forum, and this post just happened to finally trigger the thought for me.
Calls to action like this for things that aren’t broadly accepted as core EA areas would benefit substantially from including links reminding us why we should care about this.
Like, if someone posts about x-risk or global poverty or animal welfare or something like that, I’m like, sure, seems on topic and relevant to EAs because there’s broad agreement that this thing is solidly within EA and, even if individual EAs choose not to work on it, there’s not a major dispute this is potentially effective, only disagreements about how much it matters relative to other things.
But when I see things about mental health or systemic change or, in this case, election reform, I’m left wondering when this became an EA concern. In this case, I have no idea if approval voting is actually better in terms of outcomes; I just know it’s something people like because they feel it better reflects their preferences.
Including a link at least to why election reform might be an effective cause area would be helpful for things like this that are calls to action. I dare say it should even really be a norm on the forum: if you’re making a call to action, you need to at least include links to where you’re making the case that it’s an effective cause area.
Again, this is not especially directed at the content of this post, but it did make me realize it would be nice if we could address this more broadly.
My own experience is that there’s a sweet spot. Big tech companies only really offer high compensation for the most experienced and capable employees. If there are 10 levels and you’re not at least at level 8, a big company is probably not, in my own informal analysis, likely to offer you the best compensation in expectation. Some of this is simply because these folks have high opportunity costs, and the only way to get them as employees is to pay them enough to balance against what they would likely do instead: start a company.
If you’re in the middle, say levels 4-7, then a large, succeeding startup is probably the best bet. It offers better pay, more room for advancement and promotion, and decent equity.
If you’re at the bottom, especially if, say, you’re new to work, then early-stage startups can provide really great returns in expectation. This works a couple of ways. You won’t make a lot of cash compensation, but you’ll earn a lot of equity in expectation, possibly more than $10mm a year if the startup becomes a unicorn. Beyond that, you’ll gain a lot of career capital by getting to do a bit of everything and having to operate fairly independently in ways that you won’t get to in a larger company, which means that if you apply yourself you’ll be able to level up faster than you would in a more established place.
This is all assuming you’re best fit to be an employee rather than an entrepreneur, of course.
Many people want the world to be better.
I feel like there’s a lot of people who take this desire for a better world and then hope that they will be the one to make it all better. Maybe they’ll discover some grand idea that will improve many things and lead us to salvation!
I don’t think that’s what we need though. We mostly need all us little people to just be a bit nicer, a bit more trusting, a bit more compassionate, and then not quite so many grand schemes will be required because we’ll find we’re already living in a better world.
Thanks for your reply. Helps make a case that parliaments do something above and beyond the culture/tradition in which they are situated.
That said, I do want to respond to one thing you said:
Some would say that the aspects that matter are issues like trust, low corruption, respect of property rights, etc. But are there any cultures which do not value those things, which claim they are outright undesirable? I don’t think there are.
Up until two days ago I likely would have shared this sentiment, but I was talking with someone who grew up in Romania, and as he put it, some of these are not so obvious. For example, although corruption was rampant, no one thought of it that way. Instead it was framed as a gifting custom, and it was seen as normal to provide gifts to those providing services to you (doctors, teachers, government officials, etc.) because you want to show your respect and ensure good service. No one thought of this as bribery, so it seemed like they already had low corruption. And it’s easy to imagine folks balking at the idea that it is corruption; how dare you, they might say, come in and disturb our local gift-giving tradition!
That makes it quite easy for me to imagine similar stories for things like trust, property rights, etc.: a local equilibrium can become justified and then no one will think a thing is undesirable, or even necessarily realize that something undesirable is going on (in fact, locally it seems quite desirable!).
I’m sure this is addressed in the book I haven’t read, but I wonder how much of this is confounded by former British rule. That is, if you factor out parliamentary systems that were established after a legacy of British rule, would it still be the case that parliaments are better?
I’m guessing the argument is “yes,” but I’m not sure, and I’m somewhat suspicious that some of these effects could be cultural ones that just happen to come along with parliaments, making parliamentarism an effect rather than a cause.
I think of it as coming from two angles. One is that it’s a form of community building to expose folks to EA ideas who might otherwise not engage with them by doing so in a language they are familiar with. Two, it’s a way for EAs who are religious to explore how EA impacts other spheres of their life.
I think it’s also nice to have community by creating a sense of belonging. With EA being such a secular space normally, having a way to learn you’re not the only one trying to combine EA and practice of a religion is nice. Good to have folks to talk to, etc.