Has this post happened anywhere?
FHI staff were asked to give advice at the highest level of government in the U.K. and the Czech Republic
Is there more info anywhere on the connection between FHI and the Czech govt?
I think the first step, if you believe you’re less competent than your colleagues believe you to be, is to find out who’s wrong—you, them, or both? And are you wrong about your assessment of yourself, or about what your colleagues think of you, or both? Think about what questions you could ask or what metrics you could measure to answer these questions.
If it’s your colleagues who are wrong, is it worth correcting them? They understand the risks; they know that recruitment is hit and miss. Is it your responsibility to protect them? You can live in fear of the moment when you’ll be found out, or you can cherish the days when you are allowed to do the job, and accept your fate with equanimity. You’re not getting your head cut off; you can choose how you feel about this.
Oh, I would’ve sworn that was already the case (with the understanding that, as you say, there is less volunteering involved, because with the “inner” movement being smaller, more selective, and with tighter/more personal relationships, there is much less friction in the movement of money, either in the form of employment contracts or grants).
So, to simplify your problem: I help someone, but somewhere else there is someone else who I wasn’t able to help. Wat do?
You’re in this precise situation regardless of quantum physics; I guarantee you won’t be able to save everyone in your personal future light cone either. So I think that should simplify your question a bunch.
Why would this change your metaethical position? The reason you’d want to help someone else shouldn’t change if I make you aware of some additional people somewhere whom you’re not capable of helping.
Both here and on LW, I have /allPosts bookmarked, “Sorted by Daily”; that helps. I haven’t used the front page in ages.
Just as a data point, I didn’t read OP as an attack at all.
I also don’t think that if you have overall negative feedback, you necessarily have to come up with some good things to say as well, just to balance things out and “be nice”. OP said what they wanted to say, and it reads to me like valuable feedback, including the subtle undertone of frustration.
As a data point on the object level, I think that magic sorting makes sense on a website with intense traffic (HN, reddit), not on a site with a few posts a day.
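To make “magic sorting” concrete, here’s a minimal Python sketch of the kind of time-decay scoring HN reportedly uses (the function name and all numbers are illustrative, not any site’s actual parameters):

```python
# Illustrative HN-style "gravity" scoring: votes push a post up,
# age pulls it down; the exponent controls how fast old posts sink.
# All parameters here are illustrative, not any site's actual values.
def gravity_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    return (points - 1) / (age_hours + 2) ** gravity

posts = [
    ("post A", 50, 2.0),    # (title, upvotes, age in hours)
    ("post B", 10, 1.0),
    ("post C", 120, 30.0),  # most votes, but a day old
]

# On a high-traffic site, many same-age posts compete and votes decide
# the order; with a few posts per day, age gaps dominate instead.
for title, points, age in sorted(posts, key=lambda p: -gravity_score(p[1], p[2])):
    print(f"{title}: {gravity_score(points, age):.2f}")
```

With only a few posts a day, the age gaps in the denominator swamp the votes in the numerator, so the ranking collapses to roughly reverse-chronological order anyway, just with extra opacity.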
Ah, got it.
Oh, I thought you were referring to some kind of legal costs; you mean the costs of vetting. Right. As has been noted: EA is vetting-constrained, EA is network-constrained.
But this is the case with employees as well, isn’t it? It’s just about vetting people in general.
One thing I notice, looking at the 80k job board, is that not that many EA(-adjacent) orgs are interested in remote workers.
The costs to set up contractor relationships are considerable
I’m curious, how does that work in the US? Why is contract work different in this regard from receiving services from any other type of supplier?
Hmm, it’s not so much the classic rationalist trait of overthinking that I’m concerned about. It’s more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of “practicing thinking”. If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can’t let your brain know that that’s what you’re trying to achieve.
Second, “thinking for real” sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that tells you what thinking work needs to be done, decreasing the chance that you’ll waste time producing research which looks nice and impressive and all that, but in the end doesn’t help anyone improve the world.
I guess if you come up with technology that allows people to plug into the world-saving machine at the level of “doing research-assistant kind of work for other people who know what they’re doing” and gradually work their way up to “being one of the people who know what they’re doing”, that would make this work.
You wouldn’t be “practicing thinking”; you could easily convince your brain that you’re actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you’re working on is for real.
And, by the same token, you’d be working on something that (someone believes) needs to be done. And maybe sometimes you’d realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here’s why, etc.—and that’s how you’d gradually grow to be one of the people who know what they’re doing.
So, yeah, proceed on that, I guess.
Ah.
An important facet of the Middle of the Middle is that people don’t yet have the agency or context needed to figure out what’s actually worth doing, and a lot of the obvious choices are wrong.
This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they’re looking for someone to manage them.
Other people are happy to do things on their own, but they don’t have the necessary skills and experience, so they will end up doing something that’s useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge.
Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.
I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do.
Funny: I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they’re supposed to do.
I’ll take your invitation to treat this as an open thread (I’m not going to EAG).
before you’re ready to tackle anything real ambitious… what should you do?
Why not tackle less ambitious goals?
I’m going to speak for myself again:
I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.
As far as very bad outcomes go, I’m not worried about extinction that much; dead people cannot suffer, at least. What I’m most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano’s first tale of doom) and then spreading that hell across the universe.
The very good outcomes would mean that we’re recognizably beyond the point where bad things could happen: we’ve built a superintelligence, it’s well-aligned, and it’s clear to everyone that there are no risks anymore. The superintelligence will prevent wars, pandemics, asteroids, supervolcanoes, disease, death, poverty, suffering, you name it. There will be no such thing as “existential risk”.
Of course, I’m keeping an eye on the developments and I’m ready to reconsider this position at any time; but right now this is the way I see the world.
If humanity wipes itself out, those wild animals are going to continue suffering forever.
If we only partially destroy civilization, we’re going to set back the solution to problems like wild animal suffering until (and unless) we rebuild civilization. (And in the meantime, we will suffer as our ancestors suffered.)
If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we’re foregoing the creation of an immeasurably larger measure of extremely positive experiences to balance things out.
On the other hand, if we just manage to pass through the imminent bottleneck of potential destruction and emerge victorious on the other side—where we have solved coordination and AI—we will have the capacity to solve problems like wild animal suffering, global poverty, or climate change with a snap of our fingers, so to speak.
That is to say, problems like wild animal suffering will either be solved with trivial effort a few decades from now, or we will have much, much bigger problems. Either way (this is my personal view, not necessarily that of other “long-termists”), current work on these issues will be mostly in vain.
Most often I downvote posts when I’m reasonably confident that it would be a waste of time for others to open and read them (confused posts, off-topic, rambling, trivial, etc.); my goal with voting is to make recommendations to others.
I rarely downvote comments, typically only when someone’s not playing nice, but that’s more on LW than here.
I think it’s more than a matter of the quantity of thinking; I think there’s a qualitative difference in whether the underlying motive for even starting the train of thought is “I intend to do X, so I have to plan the steps that constitute X”, or whether it’s “X scares the fuck out of me and I have to avoid doing X in a way that System 2 can rationalize to itself, so it’s either (1) go stare in the fridge, (2) masturbate, (3) deep-clean the bathroom, or (4) start a Google Doc brainstorming all the concerns I should take into account when prioritizing the various sub-tasks of X. Hmm, 4 sounds like something System 2 would eat up, the absolute dumbass.”
Re: productivity, from personal experience, meditation also seems to help with overthinking. I think that Rationalists in particular have the nasty habit of endlessly intellectualizing about how to beat akrasia and get themselves to do X; it seems that as you meditate, the addiction to this mental movement fades and then it’s not appealing anymore, so you go do X instead.
A good example of a ToC diagram is this old Leverage Research plan.