I’m going to be working at 80,000 Hours as a careers advisor starting in September 2021; the opinions I’ve shared here (and will share in the future) are my own.
I interpreted your comment as saying that I was “lambasting the foibles of being a well intentioned unilateralist”, and that I should not be doing so. If that was not the intent I’m glad.
The lesson I would want people to learn is “I might not have considered all the reasons people might do stuff”. See comment below.
This is closer, I think the framing I might have had in mind is closer to:
people underestimate the probability of tail risks.
I think one of the reasons why is that they don’t appreciate the size of the space of unknown unknowns (which in this case includes people pushing the button for reasons like this).
causing them to see something from the unknown unknown space is therefore useful.
I think last year’s phishing incident was actually a reasonable example of this. I don’t think many people would have put sufficiently high probability on it happening, even given the button getting pressed.
Yeah I guess you could read what I’m saying as that I actually think I should have pressed it for these reasons, but my moral conviction is not strong enough to have borne the social cost of doing so.
One read of that is that the community is strong enough in its social pressure to quiet bad actors like me from doing stupid harmful stuff we think is right.
Another is that social pressure is often enough to stop people from doing the right thing, and that we should be extra grateful to Petrov, and others in similar situations, because of this.
Either reading seems reasonable to discuss today.
This wasn’t intended as a “you should have felt sorry for me if I’d done a unilateralist thing without thinking”. It was intended as a way of giving more information about the probability of unilateralist action than people would otherwise have had, which seems well within the spirit of the day.
I also think it’s noteworthy that, in the situation being celebrated, the ability to resist social pressure pointed in the opposite direction to the way it points here, which seems like a problem with the current structure. I didn’t end up finding a good way to articulate this, and someone else has already said something similar.
It seems fairly likely (25%) to me that, had Kirsten not started this discussion (on Twitter), I would have pushed the button, because:
actually preventing the destruction of the world is important to me.
doing so, especially as a “trusted community member”, would hammer home the danger of well intentioned unilateralists in the way an essay can’t, and I think that idea is important.
despite being aware of LessWrong and having co-authored one post there, I didn’t previously understand how seriously some people took the game.
worse, I was in the dangerous position of having heard enough about Petrov Day to, when I read the email, think “oh yeah I basically know what this is about”, and therefore not read the announcement post.
I decided not to launch, but this was primarily because it became apparent through this discussion how socially costly it would be. I find people being angry with me on the internet unusually hard, and expect that pushing the button using the reasoning above could quite easily have cost me a significant amount of productive work (my median is ~ 1 week).
This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.
In my case it was the opposite—I spent several years considering only non-EA jobs as I had formed the (as it turns out mistaken) impression that I would not be a serious candidate for any roles at EA orgs.
NB—None of the things below were done with the goal of building prestige/signalling. I did them because they were some combination of interesting, fun, and useful to the world. I doubt I’d have been able to stick with any of them if I’d viewed them as purely instrumental. I’ve listed them roughly in the order in which I think they were helpful in developing my understanding. The signalling value ordering is probably different (maybe even exactly reversed), but my experience of getting hired by an EA org is that you should prioritise developing skill/knowledge/understanding very heavily over signalling.
As a teacher, I ran a high-school group talking about EA ideas, mostly focusing on the interesting maths. This involved a lot of thinking and reading on my part in order to make the sessions interesting.
Over the course of a few years, I listened to almost every episode of the 80k podcast, some multiple times.
I wrote about things I thought were important on the EA forum.
I volunteered for SoGive as an analyst, and had a bunch of exciting calls with people like GiveWell and CATF as a result.
I spent a bunch of time on Metaculus, including volunteering as a moderator and trying to write useful questions, though I ended up doing fairly well at forecasting by some metrics.
Sentinel seems promising.
I don’t think Linch’s claim here is that not bothering to edit out snark has led to high value, but rather that if a piece of work is flawed both in its snark and in the quality of its argument, the latter is more important to fix.
https://www.animaladvocacycareers.org/ seems like a good option to check out if you’re set on animal welfare work. Given that you’re thinking about keeping AI on the table, you should probably at least consider keeping pandemic prevention similarly on the table; it seems like a smaller step sideways from your current interests. Have you considered applying to speak to someone at 80,000 Hours?*

*I’ll be working on the 1-1 team from September, but this is, as far as I can tell, the advice I’d have given anyway, and shouldn’t be treated as advice from the team.
How do you approach identity? If ~no future people are “necessary”, does this just reduce to critical-level utilitarianism (but still counting people with negative welfare, can’t remember if critical level does that)? Are you ok with that?
Trying to summarise for my own understanding.
Is the below a reasonable tl;dr?

Total utilitarianism, except you ignore people who satisfy all of:
won’t definitely exist
have welfare between 0 and T
where T is a threshold chosen democratically by them, and lives with positive utility are taken to be “worth living”.

If so, does this reduce to total utilitarianism in the case that people would choose not to be ignored if their lives were worth living?
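One way to write my reading of this view as a formula (my own hedged formalisation, not taken from the original post; symbols are my labels):

```latex
% W = total value; N = "necessary" people (will definitely exist);
% C = contingent people; w_i = welfare of person i; T = the chosen threshold.
% Contingent people with welfare strictly between 0 and T are ignored;
% those with negative welfare still count.
W \;=\; \sum_{i \in N} w_i \;+\; \sum_{\substack{i \in C \\ w_i \le 0 \;\text{or}\; w_i \ge T}} w_i
```

If T = 0, every term counts and this collapses back to total utilitarianism, which is the reduction I’m asking about.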
Forecasting: Metaculus intro resources, partially complete introductory video series, book.
I think plastic straws are a v good option here, when you consider that:
paper straws are just a worse experience for ~everyone
metal/glass are arguably worse for the environment, given the number of uses and the resources required to produce them (see also reusable bags)
some disabled people rely on straws, and paper replacements are terrible for them
This is certainly closer to the PlayPumps end [actively harmful once you think properly about it] than the ALS end [not a huge issue, but it’s not like stopping ALS would actually be bad in a vacuum].
Is the claim here that EA orgs focusing on GCRs didn’t think GoF research was a serious problem and consequently didn’t do enough to prevent it, even though they easily could have if they had just tried harder?
My impression is that many organisations and individual EAs were both concerned about risks due to GoF research and working on trying to prevent it. A postmortem about the strategies used seems plausibly useful, as does a retrospective on whether it should have been an even bigger focus, but the claim as stated above I think is false, and probably unhelpful.
Overall I liked this post, and in particular I very strongly endorse the view that it’s worth spending nontrivial time/energy/money to improve your health, energy, productivity etc. I don’t have a strong view about how useful the specific pieces of advice were; my impression is that the literature is fairly poor in many of these areas. Partly because of this, my favourite section was:
One thing people sometimes say when I tell them there is a small chance taking some pill will fix their problems is that this seems somehow like cheating because it doesn’t require any lifestyle changes. As if because it’s easy you don’t really deserve to have it fixed? I don’t get it but suffice to say that if for ~$20 you can trial something with a simply massive expected value (even if it’s unlikely to work) and usually with almost no downside (you can just stop taking it after two weeks if it doesn’t work) you should definitely try that thing. Think of it like buying a lottery ticket but with much better odds and a chance of actually making you consistently happier in the long-run.
It’s noteworthy that the above applies not just to “taking some pill”, but in fact to any low-cost-of-trying intervention which might prove substantially beneficial in the long run.
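To make the expected-value point concrete, here’s a toy calculation. The $20 cost comes from the quote above; the probability of success and the dollar value of a fix are purely invented for illustration:

```python
# Toy expected-value sketch for a cheap trial of an intervention.
# Only the $20 cost is from the quote; p_works and benefit_if_works
# are invented illustrative numbers, not claims from the post.
def trial_ev(cost, p_works, benefit_if_works):
    """Expected value of trying a cheap intervention once."""
    return p_works * benefit_if_works - cost

# A $20 two-week trial with a 5% chance of fixing a problem
# you'd value $5,000 to have fixed:
ev = trial_ev(cost=20, p_works=0.05, benefit_if_works=5000)
print(ev)  # 230.0 — strongly positive despite the 95% chance of failure
```

Even under much more pessimistic invented numbers (say a 1% success chance), the expected value stays positive, which is the “lottery ticket with much better odds” point.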
To that end, I was surprised to see the following at the end (as I think its framing is contradicted by the above).
Less ideal solutions (but still definitely worth considering) include patching over the problem by trying things like nootropics, antidepressants, or other medication.
It seems straightforwardly wrong to characterise medically treating e.g. clinical depression or ADHD as a “less ideal solution” which is merely “patching over the problem”. For many, treatment will be necessary for at least some time even if lifestyle adjustments and therapy are sufficient management in the longer term. For many others, medicine is a necessary part of the long-term solution, and possibly also a sufficient long-term solution.

I really liked this quote from Howie in a recent 80k podcast* about this.
* I’m linking to this because I think it makes the point well, but should probably disclose that I’ll be working at 80k from September. The opinions above are only intended to represent my views, including the interpretation of what Howie’s saying in the quote.
I agree, a single rejection is not close to conclusive evidence, but it is still evidence on which you should update (though, depending on the field, possibly not very much).
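As a toy illustration of how small that update can be (all probabilities here are invented for the sake of the example): if competitive fields reject even strong candidates most of the time, one rejection barely moves the needle.

```python
# Toy Bayes update on a single job rejection.
# All probabilities are invented illustrations, not data.
def posterior_after_rejection(prior_fit, p_reject_if_fit, p_reject_if_unfit):
    """P(strong candidate | one rejection), via Bayes' rule."""
    evidence = prior_fit * p_reject_if_fit + (1 - prior_fit) * p_reject_if_unfit
    return prior_fit * p_reject_if_fit / evidence

# Suppose even strong candidates are rejected 60% of the time in a
# crowded field, versus 90% for weak candidates. One rejection then
# moves a 50% prior down only to about 40%.
print(posterior_after_rejection(0.5, 0.6, 0.9))  # ≈ 0.4
```

The closer `p_reject_if_fit` is to `p_reject_if_unfit`, the less informative a single rejection is, which is the “depending on the field” caveat.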