LessWrong/Lightcone Infrastructure
Ruby
Sounds an awful lot like LessWrong, but competition can be healthy[1] ;)
- ^ I think this is less likely to be true of things like “places of discussion”, because of splitting the conversation / eroding common knowledge, but I think it’s fine/maybe good to experiment here.
I didn’t scrutinize, but at a high level, the new intro article is the best I’ve seen yet for EA. Very pleased to see it!
I think 20% might be a decent steady state, but at the start of their involvement I’d like to see new aspiring community builders do something like six months of intensive object-level work/research.
Fwiw, my role is similar to yours (granted, LessWrong has a much stronger focus on Alignment), and I currently feel that a very good candidate for the #1 reason I will fail to steer LW to massive impact is that I’m not and haven’t been an Alignment researcher (perhaps Oli hasn’t been either, but he’s a lot more engaged with the field than I am).
Again, thanks for taking the time to engage.
I think this post is maybe a format that the EA Forum hasn’t done before: it’s intended to be a crowd-sourced repository of advice. This is also maybe not obvious because I “seeded” it with a lot of content I thought was worth sharing (and also to make it less sad if it didn’t get many contributions – so far it has gotten a few).
As I wrote:
I’ve seeded this post with a mix of advice, experience, and resources from myself and a few friends, plus various good content I found on LessWrong through the Relationships tag. The starting content is probably not the very best content possible (if it was, why make this a thread?), but I wanted to launch with something. Don’t anchor too hard on what I thought to include! Likely as better stuff is submitted, I’ll move some stuff out of the post text and into comments to keep the post from becoming absurdly long.
I also solicit disagreement:
Please disagree with advice you think is wrong! (It probably makes sense to add notes/links about differing views next to advice items in the main text, so worth the effort to call out stuff you disagree with.)
If you’re okay with it, I will add your points of disagreement into the main post.
It is definitely not comprehensive! I put this together within a few hours over the weekend; I did not aim to start off with everything that’s relevant. (Somehow it still reached 10k words.) If someone has good content on childbirth, pregnancy, etc., I think that would be great to add. On reflection, I’m in favor of it being a behemoth, with people hunting for the sections relevant to them and/or later distillation.
it’s not clear to me that a lot of this is actually very good advice for a lot of people.
I agree – giving universal advice is extremely hard. The approach I’d advise is for people to read it and, if a piece of advice seems like a good idea for them, try it. But also “consider reversing all advice you hear”.
I’m also not sure why you linked to a list of ‘negative expectation value’ ‘infohazard’ questions that you don’t recommend people do?
Because it’s funny and fun. Note that I didn’t write the text around it – like most of the text, it’s stuff I copied in. Also, it’s not a major world-ending infohazard, and it’s clearly marked. And as a commenter wrote on LW, it’s only an infohazard if your relationship is bad (I think his bar for good relationships is too high, but I agree that healthier relationships/people aren’t at as much risk).
And finally, most bizarrely… why is 50% of the ‘sex’ advice section a survey on what it is like to have sex with one particular guy?
Going back to how I was just seeding the crowd-sourced post with content I had: that was something I had on hand. I didn’t have other material and didn’t feel like going hunting for advice, but thought it’d be good if other people had things they wanted to recommend be added to that section. As I write in that section:
How to have good and healthy sex is beyond the intended scope for this thread, but I welcome people to add links to external resources here (or submit them via comments with spoiler text/warning, or the Google Form).
I agree that it’d be much better if that one link were not 50% of the list! But I actually think it’s a helpful read for people who don’t find it TMI.
Hi Larks, thanks for taking the time to engage.
I’m not sure how relevant this is to the EA forum?
I personally think that for Effective Altruists to be effective, they need to be healthy/well-adjusted/flourishing humans, and therefore something as crucial as good relationship advice ought to be shared on the EA Forum (much the same as productivity, agency, or motivation advice).
I didn’t mention it in the post, but part of the impetus for this post came from Julia’s recent Power Dynamics between people in EA post, which discusses relationships, and it seemed like collecting broader advice on that would make for a healthier community overall. Mm, that’s a point I’d emphasize – healthy relationships between individuals make for a healthy community, especially when the individuals are working within and across EA orgs.
In terms of thinking about why solutions haven’t been attempted, I’ll plug Inadequate Equilibria, though it probably provides a better explanation for why problems in the broader world haven’t been addressed. I don’t think the EA world is yet in an equilibrium, so when things don’t get done it’s because of one of: {it’s genuinely a bad idea; it seems like the thing you shouldn’t be unilateral on and no one has built consensus; sheer lack of time}.
Good comment!!
Most ideas for solving problems are bad, so your prior should be that if you have an idea, and it’s not being tried, probably the idea is bad;
A key thing here is to be able to accurately judge whether the idea would be harmful if tried or not. “Prior is bad idea != EV is negative”. If the idea is a random research direction, it probably won’t hurt anyone if you try it. On the other hand, for example, certain kinds of community coordination attempts deplete a common resource and interfere with other attempts, so the fact that no one else is acting is a reason to hesitate.
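To make that concrete with made-up numbers (purely illustrative): suppose your prior is 90% that an idea is bad. If a failed attempt costs only a small amount $c$, borne by you, while success yields a benefit $B$, then

$$\mathbb{E}[\text{try}] = 0.1\,B - 0.9\,c > 0 \iff B > 9c,$$

so a cheap, non-interfering idea can easily be worth trying despite the pessimistic prior. For coordination attempts that deplete a common resource, $c$ is large and paid by the whole community, and the sign readily flips.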
Going to the people who you think maybe ought to be acting and asking them why they’re not doing the thing is probably something that should be encouraged and welcomed? I expect in most cases the answer will be “lack of time” rather than anything more substantial.
For LessWrong, we’ve thought about some kind of “karma over views” metrics for a while. We experimented a few years ago but it proved to be a hard UI design challenge to make it work well. Recently we’ve thought about having another crack at it.
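To gesture at the kind of metric I mean, here’s a purely illustrative sketch – the field names, smoothing scheme, and numbers are made up for this comment, not LessWrong’s actual schema or implementation:

```typescript
// Illustrative "karma over views" metric (hypothetical fields, not
// LessWrong's real schema). A raw karma/views ratio wildly overrates
// low-view posts, so this version smooths toward a site-wide average.

interface PostStats {
  karma: number;
  views: number;
}

// Acts like adding `priorViews` phantom views at the assumed site-average
// karma-per-view rate, so low-view posts regress toward the mean.
function karmaPerView(
  post: PostStats,
  siteAvgKarmaPerView = 0.05, // assumed site-wide average (made up)
  priorViews = 100            // smoothing strength (made up)
): number {
  const pseudoKarma = siteAvgKarmaPerView * priorViews;
  return (post.karma + pseudoKarma) / (post.views + priorViews);
}

// A 50-karma post with 400 views vs. a 5-karma post with 10 views:
console.log(karmaPerView({ karma: 50, views: 400 })); // 55/500 = 0.11
console.log(karmaPerView({ karma: 5, views: 10 }));   // 10/110 ≈ 0.09, not the raw 0.5
```

Part of why the UI is hard is exactly this kind of choice: any smoothing that keeps the metric from overrating low-view posts also makes the displayed number harder to explain to users.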
Yes! This. Thank you for writing.
I often get asked why LessWrong doesn’t hire contractors in the meantime while we’re hiring, and this is the answer: getting contractors to do good work would require all of the onboarding that getting a team member to do good work requires.
He also gave a talk at the EA Summit 2014.
I don’t mean that I expect EA Forum software to replace Swapcard for EAG itself; just that the goal is to provide similar functionality all year round.
My understanding (which could be wrong, and I hope they don’t mind me mentioning it on their behalf) is that the EA Forum dev team is working to build Swapcard functionality into the forum, including the ability to import your Swapcard data.
In the meantime, I agree with the OP.
I bet that if they are impressive to you (and your judgment is reasonable), you can convince grantmakers at present.
But there already is from the major funders.
Thank you for the detailed reply!
I agree that Earning to Give may make sense if you’re neartermist or don’t share the full moral framework. This is why my next sentence begins “if you’d be donating to longtermist/x-risk causes.” I could have emphasized these caveats more.
I will say that if a path is not producing value, I very much want to demotivate people from pursuing that path. They should do something else! One should only be motivated by things that deserve motivation.
I’ve looked at the posts you shared and I don’t find them compelling.
I think the best previous argument for Earning to Give is that you as a small donor might be able to fund things that the major funders won’t or can’t, but my current sense is that the bar is sufficiently low that it is now very hard to find such opportunities (within the x-risk/longtermist space and framework, at least). Things that seem like remotely a good idea get funding now.
I think that the reason we’re not hiring more people isn’t for lack of money, as discussed on that post.
There might be crazy future scenarios where EA suddenly needs a tremendous amount of money, more than all the funders currently have (or will have), in which case additional funds might be useful, but... it seems that if we really thought this was the case, the big funders should raise the bar and not fund as many things as generously as they do.
Earn to Learn
Well dang.
Maybe the Visible Thoughts Project.
[Speaking from LessWrong here:] based on our experiments so far, I think there’s a fair amount more work to be done before we’d want to widely roll out a new voting system. Unfortunately for this feature, development is paused while we work on some other stuff.
This kind of utilitarian reasoning seems not too different from the kind that would get one to commit fraud to begin with. I don’t think whether it’s legally required to be returned or not makes the difference – morality does not depend on laws. If someone else steals money from a bank and gives it to me, I won’t feel good about using that money even if I don’t have to give it back and would use it much better.