Milan_Griffes
Karma: 1,018
Giving more won’t make you happier
If slow-takeoff AGI is somewhat likely, don’t give now
What consequences?
Doing good while clueless
What we talk about when we talk about life satisfaction
when I’m making public comments without a time crunch
My hunch is that even when there's a time crunch, fewer words give more bang for the buck :-)
I’ve been toying around with the following:
There are two motivations for donating money – egotistic (e.g. it feels good to do) & altruistic (e.g. other people are better off)
The egotistic motivation is highly scope insensitive – giving away $500 feels roughly as good as giving away $50,000
Probably also scope insensitive qualitatively – giving $5,000 to a low-impact charity feels about as good as giving $5,000 to an effective charity (especially if you don’t reflect very much about impact)
This scope insensitivity is baked in – knowing about it doesn’t make it go away
EA orgs sometimes say that giving effectively will make you happier (e.g. 80k, e.g. GWWC)
These arguments ignore the scope insensitivity of the egotistic motivation – donating some money to charity will probably make you happier than not donating any at all. It’s less clear that donating more money to charity will make you happier than donating some (and especially unclear that the donation <> happiness link scales anything close to linearly)
Ergo, EA should stop recommending effective giving on egotistic grounds, and probably even encourage people to not do effective giving if they’re considering it because they want to be happier (related)
If the above is true, effective giving won’t make you much happier than low-impact giving, and donating large amounts won’t make you much happier than donating small amounts
e.g. $100 to GiveDirectly feels about as good as $1,000 to GiveDirectly; e.g. saving one (statistical) life via AMF feels about as good as saving two (statistical) lives via AMF (the toy model below makes this concrete)
Advocating for effective giving on egotistic grounds (e.g. “it will make you happier”) is sorta a false promise
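To make the scope-insensitivity point concrete, here's a toy model – my own sketch, assuming the warm glow from a donation grows roughly logarithmically with the amount (an assumption for illustration, not a result from the sources above):

```latex
% Assumed log-shaped warm glow g as a function of donation size d
g(d) = k \log(1 + d)
\quad\Rightarrow\quad
\frac{g(1000)}{g(100)} = \frac{\ln(1001)}{\ln(101)} \approx \frac{6.91}{4.62} \approx 1.5
```

Under this assumed curve, a donation 10x as large feels only ~1.5x as good – roughly the pattern the bullets above describe.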
“Just take the expected value” – a possible reply to concerns about cluelessness
I'm also curious to hear more about how teams were selected.
Is intellectual work better construed as exploration or performance?
Why I’m skeptical of cost-effectiveness analysis
Reposting as a comment because the mods told me this wasn't thorough enough to be a post.
Briefly:
The entire course of the future matters (more)
Present-day interventions will bear on the entire course of the future, out to the far future
The effects of present-day interventions on far-future outcomes are very hard to predict
Any model of an intervention’s effectiveness that doesn’t include far-future effects isn’t taking into account the bulk of the effects of the intervention
Any model that includes far-future effects isn’t believable because these effects are very difficult to predict accurately
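One way to state the worry in symbols – my notation, a sketch rather than anything from the original argument – is to split an intervention's total value into a near-term and a far-future component:

```latex
V = V_{\text{near}} + V_{\text{far}},
\qquad
\mathbb{E}[V] = \mathbb{E}[V_{\text{near}}] + \mathbb{E}[V_{\text{far}}]
```

A cost-effectiveness model that drops V_far can estimate E[V_near] credibly; but if E[V_far] plausibly dominates and can't be estimated reliably, the model is precise about the small term and silent about the one that carries most of the value.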
How tractable is cluelessness?
Was chatting with Gwern about this. An excerpt of their thoughts (published with permission):
Wellbutrin/Bupropion is a weird one. Comes up often on SSC and elsewhere as surprisingly effective and side-effect free, but also with a very wide variance and messy mechanism (even for antidepressants), so anecdotes differ dramatically.
With antidepressants, you’re usually talking multi-week onset and washout periods, so blinded self-experiments would take something like half a year for decent sample sizes. It’s not that easy to get if you aren’t willing to go through a doctor (ie no easy ordering online from a DNM or clearnet site, like modafinil)…
Finally, as far as I can tell, my personal problems have more to do with anxiety than depression, and anti-anxiety is not what bupropion is generally described as best for, so my own benefit is probably less than usual. I thought about it a little but decided it was too weird and hard to get, and self-experiments would take too long.
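A rough back-of-envelope check on the "half a year" figure (my numbers, not Gwern's), assuming each blinded on/off block needs a couple of weeks for onset plus a washout week, and that you want a handful of blocks per condition:

```latex
% Assumed: 8 blinded blocks (4 on, 4 off), ~3 weeks each (onset + washout)
8 \times 3\ \text{weeks} = 24\ \text{weeks} \approx 6\ \text{months}
```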
Comments on any issue are generally welcome but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01) so it won’t make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention because it has a very high weight right now (2).
I think this begs the question.
If modeler attention is distributed in proportion to the model's current weighting (such that discussion of high-weighted issues receives more attention than discussion of low-weighted issues), it'll be hard to identify mistakes in the current weighting.
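A toy illustration of that feedback loop, with made-up weights loosely echoing the example above (not the actual model or its numbers):

```python
# Toy illustration with hypothetical numbers, not the actual model: suppose
# reviewer attention is allocated in proportion to each issue's current weight,
# and suppose the education weight is mistakenly low (say it "should" be ~1.0).
current_weights = {"immigration": 2.0, "education": 0.01}  # hypothetical values

n_comments_reviewed = 200  # total scrutiny available over some period (assumed)

total = sum(current_weights.values())
for issue, weight in current_weights.items():
    share = weight / total
    expected_reviews = share * n_comments_reviewed
    print(f"{issue}: {share:.1%} of attention, ~{expected_reviews:.0f} reviews")

# Output:
#   immigration: 99.5% of attention, ~199 reviews
#   education: 0.5% of attention, ~1 reviews
# The weight most in need of correction (education) is the one least likely to
# be examined, so the mistake tends to persist.
```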
Great to see so many folks working on cool stuff at the EA Hotel!
Thank you for taking the time to write this up, and for everything else you’ve done to make this happen.
I think paper books are an under-appreciated technology these days; very excited that paper versions of Rationality are coming out!
Thanks for making this happen!
Emergent Ventures is looking to fund projects on “advancing humane solutions to those facing adversity – based on tolerance, universality, and cooperative processes”
I recommend shooting Tyler Cowen an email to check on your idea's chances before submitting an application. He's pretty responsive to email & there are areas Emergent Ventures almost certainly won't fund, so checking first can save a bunch of time.
These two directions put us in a difficult position. Given our limited resources, if we go narrower, then we’ll make our site worse for the broader audience, and vice versa.
Has 80k considered spinning off a sister org that focuses on the broader audience?
Seems like serving the narrow-career-advice market & serving the broad-career-advice market are both important EA projects.
80k could be comparatively well-positioned to address both, given its track record & funder base.
This is great – consider making it a standalone post?
Found this surprising given the positive valence of the rest of the comment. Could you expand a little on why you don’t think Leverage et al. are a good use of time/money?