# david_reinstein

Karma: 1,446

davidreinstein.org

I am a Senior Economist at Rethink Priorities (https://www.rethinkpriorities.org/our-team); previously I was an Economics lecturer/professor for 15 years.

I’m working to impact EA fundraising and marketing; see https://bit.ly/eamtt

And on projects bridging EA, academia, and open science (esp. the ‘Unjournal’) … see bit.ly/eaprojects

My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and ‘to which cause?’), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.

Podcasts: “Found in the Struce”: https://anchor.fm/david-reinstein

and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)

• Would donations and interventions now, in anticipation of the potential famine, be more effective?

• I think they do want to allow speculative, less ‘we know for sure’ claims that are “epistemically signposted” and reasoning-transparent. So to some extent that connotation for the asterisk is not so far off.

That said, I agree I’ve heard phrases like “that home run record comes with an asterisk”, but I’m somewhat doubtful this will come to mind in this context.

• I’m very impressed with this work. I’d love to see more resources devoted to it. This ties in with something I’ve been encouraging and trying to bring together. I made a Gitbook HERE to organize the project and its discussion. Please engage and comment on that if you are interested.

IMO big gains here will include:

• the use of quantified uncertainty to let us expand the set of interventions we can consider, in a principled and transparent way

• a better understanding of how moral parameters and worldviews should (or should not) affect cause prioritization

• a better appreciation and understanding of the importance of the Fermi-Monte Carlo approach, leading to more general adoption and more careful comparisons among EA research and funding orgs
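To illustrate the Fermi-Monte Carlo approach mentioned above: instead of multiplying point estimates, we draw each uncertain input from a distribution and propagate the uncertainty through to the bottom line. This is a minimal sketch with entirely made-up, hypothetical parameters (not from any real program):

```python
import random

random.seed(0)  # reproducible draws for this sketch

def cost_per_outcome():
    """One Monte Carlo draw of a Fermi-style cost-effectiveness estimate.

    All distributions and parameters below are hypothetical placeholders.
    """
    cost = random.lognormvariate(8, 0.5)    # program cost in dollars
    reach = random.lognormvariate(6, 0.8)   # number of people reached
    effect = random.betavariate(2, 8)       # chance each person benefits
    return cost / (reach * effect)          # dollars per good outcome

draws = sorted(cost_per_outcome() for _ in range(10_000))
# Report a central estimate with a 90% interval, not just a point value.
print(f"median ${draws[5000]:.0f}, 90% interval "
      f"${draws[500]:.0f}-${draws[9500]:.0f}")
```

The payoff is the interval itself: two interventions with similar medians but very different spreads can then be compared in a principled, transparent way.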

• Very interesting and welcome. This is distinct from, but has some overlap with, the Unjournal.

We should coordinate: some work not relevant for one might be passed to the other, Unjournal work might be rewritten for Asterisk, and we will face some overlapping implementation issues. Please reach out.

• not so much when they hear you talk about the cool stuff that someone else should do.

Maybe that feels a bit unfair/non-steelmanny to me? There are other ways of motivating and helping others and the process than just saying ‘wouldn’t it be great if someone solved the alignment problem’, etc.

Such as:

• Encouraging people who are working on the problem

• Providing inputs and support to others working on important problems

• Helping communicate and explain the work that is being done, in turn helping people coordinate

• # Some feedback please, esp. if it’s about the content, theory of change (ToC), methods, etc.

Maybe not detailed feedback, but I think you should give some feedback, especially to applicants who are particularly EA-aligned and follow EA Forum-style epistemic and discussion norms.

I think you should also encourage applicants to briefly respond.

And ideally this should be added to the public conversation too.

Why? Because we are in a low-information space, EA is a question not an answer, and my impression is that we are very uncertain about approaches and theories of change, especially in the X-risk & LT space.

I don’t think we make progress by ‘just charging forward on big stuff’ (not that you are advocating that). A big part of many of these projects, and of their impact, is ‘figuring out and explaining what we should be doing and why’. So if the grant-making process can add substantial value to that, it’s worth doing (if it has a high benefit/cost ratio, which I argue it does).

You may know and appreciate something the applicants do not, and vice versa. (Public) conversation moves us forward.

## “But this is better done in more systematic ways/contexts”

Maybe, but often these careful assessments don’t happen. I highly rate public feedback in academia as part of the review process where possible, because in academia people rarely read others’ work carefully (everyone is trying to ‘publish their own’). In EA this is probably less of a problem, as we have better norms, but still.

You, the grantmaker have read at least some parts of their project and taken it very seriously in ways that others will not. And you have some expertise and knowledge that may be transferable, particularly given our low information space.

## But “It’s very hard to accurately change someone’s plans based on quick feedback”

That is OK. You don’t need to change their plans dramatically. The information you give them will still feed into their world model, and if they respond, vice versa. And even better if it can be made public in some way.

## But “It’s about personal factors” (OK, that’s ~different)

“No, because of personal factors or person-project fit reasons” is probably the most common situation.

I agree that this type of feedback is a bit different and doesn’t pertain to my arguments above so much. Still, I think people would really appreciate some feedback here. What skills are they missing? What value have they failed to demonstrate? (Here an ounce of personalized feedback could be supplemented by a more substantial body of ‘generalized feedback’.)

• But ultimately, I think you are basically saying you are trying to maximise your counterfactual impact, no? Not the impact that can be traced to you, but the impact that you have, through all channels.

• “Write a Philosophical Argument That Convinces Research Participants to Donate to Charity”

Has this ever been followed up on? Is their data public?

• Very detailed and interesting, thanks.

One clarification/suggestion: where you give sort-of ‘null results’, e.g.,

> There was no correlation with SAT scores. And notably, there was no correlation with studying economics, which suggests that these effects may not necessarily be driven by having learned expected value theory.

Can you do more to illustrate the confidence/credible intervals on these (above, on the correlation coefficient), so we can get a sense of how tightly bounded the result is, and how much confidence we can have that ‘any difference is likely to be small, at most’?

To ~belabor the point, it would be nice to get a sense of the extent to which results like these (and others like ‘no significant correlation’ or ‘no reliable relationship’) could simply reflect an (un)lucky draw in a context of low statistical power. Or, conversely, are these ‘real, tightly bounded nulls’?
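To make the point concrete: the same near-zero correlation can be a tightly bounded null or nearly uninformative, depending on sample size. A minimal sketch using the standard Fisher z-transform approximation for a 95% interval on a Pearson correlation (the r and n values below are illustrative, not figures from the post):

```python
import math

def corr_ci(r, n):
    """Approximate 95% confidence interval for a Pearson correlation
    via the Fisher z-transform. Inputs here are purely illustrative."""
    z = math.atanh(r)              # Fisher transform of r
    se = 1 / math.sqrt(n - 3)      # standard error on the z scale
    crit = 1.959964                # ~97.5th percentile of the normal
    lo, hi = z - crit * se, z + crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# A 'null' of r = 0.02 is tightly bounded with n = 5,000 ...
print(corr_ci(0.02, 5000))   # roughly (-0.008, 0.048)
# ... but nearly uninformative with n = 40.
print(corr_ci(0.02, 40))     # roughly (-0.29, 0.33)
```

Reporting the interval (rather than just “no correlation”) lets readers see directly which of these two situations a null result is in.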

• I think having kids is widely seen as changing your perspective on what’s important, maybe toward a narrower moral circle (toward your kids and away from others). This could be a cost to consider? E.g.,

• Are you engaging in motivated reasoning … or committing other reasoning fallacies?

I propose the following epistemic check using Elicit.org’s “reason from one claim to another” tool.

Whenever you have a theory that

Feed this tool your theory, negating one side or the other[1]

and/or

And see if any of the arguments it presents seem equally plausible to your arguments for

If so, believe your arguments and conclusion less.

Caveat: the tool is not working great yet, and often requires a few rounds of iteration: selecting the better arguments and telling it “show me more like this”, or feeding it some arguments.

1. ^

Or the contrapositives of either

• I had a similar question. Well stated. One answer is various arguments that “sentient valenced AGIs won’t maximise happiness of themselves” as noted by other commenters.

But I don’t think that is satisfying, because most of the arguments (AFAIK) and appeals against AI risk don’t even mention this. So I think the appeal seems to take on board our feeling that “even if AIs take over and make themselves super happy with all the paper clips, that still feels bad”.

• More from Elicit.org’s “reason from one claim to another”

I am responsible for my family and local community ~ I should donate to the global poor

I am responsible for my family and local community ➟ I feel some moral obligation to them. ➟ I feel equally compelled to care about others more globally. ➟ Therefore, I should donate to the global poor.

I am responsible for my family and local community ➟ It is an innate moral obligation to care for loved ones ➟ The global poor are as deserving of our assistance as loved ones.

I am responsible for my family and local community ➟ Lived through periods of poverty ➟ Would not wish this on anyone ➟ Wouldn’t want my children to grow up in such a world ➟ I should donate to the global poor

I am responsible for my family and local community ➟ I have a wealth of resources. ➟ My opportunities for making a difference are larger than most. ➟ Someone else can care for my immediate surroundings. ➟ The less fortunate of the world need support. ➟ I should donate to the global poor

• From Elicit.org’s “reason from one claim to another”

I’m a vegan abolitionist ➟ I want animals to have equal or greater moral consideration than humans do. ➟ At present, large food companies would be unlikely to favour the abolition of farming over the reduction of suffering, so we should work with them. ➟ This will enable us to simultaneously cause greater improvements in animal welfare and reduce future farming intensity

• > GWWC is already quite well-known and referenced as ‘the place you go to donate 10% of your income’. So if a lot of people are coming onto your page with that goal in mind, then it would make sense that the layouts that centre that option and make it as frictionless as possible will do better

Thanks, that makes sense to me in a general sense.

I was thinking in this direction too but having a hard time putting in words ‘why seeing options other than the expected one would make me less likely to follow through’.

Can you dive a little deeper into what the actual ‘friction’ is, or what it is about seeing pledges other than the one I was planning that would make me less likely to continue? I guess my thought was that the mechanism would be indecision (‘I need to take more time to think about this’), or maybe a sort of ‘hey, I am over-achieving here; do I really need to signal that I’m a 10-percenter when I could much more easily be a 1-percenter?’ … but then I need to think about it more, so I don’t decide in the moment.

> You could try testing the 10% pledge next to the further pledge without the 1% pledge,

This would be interesting, I agree.

> Really key thing feels like a post-pledge survey. ‘Did you already know what you would pledge when you went on our website?’ ‘If so, did you consider giving at a different level when you saw the options?’ etc. I’m sure you’d get a good response rate as people would be motivated to ensure others completed the pledge. Or if you already have this information, it would be really useful to see it!

I think we would get some information from this. (AFAIK we don’t have it but I could ask.) I’m not convinced it would be ‘fully informative’, because of the usual caveats about selection bias and people not always knowing/​remembering what was in their minds. But still, it seems worth doing!

# Giving What We Can—Pledge page trial (EA Market Testing)

16 May 2022 22:39 UTC
55 points