I think ethnography could be useful. But what I really want is for people to spend more time discussing why they make donations, prioritize certain causes above others, etc.
People write about this on the Forum all the time, but the number of people who post on the Forum is a tiny fraction of the number who donate a lot of money, want to work in a certain field, etc. I don’t mind if people have lots of hand-wavey bits in their models (lord knows I do); I mostly want to see what kinds of reasons they think they have:
How many of us mostly make decisions by putting a lot of trust in EA organizations?
How many of us found that the decisions we were making already matched up with what EA organizations recommended?
How many of us do any kind of independent analysis of orgs we support, or even read what those orgs write about themselves?
...and so on. Invisible motives can be very powerful, but they don’t have to be invisible. (Now that I’ve said this, I realize I should write up a “where I’m giving and why” post at the end of this year; thanks for the inspiration, Holly!)
Can you spell out why you’d like to see that? As I read your comment I immediately thought ‘I would also like to see this’ and then realised I wasn’t sure why self-reports of reasons would be useful.
This could be a long essay, but here are the two points which most stand out to me:
1. I’d like a culture of more honesty/transparency in EA specifically around charitable giving; it’s a huge part of the movement, but few people talk openly about their own giving decisions, which seems to have a few bad effects (for example, making direct work seem like a much bigger part of EA than it is, thus increasing the pressure on people to do direct work and making them feel like donating doesn’t matter).
2. I want to learn from people who have spent time thinking about giving, even if those thought processes aren’t completely clear or unbiased. I can’t possibly follow all of the interesting charities that might appeal to EAs, so seeing where people give is often really informative for me.

(I work for CEA, but these views are my own.)

+1.
Seems like there are lots of incentive effects & cognitive biases that’d be activated when someone writes up a public-facing account of their prioritization & donation decisions.
Well, the idea would be to try and write your way through those biases and incentives as best you can. The idea is that EA should have a culture where it’s fine to not have all the numbers and to have a personal pull in certain directions, as long as you can recognize this. I’d guess that 90+% of Giving What We Can members don’t have really distinct personal models for their donations, for example, and I’d be interested to hear how they choose instead.
the idea would be to try and write your way through those biases and incentives as best you can
I think a crux here is that I’m bearish about the community being able to collectively write its way through this in a way that’s positive on net.
It seems like you’re more bullish about that.
(I agree that getting more truth-tracking info about why folks are making the decisions they make is a good goal. I think we have a tactical disagreement about how to surface truth-tracking information.)

What is this bear/bull distinction?

https://www.investopedia.com/terms/b/bull.asp
https://www.investopedia.com/terms/b/bear.asp
I think that if a lot of people tried to do this, few would fully succeed, and most would mostly fail, but that we’d all learn a lot in the process and get better at bias-free belief reporting over time.
The EA community has become unusually good at some forms of communication (e.g. our online discussions are more civil and helpful than those almost anywhere else), and I think that’s partly a function of our ability to help each other improve through the use of group norms, even if no group member fully adheres to those norms.
I think that if a lot of people tried to do this, few would fully succeed, and most would mostly fail, but that we’d all learn a lot in the process and get better at bias-free belief reporting over time.
Right. I’m modeling some subset of the failures as negative expected value, and it’s not obvious to me that the positive impact of the successes would outweigh the impact of these failures.
The EA community has become unusually good at some forms of communication (e.g. our online discussions are more civil and helpful than those almost anywhere else)
Totally agree. I don’t understand why our communication norms are so good (compared to benchmarks).
Because I don’t have a believable causal model of how this came to be, I have a Hayekian stance towards it – I’m reluctant to go twiddling with things that seem to be working well via processes I don’t understand.
I’m reluctant to go twiddling with things that seem to be working well via processes I don’t understand.
To me, one of the things that has “worked well” historically has been “people in EA writing about why they’ve made decisions in great detail”. These posts tend to be heavily upvoted and have often been influential in setting the tone of discussion around a particular topic. I don’t think people should be forced or pressured to write more of them, but I also don’t see why more of them would turn the sign from positive to negative.
… but I also don’t see why more of them would turn the sign from positive to negative.
Ben Hoffman’s latest feels tangentially relevant to our disagreement here.

There are probably strong selection effects here:
People write up things / spotlight things that are straightforward to justify and/or make them look good.
People avoid things / downplay things that are opaque and/or unflattering.
(speculative) Perhaps more posts like this would increase the selection pressure, leading to a more distorted map of what’s going on / more distance between the map and the territory.
Zvi’s recent post feels tangentially relevant to our disagreement here:
This is a world where all one cares about is how one is evaluated, and lying and deceiving others is free as long as you’re not caught. You’ll get exactly what you incentivize.
To the extent that ethnography is anonymized, I could imagine people speaking more freely than they do in blog posts, interviews where they’re identified, etc.
I see this as something of a different question, i.e. “What portion of this disagreement is due to factors we can access through self-reflection and rationally discuss?” I would want the ethnography to get at things we’re too embedded in to see.
To what extent should we fund a counter organisation to, say, 80,000 Hours to re-research its decisions—an independent watchdog, so to speak?
The term “counter organization” sounds like a bad place to start. I think we currently live in a world where EA organizations are generally pretty transparent about their reasoning and open to being challenged in public, so I’m not sure what a specific “independent watchdog” might accomplish, but I’d be curious to see more details of a proposal in a Forum post!
(I work for CEA, but these views are my own.)
Thanks, I wrote a post here: https://forum.effectivealtruism.org/posts/jwZhhpyZXahkogbkJ/how-do-we-check-for-flaws-in-effective-altruism
Good point. I originally interpreted the comment to mean just an independent take on 80k topics, and I’m super-supportive of that, but I agree with you that it shouldn’t be adversarial.
Maybe post this as a separate question?