Can you spell out why you’d like to see that? As I read your comment I immediately thought ‘I would also like to see this’ and then realised I wasn’t sure why self-reports of reasons would be useful.
This could be a long essay, but here are the two points which most stand out to me:
1. I’d like a culture of more honesty/transparency in EA around charitable giving specifically; it’s a huge part of the movement, but few people talk openly about their own giving decisions, which seems to have a few different bad effects (for example, making it seem like direct work is a much bigger part of EA than it is, thus increasing the pressure on people to do direct work and making them feel like donating doesn’t matter).
2. I want to learn from people who have spent time thinking about giving, even if those thought processes aren’t completely clear or unbiased. I can’t possibly follow all of the interesting charities that might appeal to EAs, so seeing where people give is often really informative for me.
(I work for CEA, but these views are my own.)
+1.
Seems like there are a lot of incentive effects & cognitive biases that’d be activated when someone writes up a public-facing account of their prioritization & donation decisions.
Well, the idea would be to try and write your way through those biases and incentives as best you can. EA should have a culture where it’s fine not to have all the numbers and to have a personal pull in certain directions, as long as you can recognize this. I’d guess that 90+% of Giving What We Can members don’t have really distinct personal models for their donations, for example, and I’d be interested to hear how they choose instead.
> the idea would be to try and write your way through those biases and incentives as best you can
I think a crux here is that I’m bearish about the community being able to collectively write its way through this in a way that’s positive on net.
It seems like you’re more bullish about that.
(I agree that getting more truth-tracking info about why folks are making the decisions they make is a good goal. I think we have a tactical disagreement about how to surface truth-tracking information.)
I think that if a lot of people tried to do this, few would fully succeed, and most would mostly fail, but that we’d all learn a lot in the process and get better at bias-free belief reporting over time.
The EA community has become unusually good at some forms of communication (e.g. our online discussions are more civil and helpful than those almost anywhere else), and I think that’s partly a function of our ability to help each other improve through the use of group norms, even if no group member fully adheres to those norms.
> I think that if a lot of people tried to do this, few would fully succeed, and most would mostly fail, but that we’d all learn a lot in the process and get better at bias-free belief reporting over time.
Right. I’m modeling some subset of the failures as negative expected value, and it’s not obvious to me that the positive impact of the successes would outweigh the impact of these failures.
> The EA community has become unusually good at some forms of communication (e.g. our online discussions are more civil and helpful than those almost anywhere else)
Totally agree. I don’t understand why our communication norms are so good (compared to benchmarks).
Because I don’t have a believable causal model of how this came to be, I have a Hayekian stance towards it – I’m reluctant to go twiddling with things that seem to be working well via processes I don’t understand.
> I’m reluctant to go twiddling with things that seem to be working well via processes I don’t understand.
To me, one of the things that has “worked well” historically has been “people in EA writing about why they’ve made decisions in great detail”. These posts tend to be heavily upvoted and have often been influential in setting the tone of discussion around a particular topic. I don’t think people should be forced or pressured to write more of them, but I also don’t see why more of them would turn the sign from positive to negative.
> … but I also don’t see why more of them would turn the sign from positive to negative.
There are probably strong selection effects here.
People write up things / spotlight things that are straightforward to justify and/or make them look good.
People avoid things / downplay things that are opaque and/or unflattering.
(speculative) Perhaps more posts like this would increase the selection pressure, leading to a more distorted map of what’s going on / more distance between the map and the territory.
Zvi’s recent post feels tangentially relevant to our disagreement here:
> This is a world where all one cares about is how one is evaluated, and lying and deceiving others is free as long as you’re not caught. You’ll get exactly what you incentivize.
Ben Hoffman’s latest feels tangentially relevant to our disagreement here.
What is this bear/bull distinction?
https://www.investopedia.com/terms/b/bull.asp
https://www.investopedia.com/terms/b/bear.asp