How would this be an “internal practice”? The only way this would work would be to have people publicly post their earn addresses.
“Internal” in the sense of being primarily intended to solve internal coordination problems, and primarily used in messaging within the community.
I think you underrate the cost of weirdness.
You gave a particular example of a causal pathway by which weirdness leads to bad stuff, but it doesn’t really cause me to change my mind because I was already aware of it as a failure mode. What makes you think I underrate the cost in comparison to the benefits of coordination?
While the kind of high-status EA who might be contacted this way might get more emails than they prefer, it’s important for them to be easily contactable by outsiders, because that allows for valuable interactions to happen.
They’d still have a normal email. Though there is a risk of moving to an equilibrium where non-paid emails get no attention, and I haven’t thought that through in detail.
It’s not clear to me that we are in a mess.
Well, that’s why I’m posting this—to get some data and find out :)
(I guess the title seemed to have turned a few people off, though)
In hindsight, I should have made the intended use-cases clearer in the post. I optimised for shipping it fast rather than not at all, but that had its costs.
The reason I wrote this was basically entirely motivated by problems I’ve encountered myself.
For example, I’ve spent this year trying to build an AI forecasting community, and faced the awkward problem of needing a critical mass of users while recruiting from a base with high opportunity costs and attention value (largely EA). This usually involves a painstaking process of thinking carefully about who we message and how much, being quite risk-averse, and preferring not to message people at all when we’re uncertain. I would have loved the ability to send paid emails, such that if we did happen to spam people, they could just claim some compensation. Moreover, this is a scalable strategy which would avoid the failure mode where projects like ours, which think a lot about attention costs, get deprioritised in favour of projects which don’t.
As another example, I’ve considered unilaterally launching initiatives that seemed important and that no one was doing (like this!), but that very busy people might have reservations/opinions about. This put me in a spot of making awkward trade-offs along the lines analysed above.
In addition to that, I added some problems that I’ve not personally experienced, but which seemed like they should happen due to basic microeconomics.
This is super helpful, thanks (and that’s a really awesome list of email hygiene tips, I’ve saved it).
I wonder whether educating and encouraging good email hygiene could be an easier solution (at least initially).
I think it would improve things on the margin, and also has a much smaller risk of landing us in a worse equilibrium, so it seems robustly good for people to do.
Still, I’m not super excited, because if you believe that the initial mess is a coordination problem, the solution is not for individuals to put in lots of effort to be helpful, but for everyone to jointly move to another game where the low-effort/incentivised action is to cooperate rather than defect.
On the topic of weirdness: I expect that if what I’m pointing to is a real problem, and paid emails would help the situation, then the benefits from becoming more effective at coordinating internally would massively outweigh reputational risks from increased weirdness.
I find it somewhat hard to elucidate the reasons I believe this (though I could try if you’d want me to), but some hand-wavy examples are Paul Graham’s thoughts that it’s almost always a mistake for startups to worry about competitors as opposed to focusing on building a good product (see paragraph 4), as well as extremely successful organisations with pretty weird internal practices (e.g. Bridgewater, Amazon).
I think the way to answer the question is: “given the distribution of equilibria we expect following this change, what are the expected costs and benefits, and how does that compare with the costs and benefits under the current equilibrium?” (as well as considering strategic heuristics like avoiding irreversible actions and unilateralist action.)
I don’t update much on your comment since it feels like it’s just pointing out a bunch of costs under a particular new equilibrium, without engaging enough with how likely this is or what the benefits would be. 
> If Julia Wise were prioritising paid emails in her role regarding community health, is she more likely to miss emails from people on the periphery of EA or who have less money, who are potentially very vulnerable?
Here, by assumption, Julia Wise already gets so many emails that she misses some/has to prioritise. So the question is: what gets prioritised currently, and would get prioritised under the new system? There would likely be a shift towards people with more money being more able to get their issues heard—but I’d expect it to be very small (e.g. initial email costs of $5-$25 might be enough). It might also allow her to find out about stuff she otherwise wouldn’t (“I don’t know if this is worth your time, though it might be, and if it wasn’t, here’s $10 to offset the attention cost”).
Though to be clear, I’ve not thought a lot about community health matters, and it’s not the area where I would pilot this.
To be clear, I’m not claiming you should do the entire analysis; that would be an isolated demand for rigour. Just that you should engage more with opposing points and say why they’re not convincing.
This was crossposted to LessWrong, replacing all the mentions of “EA” with “rationality”, mutatis mutandis.
I’m posting this as a first step towards collecting data. Poll is a good idea, thanks!
I’m unfortunately only publishing the transcript at this time. The audio contains some sections that were edited out for privacy reasons.
Thanks, that’s great to hear.
The prize has been going on for a while, which seems important, and I think the transparency of the Prize post is really important for making common knowledge of what kind of work there is demand for. So overall it’s pretty great.
The structure of the feedback looks to me like: “here’s the object-level content of the post, and here are 2–3 reasons we liked it”. I think you could be clearer about what you want to incentivise. More precisely, the current structure doesn’t answer:
How strong were the reasons relative to each other? (e.g. maybe removing Reason A would make the person win 2nd prize instead of 1st, but removing Reason B might make them win no prize)
Were the reasons only jointly sufficient to merit the prize, or might accomplishing only one of them have worked?
What other properties did the post display, which did not merit the prize? For example, maybe prize-meriting posts tend to be quite long—even though length is not something you want to incentivise on the margin.
Why did the posts end up ordered the way they did? Beyond “the black-box voting process gave that verdict” :) Currently I don’t know why SHOW was judged as deserving 4x the prize money of “The Case for the Hotel”, for example.
[Note: I double-checked with the moderators before posting this to ensure it was not too “marketingy”.]
When Tom and I came up with that, I don’t think we meant “belief” to be imbued with the usual philosophical connotations. Rather, we intended it to mean something like “an action-guiding, introspectively accessible representation of a state of affairs, existing independently of whether it is queried”.
When people ask me what I think about the world, I can often come up with lots of intelligent-sounding answers—but it is unfortunately rarer that my actual actions, plans and normative evaluations are suitably hooked up to, and crucially depend upon, those answers.