I was responding mainly to the format. I don’t expect you to get complete answers to your earlier two questions, because there’s a lot more rationality methodology in EA than can be expressed in the amount of time I’d expect someone to spend on an answer.
If I had to put my finger on why the failure to answer those questions doesn’t feel as concerning to me as it seems to be for you, I’d say it’s because:
A) Just because the questions are hard to answer doesn’t mean EAs aren’t holding themselves and each other to a high epistemic standard.
B) Something about not letting the perfect be the enemy of the good, and about the urgency of other work. I want humanity to have some good universal epistemic tools, but I don’t currently have them, and I don’t really have the option of waiting to do good until I do. So I’ll just focus on the best thing my flawed brain sees to work on at the moment (using what fuzzy technical tools it has, while still being subject to bias), because I don’t have any other machinery to use.
I could be wrong, but my read from your comments on other answers is that we disagree most on B). For example, you think current EA work would be better directed if we were able to have far more formally rational discussions, to the point that EA work or priorities should be put on hold (or slowed down) until we can.
I think I disagree with you on both A and B, as well as some other things. Would you like to have a serious, high-effort discussion about it and try to reach a conclusion?