Remember what EA is about. Doing Good Better, and that’s it. No strict principles, no Bill of Rights, just honest math.
I don’t feel like you’ve engaged with the core of Will’s post here. He proposes that the best way to Do Good Better is to set up a panel. It sounds like you want to define EA in a way that makes setting up a panel “not what EA is about”. I could try to redefine EA so it doesn’t include whatever my least favorite EA charity is, but that wouldn’t constitute a valid argument for that charity being ineffective. You appeal to “honest math”, but you don’t demonstrate how math shows that setting up a panel is a bad idea.
BTW, even if you’re a utilitarian expected utility maximizer at heart, it can make sense to abandon that in some cases for game-theoretic reasons. See moral trade, Newcomb’s paradox, or this essay.
I’m paying 5 karma to contribute to Kbog’s defense—he has spent a large amount of time sincerely engaging with these arguments. I suggest reading through http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8nl
Was I trying to engage with the core of the post? What if I don’t want to engage with the core of the post? What if I already said enough things to sufficiently engage with it before it was posted?
"It sounds like you want to define EA in a way that makes setting up a panel 'not what EA is about'."
No… I’m saying that creating a set of EA guiding principles is not what EA is about. Try not to make up hidden meanings behind my comments, ok?
"BTW, even if you’re a utilitarian expected utility maximizer at heart, it can make sense to abandon that in some cases for game-theoretic reasons. See moral trade, Newcomb’s paradox, or this essay."
Really?
Moral trades are advantageous to the utilitarian who is making them (if they’re not advantageous to her, then she shouldn’t make them and it’s not a moral trade).
The Newcomb problem is about decision theory, not ethics. If one-boxing is the correct decision, then utilitarians ought to one-box; the same goes for two-boxing. Those are just two competing theories for maximizing expected utility.
That essay doesn’t make any clear arguments, and it doesn’t show that a utilitarian ought not to act like a utilitarian; it’s just Scott saying that things seem nicer if lots of people follow certain rules or heuristics.
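Since one-boxing and two-boxing are described above as two competing theories for maximizing expected utility, here is a minimal sketch of that point. The payoffs and predictor accuracy are illustrative assumptions, not figures from the discussion: evidential decision theory conditions the box contents on your choice, causal decision theory treats them as already fixed, and the same utility function therefore yields different recommendations.

```python
# Minimal sketch (assumed, illustrative numbers) of why one-boxing and two-boxing
# are two competing answers to the same question: "which action maximizes
# expected utility?" Evidential decision theory (EDT) conditions the prediction
# on your choice; causal decision theory (CDT) treats it as already fixed.

PREDICTOR_ACCURACY = 0.99  # assumption: how often the predictor guesses right
BIG = 1_000_000            # opaque box payout if you were predicted to one-box
SMALL = 1_000              # transparent box payout, always available

def evidential_eu(action):
    """EDT: P(box is full) depends on what you choose, via the predictor."""
    if action == "one-box":
        return PREDICTOR_ACCURACY * BIG
    return PREDICTOR_ACCURACY * SMALL + (1 - PREDICTOR_ACCURACY) * (BIG + SMALL)

def causal_eu(action, p_full):
    """CDT: P(box is full) is the same whatever you do; choosing can't change it."""
    return p_full * BIG + (SMALL if action == "two-box" else 0)

if __name__ == "__main__":
    # EDT favors one-boxing; CDT favors two-boxing for every value of p_full.
    print("EDT :", evidential_eu("one-box"), "vs", evidential_eu("two-box"))
    for p_full in (0.0, 0.5, 1.0):
        print(f"CDT (P(full)={p_full}):",
              causal_eu("one-box", p_full), "vs", causal_eu("two-box", p_full))
```

Under these assumed numbers EDT recommends one-boxing and CDT recommends two-boxing, which is the sense in which they are competing theories of expected-utility maximization rather than a disagreement about ethics.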
I downvoted both of these comments. I very rarely downvote comments.
“Was I trying to engage with the core of the post? What if I don’t want to engage with the core of the post? What if I already said enough things to sufficiently engage with it before it was posted?”
If you don’t want to engage with the post, don’t post.
If you want to point out that you have already engaged with the ideas in this post (which in your case I think is fair), then maybe link to your previous engagement as Peter did.
"I downvoted both of these comments. I very rarely downvote comments."
Okay. Thanks for telling me? I downvote people all the time. It’s not a big deal.
"If you don’t want to engage with the post, don’t post."
There is no obligation to respond to every point in a lengthy blog post in order to reply. If someone makes twenty claims, and one of them is false, I can point out that one of their claims is false and say nothing about the remaining nineteen. If I were saying “MacAskill’s blog post is totally wrong because of this one thing he said at the end,” you would have a point. But I didn’t say that.
"If you want to point out that you have already engaged with the ideas in this post (which in your case I think is fair), then maybe link to your previous engagement as Peter did."
I figured that was unnecessary, as the person I was replying to was already fully aware of what I had said in the other thread.
“There is no obligation to respond to every point in a lengthy blog post in order to reply. If someone makes twenty claims, and one of them is false, I can point out that one of their claims is false and say nothing about the remaining nineteen.”
Agreed. But you didn’t do that. You made a point which (without reading your supporting argumentation) interacted with none of what Will had said.
“I figured that was unnecessary, as the person I was replying to was already fully aware of what I had said in the other thread.”
Your first comment was actually in reply to Will MacAskill, the OP. I see no reason to assume he, or any third party reading, was fully aware of what you had already said. So you certainly didn’t ‘figure it was unnecessary’ for that reason. I’m not sure what your true reason was.
"Agreed. But you didn’t do that. You made a point which (without reading your supporting argumentation) interacted with none of what Will had said."
Sure it did. The OP suggested a body that made decisions based on some set of explicit principles. I objected to the idea of explicit principles.
"Your first comment was actually in reply to Will MacAskill, the OP."
Okay, well then let’s just be clear on what comments we’re referring to, so that we don’t confuse each other like this.
Here’s what happened: I argue in Thread A that making an EA gatekeeper panel would be a terrible idea. Then Thread B is created, where the OP argues for an EA gatekeeper panel guided by explicit principles. In Thread B, I state that I don’t like the idea of explicit principles.
Apparently you think I can’t say that I don’t like the idea of explicit principles without also adding “oh, by the way, here’s a link to other posts I made about how I don’t like everything else in your blog post.” Yes, I could have done that. I chose not to. Why this matters, I don’t know. In this case, I assumed that Will MacAskill, who is making the official statement on behalf of the CEA after careful evaluation and discussion behind the scenes, knew about the comments in the prior thread before making his post.
I think that you might have a reasonable argument, but in order for it to be a valuable contribution to this discussion you would have had to break down your argument more. I think that if you had done this, then your comment would not have been downvoted. If you have already said things before that would strengthen your argument, then a link to these previous arguments would have gone a long way and removed the need to repeat yourself.
I would think there are several smaller principles that go along with Doing Good Better which it would be helpful to have specified. For instance, they would cover cases where someone claims to be Doing Good Better but obviously and empirically isn’t (e.g. through murder, kidnapping, lying, scandal, increasing existential/suffering risks, movement damage, etc.).
It also seems like we’re aiming more for guidelines, not set-in-stone bylaws.
(EDIT: Not sure what kbog’s response was, but I just realized my comment may seem like I was anchoring on Bad Things to make Gleb look bad; that wasn’t my intent. In addition to being a bit silly, I was just listing things from most severe to least severe, and stopped partly because I am not sure exactly what principles would make good guidelines.)
Nope, just posted by accident and couldn’t figure out how to delete the comment.
I agree with you, and I’m anxious about creating an “official” broader definition of EA. That said, it would probably help prevent situations like this from arising, so it may be worth it.