This seems really spot-on to me. My 2 years of startup experience (at Cruise, a different self-driving car company broadly similar to Aurora) often feels like the most important period thus far for my personal growth. In fact, I think it’s likely that I should have continued in that role for another year, rather than shifting into direct work when I did.
bwr
[This comment previously consisted of an objection that misunderstood the point of this post, and was mostly deleted]
This is an interesting topic that I hadn’t heard discussed before, and I appreciate learning about these benefits!
While I understand that your goal here was to list arguments in favor of competitive debate and leave any counterarguments out of scope, I also think that in doing so you might have fallen short of the stated promise to
do so in the spirit of anti-debate – pointing out the limitations of my arguments where I notice them, and leaving open the possibility that anti-debate could be a superior alternative.
Overall, I think that this aim is incompatible with your decision that
[the disadvantages of competitive debate] – and therefore any all-things-considered conclusions – fall outside of the scope of this post.
unless you plan to write further posts following up on those disadvantages.
In particular, this post naturally raises the question “and what are the negative impacts of competitive debate on the debaters, if any?”, to which there seem to be some obvious answers, and probably some less obvious ones.
I think that listing benefits on its own is a fine basis for a post; it just doesn’t seem to me like “the spirit of anti-debate”.
There are no obvious structural connections between knowing correct moral facts and evolutionary benefit.
...
There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.
I haven’t read Lukas Gloor’s post, so I’m not sure whether this counts as “subjectivism” and therefore is implausible to you, but:
Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit. There might be objective facts about exactly which moral systems provide this benefit, and believing in a useful moral system could help you to enact that moral system.
For example, it could be the case that what is “good” is what benefits your genes without benefiting you personally. People could thus correctly believe that there are some actions that are good, in the same way they believe that some actions are “helpful”. I think, and have been told, that there are mathematical reasons to think this particular instantiation is not the case, but I haven’t fully understood them yet.
Overall this is a good point, but I have one nit:
I don’t think this follows; in particular, the policy “everyone does the thing which they are best at in the world” doesn’t actually yield a prescription for most people, since most people are not the best in the world at anything (unless you take a weirdly granular view of things, like “the best Orthodontist named P. Sherman, with an office at 42 Wallaby Way, Sydney”, at which point the reductio stops seeming obviously absurdum).