I like this comment. To respond to just a small part of it:
> And not just by convincing others of "rationalism" or AI risk worries specifically (though I also don't understand why the author discounts this type of "winning")
I've also only read the excerpt, not the full post. There, the author seems to exclude/discount as "winning" only convincing others of rationalism, not of AI risk worries.
I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more likely to "win" was by making one better at convincing others of rationalism, then that "win" wouldn't provide any real value to the world; it could make the convincers rich and high-status, but only by profiting off of something like a pyramid scheme.
This would be similar to a person writing a book or teaching a course on something like how to get rich quick, when that person seems to have gotten rich quick only via those books or courses.
(I think the same concern might apply to convincing people of AI risk worries, if those worries were unfounded. But my view is that the worries are well-founded enough to warrant attention.)
But I think that, if rationalism makes people systematically more likely to "win" in other ways as well, then convincing others of rationalism:

- should also be counted as a "proper win"
- would be less like a pyramid scheme and more like someone who is genuinely good at running businesses and also makes money writing about their good approach to running them