Context: I’ve worked in various roles at 80,000 Hours since 2014, and continue to support the team in a fairly minimal advisory role.
Views my own.
I agree that the heavy use of a poorly defined concept of “value alignment” has some major costs.
I’ve been moderately on the receiving end of this one. I think it’s due to some combination of:
I take Nietzsche seriously (as Derek Parfit did).
I have a strong intellectual immune system. This means it took me several years to get enthusiastically on board with utilitarianism, longtermism and AI safety as focus areas. There’s considerable variance in the speed with which key figures decide to take an argument at face value and deeply integrate it into their decision-making. I think variance on this dimension is good—as in any complex ecosystem, pace layers are important.
I insisted on working mostly remotely.
I’ve made a big effort to maintain an “FU money” relationship to the EA community, including a mostly non-EA friendship group.
I am more interested in “deep” criticism of EA than some of my peers. E.g. I’ve tweeted about Peter Thiel on death with dignity and about Nietzsche on EA, and I think Derek Parfit made valuable contributions but was not one of the Greats.
Some of my object-level views have been quite different to those of my peers over the years. E.g. I’ve had reservations about the maximisation meme ever since I got involved, along the lines of those recently expressed by Holden.
I mostly quit 80,000 Hours in autumn 2015, mainly due to concerns about strategy, messaging and fit with my colleagues. I took on a 90% time role again in 2017, for roughly 4 years.
I have some beliefs and traits that some people find suggestive of a lack of moral seriousness (e.g. I’m into two-thirds utilitarianism; I don’t lose sleep over EA/LT concerns; I’m fairly normie by EA standards).
There are some advantages to the status quo, and I don’t have a positive proposal for improving on it.
If someone at 80K or CEA wants to pay me to think about it for a day, I’d be up for that. Maybe I’ll do it anyway. I hesitate because I’m not sure how tractable this is, and I would not be surprised if, on further reflection, I came to think the status quo is roughly optimal at current margins.