Selecting RLHF human raters for desirable traits?
Epistemic status: I wrote this quickly (by my standards) and I have ~zero expertise in this domain.
Introduction
It seems plausible that language models such as GPT-3 inherit (however haphazardly) some of the traits, beliefs, and value judgments of the human raters doing RLHF. For example, Perez et al. (2022) find that models trained via RLHF are more prone to make statements corresponding to Big Five agreeableness than models not trained via RLHF. This is presumably (in part) because human raters gave positive ratings to behavior exhibiting such traits.
Given this, it seems plausible that selecting RLHF raters for more desirable traits—e.g., low malevolence, epistemic virtues / truth-seeking, or altruism—would result in LLMs instantiating more of these characteristics. (In a later section, I will discuss which traits seem most promising to me and how to measure them.)
It’s already best practice to give human RLHF raters reasonably long training instructions and have them undergo some form of selection process. For example, for InstructGPT, the instruction manual was 17 pages long and raters were selected based on their performance in a trial which involved things like ability to identify sensitive speech (Ouyang et al., 2022, Appendix B). So adding an additional (brief) screening for these traits wouldn’t be that costly or unusual.
Clarification
Strictly speaking, talking about stable traits or dispositions of LLMs is inaccurate. Given different prompts, LLMs simulate wildly different characters with different traits. So the notion of LLMs inheriting dispositions from human RLHF raters is somewhat misleading.
We might reformulate the path to impact as follows: if the RLHF raters who train an LLM have traits X, then a (slightly) larger fraction of the characters or simulacra that the LLM tends to simulate will exhibit traits X. This increases the probability that the eventual character(s) that transformative AIs will “collapse on” (if this ever happens) will have traits X.
Open questions
I don’t know how the RLHF process works in detail. For example, i) to what extent is the behavior of individual RLHF raters double-checked or scrutinized, either by AI company employees or other RLHF raters, after the initial trial period is over, and ii) do RLHF raters know when the trial period has ended? In the worst case, trolls could behave well during the initial trial period but then, e.g., deliberately reward offensive or harmful LLM behavior for the lulz.
Fortunately, I expect that at most a few percent of people would behave like this. Is this enough to meaningfully affect the behavior of LLMs?
Generally, it could be interesting to do more research on whether and to what extent the traits and beliefs of RLHF raters influence the type of feedback they give. For example, it would be good to know whether RLHF raters who score highly on some dark triad measure in fact systematically reward more malevolent LLM behavior (see the sketch after this list for what such an analysis could look like).
Which traits precisely should we screen RLHF raters for? I make some suggestions in a section below.
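To make the open question about rater traits and feedback a bit more concrete, here is a minimal sketch (in Python) of what such an analysis could look like. The dataset, file name, and column names are hypothetical placeholders for illustration; I am not aware of any such dataset actually existing.

```python
# Minimal sketch: do raters' dark-triad scores predict how favorably they
# rate model outputs that independent reviewers flagged as harmful?
# "rater_data.csv" and its columns are hypothetical placeholders.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("rater_data.csv")  # one row per rater (hypothetical)

# Spearman rank correlation between each rater's Short Dark Triad score and
# the mean reward they assigned to completions flagged as harmful.
rho, p_value = stats.spearmanr(
    ratings["sd3_score"],
    ratings["mean_reward_on_flagged_completions"],
)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A positive, non-trivial correlation would suggest that screening raters on such measures could actually shift the reward signal; a null result would weaken the case for this whole proposal.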
Positive impact, useless, or negative impact?
Why this might be positive impact
Pushing for the adoption of such selection processes now increases the probability that they will be used when training truly transformative AI. Arguably, whether or not current-day LLMs exhibit desirable traits doesn’t really matter all that much. However, if we convince AI companies to adopt such selection processes now, this will plausibly increase the probability that they will continue to use these selection processes (if only because of organizational inertia) once they train truly transformative AIs. If we only do so six months before the singularity, AI companies might be too busy to adopt such practices.
Of course, the training setup and architecture of future transformative AIs might be totally different. But they might also be at least somewhat similar.
If (transformative) AIs really inherit, even if in a haphazard fashion, the traits and beliefs of RLHF raters, then this increases the expected value of the long-term future as long as RLHF raters are selected for desirable traits. For example, it seems fairly clear that transformative AIs with malevolent traits would increase s-risks and x-risks.
This is probably especially valuable if we fail at aligning AIs. That is, if we successfully align our AIs, the idiosyncratic traits of RLHF raters won’t make a difference because the values of the AI are fully aligned with the human principals anyways. But unaligned AIs might differ a lot in their values. For example, an unaligned AI with some sadistic traits will create more expected disvalue than an unaligned AI that just wants to create paper clips.
It might already be valuable to endow non-transformative, present-day AIs with more desirable traits. For example, having more truthful present-day AI assistants seems beneficial for various reasons, such as a more informed populace, more truth-tracking and nuanced political discourse, and increased cooperation and trust. Ultimately, truthful AI assistants would also help us with AI alignment. For much more detail, see Evans et al. (2021, chapter 3).
Why this is probably not that impactful
This doesn’t solve any problems related to inner alignment or mesa-optimization. (In fact, it might increase risks related to deceptive alignment, but more on this below.)
Generally, it’s not clear that the dispositions or preferences of AIs will correspond in some predictable way to the kind of human feedback they received. It seems clear that current AIs will inherit some of the traits, views, and values of human RLHF raters, at least on distribution. However, as the CoinRun example showcases, it’s difficult to know what values an AI is actually learning as a result of our training. That is, off-distribution behavior might be radically different than what we expect.
There will probably be many RLHF raters. Many of the more problematic traits, such as psychopathy or sadism, are relatively rare, so they wouldn’t have much of an influence anyway.
People won’t just give feedback based on what appeals to their idiosyncratic traits or beliefs. They are given detailed instructions on what to reward. This means that working on the instructions that RLHF raters receive is probably more important. However, as mentioned above, malevolent RLHF raters or “trolls” might deliberately do the opposite of what they are instructed to do and reward, e.g., sadistic or psychopathic behavior. Also, instructions cannot cover every possible case, so in unclear situations, the idiosyncratic traits and beliefs of human RLHF raters might make a (tiny) difference.
The values AGIs learn during training might change later as they reflect more and resolve internal conflicts. This process might be chaotic and thus reduce the expected impact of any intervention that focuses on instilling particular values right now.
Generally, what matters are not the current LLMs but the eventual transformative AIs. These AIs might have completely different architectures or training setups than current systems.
Why this might be negative impact
RLHF might actually be net negative, and selecting for desirable traits in RLHF raters (insofar as it has an effect at all) might exacerbate these negative effects. For instance, Oliver Habryka argues: “In most worlds RLHF, especially if widely distributed and used, seems to make the world a bunch worse from a safety perspective (by making unaligned systems appear aligned at lower capabilities levels, meaning people are less likely to take alignment problems seriously, and by leading to new products that will cause lots of money to go into AI research, as well as giving a strong incentive towards deception at higher capability levels)”. For example, the fact that Bing Chat was blatantly misaligned was arguably positive because it led more people to take AI risks seriously.
On the other hand, Paul Christiano addresses (some of) these arguments here and overall believes that RLHF has been net positive.
In general, this whole proposal is not an intervention that makes substantial, direct progress on the central parts of the alignment problem. Thus, it might just distract from the actually important and difficult parts of the problem. It might even be used as some form of safety washing.
Another worry is that pushing for selection processes will mutate into selecting for traits we don’t particularly care about. For instance, OpenAI seems primarily concerned with issues that are important to the political left.[1] So maybe pitching OpenAI (or other AI companies) the idea of selecting RLHF raters according to desirable traits will mostly result in a selection process that upholds a long list of “woke” constraints, which, in some instances, might conflict with other desirable traits such as truthfulness. However, it might still be net positive.
Which traits and how?
I list a few suggestions for traits we might want to select for below. All of the traits I list arguably have the following characteristics:
i) it plausibly affects existential or suffering risks if present in transformative AIs;
ii) AI assistants exhibiting more of it would be beneficial for the long-term future, or at least not harmful;
iii) it is uncontroversially viewed as (un)desirable;
iv) it is (reliably and briefly) measurable in humans.
Ideally, any trait which we want to include in an RLHF rater selection process should have these characteristics. The reasons for these criteria are obvious but I briefly elaborate on them in this footnote.[2]
This isn’t a definitive or exhaustive list by any means. In fact, which traits to select for, and how to measure them (perhaps even developing novel measurements) could arguably be a research area for psychologists or other social scientists.
Dark tetrad traits / malevolence
One common operationalization of malevolence is the dark tetrad, comprising Machiavellianism, narcissism, psychopathy, and sadism. I have previously written on the nature of dark tetrad traits and the substantial risks they pose. It seems obvious that we don’t want any AIs to exhibit these traits.
Fortunately, these traits have been studied extensively by psychologists. Consequently, brief and reliable measures of these traits exist, e.g., the Short Dark Tetrad (Paulhus et al., 2020) or the Short Dark Triad (Jones & Paulhus, 2014). However, since these are merely self-report scales, it’s unclear how well they work in situations where people know they are being assessed for a job.
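For concreteness, here is a minimal sketch of how a brief Likert-style screen of this kind could be scored. The items, subscale assignments, and cut-off below are made up for illustration and are not the published Short Dark Tetrad; only the mechanics (averaging responses per subscale and applying a simple threshold rule) are representative.

```python
# Illustrative scoring of a short Likert-style dark-tetrad screen.
# Items, subscale mapping, and the cut-off are hypothetical placeholders,
# not the published SD4 instrument.
from statistics import mean

# Candidate's responses: item id -> answer on a 1 (disagree) to 5 (agree) scale.
responses = {"m1": 2, "m2": 1, "n1": 3, "n2": 2, "p1": 1, "p2": 1, "s1": 1, "s2": 2}

subscales = {
    "machiavellianism": ["m1", "m2"],
    "narcissism": ["n1", "n2"],
    "psychopathy": ["p1", "p2"],
    "sadism": ["s1", "s2"],
}

# Average the responses belonging to each subscale.
subscale_scores = {
    name: mean(responses[item] for item in items)
    for name, items in subscales.items()
}

# Example screening rule: flag candidates who exceed an (arbitrary,
# illustrative) cut-off on any subscale.
CUTOFF = 3.5
flagged = any(score > CUTOFF for score in subscale_scores.values())
print(subscale_scores, "flagged:", flagged)
```

Of course, the self-report problem mentioned above remains: a simple cut-off like this is easy to game once applicants know what is being measured.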
Truthfulness and epistemic virtues
(I outlined some of the benefits of truthfulness above, in the third bullet point of this section.)
It’s not easy to measure how truthful humans are, especially in assessment situations.[3] Fortunately, there exist reliable measures for some epistemic virtues that correlate with truthfulness. For example, the Argument Evaluation Test (Stanovich & West, 1997) or the Actively Open-minded Thinking scale (e.g., Baron, 2019). See also Stanovich and West (1998) for a classic overview of various measures of epistemic rationality.
Still, none of these measures are all that great. For example, some of these measures, especially the AOT scale, have strong ceiling effects. Developing more powerful measures would be useful.
Pragmatic operationalization: forecasting ability
One possibility would be to select human raters above some acceptable threshold of forecasting ability, as forecasting skill correlates with epistemic virtues. The problem is that very few people have a public forecasting track record, and measuring people’s forecasting ability is a lengthy and costly process.
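That said, once resolved forecasts exist, scoring them is mechanically simple; the expensive part is eliciting enough forecasts and waiting for resolution. Here is a minimal sketch, with made-up forecasts and an illustrative acceptance threshold that is not a recommendation.

```python
# Mean Brier score for a candidate rater's probabilistic forecasts on
# binary questions. Lower is better; always guessing 50% yields 0.25.
def brier_score(forecasts):
    """forecasts: list of (predicted_probability, outcome) pairs, outcome in {0, 1}."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Made-up example forecasts for one candidate.
candidate_forecasts = [(0.8, 1), (0.3, 0), (0.6, 1), (0.9, 0)]
score = brier_score(candidate_forecasts)

THRESHOLD = 0.20  # illustrative cut-off, not a recommendation
print(f"Brier score: {score:.3f}, accept: {score <= THRESHOLD}")
```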
Cooperativeness, harm aversion, altruism
In some sense, altruism or benevolence is just the opposite of malevolence,[4] so perhaps we could just use one or the other. HEXACO honesty-humility (e.g., Ashton et al., 2014) is one very well-studied measure of benevolence. Alternatives include the self-report altruism scale (Rushton et al., 1981) or behavior in economic games such as the dictator game (a minimal scoring sketch for the latter appears at the end of this subsection).
Cooperativeness, however, is a somewhat distinct construct. Others have written about the benefits of making AIs more cooperative in this sense. One measure of cooperativeness is the cooperative personality scale by Lu et al. (2013).
Harm aversion could also be desirable because it might translate into (some form of) low-impact AIs. On the other hand, (excessive) instrumental harm aversion can come into conflict with consequentialist principles.
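As a concrete illustration of the dictator-game option mentioned above: scoring amounts to recording what fraction of an endowment a candidate transfers to an anonymous recipient. The endowment and allocations below are made-up values.

```python
# Dictator-game allocation as a crude behavioral proxy for altruism.
# The endowment and the candidates' allocations are made-up illustration values.
ENDOWMENT = 10.0  # e.g., dollars the candidate may split with an anonymous recipient

def giving_fraction(amount_given: float) -> float:
    """Fraction of the endowment transferred to the recipient (0 = fully selfish)."""
    return amount_given / ENDOWMENT

candidates = {"rater_a": 5.0, "rater_b": 0.0, "rater_c": 2.5}
scores = {name: giving_fraction(amount) for name, amount in candidates.items()}
print(scores)  # {'rater_a': 0.5, 'rater_b': 0.0, 'rater_c': 0.25}
```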
Other traits
As mentioned above, this is by no means an exhaustive list. There are many other traits which could be desirable, such as empathy, tolerance, helpfulness, fairness, intelligence, effectiveness-focus, compassion, or wisdom. Other possibly undesirable traits include spite, tribalism, partisanship, vengefulness, or (excessive) retributivism.
References
Ashton, M. C., Lee, K., & De Vries, R. E. (2014). The HEXACO Honesty-Humility, Agreeableness, and Emotionality factors: A review of research and theory. Personality and Social Psychology Review, 18(2), 139-152.
Baron, J. (2019). Actively open-minded thinking in politics. Cognition, 188, 8-18.
Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., … & Saunders, W. (2021). Truthful AI: Developing and governing AI that does not lie. arXiv preprint arXiv:2110.06674.
Forsyth, L., Anglim, J., March, E., & Bilobrk, B. (2021). Dark Tetrad personality traits and the propensity to lie across multiple contexts. Personality and Individual Differences, 177, 110792.
Lee, K., & Ashton, M. C. (2014). The dark triad, the big five, and the HEXACO model. Personality and Individual Differences, 67, 2-5.
Lu, S., Au, W. T., Jiang, F., Xie, X., & Yam, P. (2013). Cooperativeness and competitiveness as two distinct constructs: Validating the Cooperative and Competitive Personality Scale in a social dilemma context. International Journal of Psychology, 48(6), 1135-1147.
Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., … & Kaplan, J. (2022). Discovering Language Model Behaviors with Model-Written Evaluations. arXiv preprint arXiv:2212.09251.
Rushton, J. P., Chrisjohn, R. D., & Fekken, G. C. (1981). The altruistic personality and the self-report altruism scale. Personality and Individual Differences, 2(4), 293-302.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342.
Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127(2), 161.
Though, to be fair, this snapshot of the instruction guidelines actually seems fair and balanced.
Criterion i) is important because otherwise the trait is not very consequential; ii) is obvious; iii) is more or less necessary because otherwise we couldn’t convince AI companies to select according to these traits, either because they would disagree or because they would fear public backlash; iv) is required because if we can’t reliably measure a trait in humans, we obviously cannot select for it. The shorter the measures, the cheaper they are to employ, and the easier it is to convince AI companies to use them.
Though dark tetrad traits correlate with a propensity to lie (Forsyth et al., 2021).
For instance, HEXACO honesty-humility correlates highly negatively with dark triad traits (e.g., Lee & Ashton, 2014).