At some point in the 1990s or 2000s, the “whole person” concept became popular in the US national-security community for security clearance matters.
It distinguishes a surface-level impression of a person from an attempt to understand the whole person. The surface-level impression takes the worst of a person out of context, whereas the whole-person concept makes a genuine effort to evaluate the person and the odds that they are good to work with, and in which areas. Each subject warrants their own cost-benefit analysis in the context of the different work they might do, and more flexible people (e.g. younger people) and weirder people will probably have cost-benefit analyses that change somewhat over time.
In environments where evaluators are incompetent, lack the resources to evaluate each person, or believe that humans can’t be evaluated, there is a reasonable justification for ruling people out without making an effort to optimize.
Otherwise, evaluators should strive to make predictions and minimize the gap between those predictions of whether a subject will cause harm again and the reality that comes to pass. For example, they should make a genuine effort to distinguish between:
- harm caused by mental health problems,
- harm caused by mistakes rooted in unpreventable ignorance (e.g. the pauseAI movement),
- mistakes caused by ignorance that should have been preventable,
- harm caused by malice correctly attributed to the subject,
- harm caused by someone spoofing the point of origin, and
- harm caused by a hostile individual, team, or force covertly using SOTA divide-and-conquer tactics to disrupt or sow discord in an entire org, movement, or vulnerable clique.
See conflict vs. mistake theory.
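The idea of minimizing the gap between predictions and outcomes can be made concrete with a proper scoring rule such as the Brier score (mean squared error between forecast probabilities and what actually happened). This is a minimal sketch, not anything drawn from actual clearance procedures; the forecasts and outcomes below are hypothetical illustration data.

```python
def brier_score(predictions, outcomes):
    """Mean squared gap between forecast probabilities (0..1) and 0/1 outcomes.

    Lower is better; 0.0 means every forecast matched reality exactly.
    """
    assert len(predictions) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical data: an evaluator's forecast probability that each subject
# would cause harm again, and whether harm actually occurred (1) or not (0).
forecasts = [0.9, 0.2, 0.1, 0.7]
outcomes = [1, 0, 0, 0]

print(brier_score(forecasts, outcomes))  # 0.1375
```

An evaluator who rules everyone out without effort is effectively forecasting the same probability for every subject; tracking a score like this over time is one way to check whether the extra effort of whole-person evaluation is actually paying off in accuracy.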