Forum Team, maybe link “alignment” at https://forum.effectivealtruism.org/wrapped to this comment rather than the Wikipedia page? (If I’d been labeled “evil” I think I’d much rather be reassured that it’s for a completely irrelevant reason than linked to the D&D reference.)
For those who don’t program, this is what ChatGPT says this code means:
“This code defines a function getAlignment that takes in an object called results and returns a string indicating a combination of good/evil and lawful/chaotic alignments.
The function first initializes two variables, goodEvil and lawfulChaotic, to the string ‘neutral’ and ‘Neutral’, respectively. It then checks the engagementPercentile property of the results object. If the engagement percentile is less than 0.33, goodEvil is set to the string ‘evil’. If the engagement percentile is greater than 0.66, goodEvil is set to the string ‘good’.
The function then calculates the ratio of commentCount to postCount in the results object, and uses this ratio to set the value of lawfulChaotic. If the ratio is less than 3, lawfulChaotic is set to ‘Chaotic’. If the ratio is greater than 6, lawfulChaotic is set to ‘Lawful’.
Finally, the function checks if both lawfulChaotic and goodEvil are equal to ‘neutral’. If they are, the function returns the string ‘True neutral’. Otherwise, it returns the concatenation of lawfulChaotic and goodEvil with a space in between.”
I don’t code, so I can’t tell whether this is accurate; please let me know if it’s off.
https://github.com/ForumMagnum/ForumMagnum/blob/5f08a68cfd2eb48d5a2286962cd70ddfea9a97a6/packages/lesswrong/server/resolvers/userResolvers.ts#L322-L339
I think it looks at engagement (I assume time spent on the Forum) and the comments/posts ratio.
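Based purely on ChatGPT’s description above, the linked function would look roughly like this. This is a hedged reconstruction, not the actual ForumMagnum source (see the linked `userResolvers.ts` for the real code); the `WrappedResults` interface name is made up for illustration:

```typescript
// Sketch reconstructed from the explanation above — not the real
// ForumMagnum code. Field names follow the description; the
// WrappedResults type is a hypothetical stand-in.
interface WrappedResults {
  engagementPercentile: number; // assumed to be in [0, 1]
  commentCount: number;
  postCount: number;
}

function getAlignment(results: WrappedResults): string {
  // Good/evil axis: driven by engagement percentile
  let goodEvil = "neutral";
  if (results.engagementPercentile < 0.33) goodEvil = "evil";
  if (results.engagementPercentile > 0.66) goodEvil = "good";

  // Lawful/chaotic axis: driven by the comments-to-posts ratio
  let lawfulChaotic = "Neutral";
  const ratio = results.commentCount / results.postCount;
  if (ratio < 3) lawfulChaotic = "Chaotic";
  if (ratio > 6) lawfulChaotic = "Lawful";

  // Both axes in the middle band collapse to "True neutral"
  if (lawfulChaotic === "Neutral" && goodEvil === "neutral") {
    return "True neutral";
  }
  return `${lawfulChaotic} ${goodEvil}`;
}
```

So, for example, a user in the top engagement third who comments ten times per post would come out as “Lawful good”, under the assumptions above.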
I think it’s accurate, but I don’t know if it’s any clearer than the code.
Here’s a shitty table that I think is clearer
Actually, ChatGPT does a decent job at that
This is arguably better than my table:
No, I think your table is substantially better than ChatGPT’s, because it factors the two alignment dimensions out into two spatial dimensions.
I have to squint a lot to see the sense in this mapping
I don’t think it’s meant to be taken seriously, just some whimsical easter egg