create a descriptive model of the EA community, not a normative one of the idealized EA community.
Ok.
How do I calculate upbias?
Take the average of values estimated by editors and users familiar with emotional reasoning/marketing tricks, or host a focus group discussion and agree on a number (using human intelligence to calibrate and weight participants' estimates based on their arguments and demonstrated relevant skills).
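A minimal sketch of the averaging step described above. The weights are an assumption: the text leaves the calibration of participants' estimates (by argument quality and relevant skills) to human judgment, so here they are just illustrative numbers.

```python
# Hedged sketch: combine participants' upbias estimates into one number.
# Weights standing in for "calibrate and weight participants' estimates
# based on their arguments and relevant skills" are hypothetical.

def upbias(estimates, weights=None):
    """Weighted average of participants' upbias estimates."""
    if weights is None:
        weights = [1.0] * len(estimates)  # plain average by default
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Three reviewers estimate the share of appeal due to emotional tricks;
# the second estimate gets double weight for a stronger argument.
print(upbias([0.2, 0.35, 0.25], weights=[1, 2, 1]))  # → 0.2875
```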
Thanks for reviewing the books. In case you are interested, I made reading questions for 5 of them.
GPT-3/J: I see. So, the 2⁄3 non-academic sources reduce critical reasoning through attention-captivating tricks while borrowing the legitimacy of the 1⁄3 academic sources, ah hah (this can be read as an exaggeration). The inclusion of academic sources also makes arguing against bias less thinkable (due to a 'respectful/less questioning' approach to academics' claims and trust in their neutrality and comprehensive coverage of important topics). This makes me think: is an academic text selected for being a 'conversation ender,' including via biased norm perpetuation, rather than an invitation to an inclusive, solutions-oriented discourse about topics that especially concern disadvantaged groups? However, it can be a positive step toward GPT-n, which would use 50% academic sources (international), 15% investigative journalism, 10% non-western newspapers and the UN website with its links, and 5% impact investors' sites, NGO sites, and anything nodal in rationality thinking.
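The proposed GPT-n source mix can be written out as a weight table; note that the listed shares sum to 0.80, and the comment does not say what fills the remaining 20%, so it is left unassigned in this sketch.

```python
# Hedged sketch: the GPT-n source shares proposed in the comment above.
# The remaining 20% is unspecified in the original text, so it is
# deliberately not assigned to any category here.

source_mix = {
    "academic sources (international)": 0.50,
    "investigative journalism": 0.15,
    "non-western newspapers + UN website": 0.10,
    "impact investors, NGOs, rationality nodes": 0.05,
}

assigned = sum(source_mix.values())
print(f"assigned: {assigned:.2f}, unassigned: {1 - assigned:.2f}")
# → assigned: 0.80, unassigned: 0.20
```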
Also, I may be biased about the GPT-J name stepping up aggression or threat (the category paying attention and renarrating it as cool). I mean, it's possibly just a bias, don't worry about it.
Hmmm .. that is a great question. I have not reviewed the SSC or similar websites in detail but would imagine that the posts get people to start thinking about EA-related topics (rather than being for those already up to speed). It can make sense that a post which only hints at some EA topics would not get on the EA Forum (or not be highly upvoted); however, it is also possible that these posts discuss important EA-related topics but are just not linked (such as Beware Systemic Change). Sure, tracking the frequency of linking (e.g. Beware Systemic Change seems popular) can work for external pieces that are not cross-posted or summarized as posts. Even though the Meditations on Moloch and Seeing Like a State summaries can seem more on the 'starting to think about EA' side, they are also linked on the Forum, so maybe the current thinking in EA includes a range of viewpoints based on different experiences with EA.
Cool cool. So simple to just read everything at once …
Thanks for your thoughts.