The goal is not to create a model to create the most good. While aligning an AI with values and principles could be a potentially interesting project, the goal of this project is to create a descriptive model of the EA community, not a normative one of the idealized EA community.
I believe GPT-3 can do more than memorize specific objectives like malaria nets. Infusing principles deeply would need to happen using more sophisticated techniques, probably post-finetuning.
upbias (-1, 1) is the Forum editors’ or users’ estimate of the fraction of upvotes that happened due to fear, other negative emotions, or limited critical thinking that the post motivated, rather than due to its substance
How do I calculate upbias?
Thank you for the books to use in the dataset. I will review each of them.
The original GPT-3 was trained largely on a web crawl known as Common Crawl. Users on the internet especially tend to optimize for attention. Unlike GPT-3, around a third of GPT-J’s training set is academic sources.
The SSC blog includes posts like Meditations on Moloch or the review of Seeing Like a State. These seem like perspectives important to the EA community. Are you suggesting I include posts based on whether they’re linked from the EA Forum frequently?
I’ll try to crawl the EA Funds’ grant program as well.
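A minimal crawl sketch of the link-extraction step, using only the standard library. The base URL and the `/grants/` path convention are hypothetical placeholders, not the real EA Funds site layout:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class GrantLinkParser(HTMLParser):
    """Collect href targets of anchor tags that look like grant pages."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                # Hypothetical convention: grant pages live under /grants/.
                if name == "href" and value and "/grants/" in value:
                    self.links.append(urljoin(self.base_url, value))

def extract_grant_links(html, base_url):
    """Return absolute URLs of grant pages found in the given HTML."""
    parser = GrantLinkParser(base_url)
    parser.feed(html)
    return parser.links
```

Fetching the index page and each extracted link (e.g. with `urllib.request`) and saving the page text would then build the dataset entries.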
create a descriptive model of the EA community, not a normative one of the idealized EA community.
Ok.
How do I calculate upbias?
Take the average of the values estimated by editors and users familiar with emotional-reasoning/marketing tricks, or host a focus-group discussion and agree on a number (using human intelligence to calibrate and weigh participants’ estimates based on their arguments and demonstrated relevant skills).
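As a minimal sketch of the averaging approach described above (the per-rater skill weights are a hypothetical parameter, standing in for the human calibration step):

```python
def upbias(estimates, weights=None):
    """Aggregate raters' upbias estimates (each in (-1, 1)) into one score.

    weights: optional per-rater weights reflecting argument quality and
    relevant skills; defaults to a plain average.
    """
    if weights is None:
        weights = [1.0] * len(estimates)
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Three raters; the more experienced third rater gets double weight.
score = upbias([0.2, 0.4, 0.3], weights=[1.0, 1.0, 2.0])
print(round(score, 3))  # 0.3
```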
Thanks for reviewing the books. In case you are interested, I made reading questions for five of them.
GPT-3/J: I see. So the 2⁄3 web-crawl portion reduces critical reasoning with attention-captivating tricks while borrowing legitimacy from the 1⁄3 academic sources, ah hah (this can be read as an exaggeration). The inclusion of academic sources also makes arguing against bias less thinkable (due to a ‘respectful/less questioning’ reception of academics’ claims and trust in their neutrality and comprehensive coverage of important topics; which makes me wonder: is the academic text selected for being a ‘conversation ender,’ including via biased norm perpetuation, rather than an invitation to an inclusive, solutions-oriented discourse about topics that especially concern disadvantaged groups?). Still, it can be a positive step toward GPT-n, which uses 50% academic sources (international), 15% investigative journalism, 10% non-western newspapers and the UN website with its links, and 5% impact investors’ sites, NGO sites, and anything nodal in rationality thinking.
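The proposed GPT-n mix can be written as sampling weights. Note the listed shares sum to 80%, leaving 20% unspecified; normalizing over the named sources is one hypothetical way to sample from them alone:

```python
# Proposed GPT-n training mix from the message above (shares of the corpus).
mix = {
    "academic (international)": 0.50,
    "investigative journalism": 0.15,
    "non-western newspapers + UN site": 0.10,
    "impact investors / NGOs / rationality": 0.05,
}

named = sum(mix.values())  # 0.80 -- 20% of the corpus is left unspecified
normalized = {k: v / named for k, v in mix.items()}  # named sources only
print(f"{named:.2f}")  # 0.80
```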
Also, I may just be biased about the GPT-J name reading as stepped-up aggression or threat (the category of paying attention to it and renarrating it as cool). It’s possibly just a bias, so don’t worry about it.
Hmmm, that is a great question. I have not reviewed SSC or similar websites in detail, but I would imagine the posts get people to start thinking about EA-related topics (rather than being for those already up to speed). It makes sense that a post which only hints at some EA topics would not get onto the EA Forum (or would not be highly upvoted); however, it is also possible that such posts discuss important EA-related topics but are just not linked (such as Beware Systemic Change). Sure, the frequency of linking (e.g. Beware Systemic Change seems popular) can work as a criterion for external pieces that are not reposted or summarized as Forum posts. Even though the Meditations on Moloch and Seeing Like a State summaries can seem more on the ‘starting to think about EA’ side, they are also linked on the Forum, so maybe current thinking in EA includes a range of viewpoints based on different experiences with EA.
Cool cool. So simple to just read everything at once …
Thanks for your thoughts.