I used a model that I fine-tuned, in order to generate takes on Effective Altruism.
This model was not fine-tuned specifically for Effective Altruism. It was developed to explore the effects of training language models on a Twitter account. I was surprised and concerned to find that it could generate remarkable takes on Effective Altruism, even though the topic was not present in the original dataset. Furthermore, these takes are always critical.
This particular model is a fine-tuned version of OpenAI's davinci. I plan to fine-tune GPT-EA on GPT-NeoX-20B. A predecessor to GPT-EA (GPT-EA-Forum) was trained using a third-party API; I want to train GPT-EA on a cloud platform so that I can download a copy of the weights myself. I am not currently receiving technical support or funding for GPU costs, and either would be helpful. I selected and cleaned the dataset myself, with input from community members, and I'm still looking for further community input.
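For readers curious what the planned GPT-NeoX-20B run would involve, here is a minimal sketch using the Hugging Face transformers library. The dataset file name, hyperparameters, and output path are hypothetical placeholders, not the actual GPT-EA configuration:

```python
# Minimal sketch: causal-LM fine-tuning of GPT-NeoX-20B with Hugging Face
# Transformers. File names and hyperparameters below are placeholders; a 20B
# model realistically requires multi-GPU sharding (e.g. DeepSpeed or FSDP).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    GPTNeoXForCausalLM,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX has no pad token by default

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

# Hypothetical cleaned corpus: one training example per line of plain text.
dataset = load_dataset("text", data_files={"train": "ea_dataset.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-ea",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save a local copy of the fine-tuned weights.
trainer.save_model("gpt-ea")
```

Running the job this way, rather than through a hosted fine-tuning API, is what makes it possible to keep the resulting weights locally.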