Proofreading a job application seems completely fine and socially normal to me, including for content. The thing that crosses a line, by my lights, is having someone (or GPT-4) write it for you.
As a counter-opinion to the above, I would be fine with the use of GPT-4, or even paying a writer. The goal of most initial applications is to assess some of the skills and experience of the individual. As long as that information is accurate, then any system that turns it into a readable application (human or AI) seems fine, and more efficient seems better.
The information this loses is how someone would communicate their skills and experience unassisted, but I’m skeptical that this is valuable in most jobs (and suspect it’s better to test for these kinds of skills later in the process).
More generally, I’m doubtful of the value of any norms that are very hard to enforce and disadvantage scrupulous people (e.g. “don’t use GPT-4” or “only spend x hours on this application”).
Thanks! Some of these are questions such as, “According to the article in the description, we can distinguish between natural, incidental, and agential s-risks. Which type should be the priority of the Center on Long-Term Risk? Why?” Different norms might apply to such questions as opposed to, “Why do you want to work for us?” or “What is your experience in the capybara space?”
Thanks! :-D