Very interesting role, but my understanding was that job posts were not meant to be posted on the forum.
My process was to check the “About the forum” link on the left-hand side, see that the “What we discourage” section made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments on those posts.
That’s not my understanding. As the lead moderator, here’s what I’ve told people who ask about job posts:
If we start to have a lot of them such that they’re getting in the way of more discussion-ready content, I’d want to keep them off the frontpage. Right now, we only get them very occasionally, and I’m generally happy to have them be more visible...
...especially if it’s a post like this one which naturally leads to a bunch of discussion of an org’s actual work. (If the job were something like “we need a copyeditor to work on grant reports,” it’s less likely that good discussion follows, and I’d again consider sorting the content differently.)
If I said something at some point that gave you a different impression of our policy here, my apologies!
Before the revamp of the forum, I was asked to take down job ads, but maybe things have changed since then. I personally don’t think it would be good for the forum to become a jobs board, since the community already has several places to post jobs.
I think our policy has been pretty consistent since the revamp (I wasn’t around before then), but it’s plausible that the previous policy led people not to post many jobs.
I also don’t want the Forum to become a job board, but I think an occasional post along the lines of this one seems fine; I’m neither an engineer nor a researcher, but I found it interesting to learn what OpenAI was up to.
I see that people don’t seem to like the policy very much (or maybe think it wasn’t handled consistently before). If anyone who downvoted sees this, would you mind explaining what you didn’t like? Do you think we should simply prohibit all job posts, or make sure they never show up on the front page?
I didn’t downvote, but I could imagine someone thinking Halstead had been ‘tricked’: forced into compliance with a rule that was then revoked without notifying him. If he had been notified, he might have wanted to post his own job adverts over the last few years.
Personally I share your intuitions that the occasional interesting job offer is good, but I don’t know how this public goods problem could be solved. No job ads might be the best solution, for all that I enjoyed this one.
Yeah. Well, not that they cannot be posted, but that they will not be frontpaged by the mods, and instead kept in the personal blog / community section, which has less visibility.
Added: As it currently says on the About page:
Ok, the post is still labelled as ‘front page’ in that case, which seems like it should be changed.
To clarify, Halstead, “Community” is now a tag, not a category on the level of “Frontpage”. Posts tagged “Community” will still either be “Frontpage” or “Personal Blog”.
This comment is a bit out of date (though I think it was made before I made this edit). The current language is:
We don’t hide all “community” posts by default, but they will generally be less prominent on the front page unless a user changes the weighting themselves.
Thank you for posting this, Paul. I have questions about two different aspects.
In the beginning of your post you suggest that this is “the real thing” and that these systems “could pose an existential risk if scaled up”.
I personally, and I believe other members of the community, would like to learn more about your reasoning.
In particular, do you think that GPT-3 specifically could pose an existential risk (for example, if it falls into the wrong hands or is scaled up sufficiently)? If so, why, and what is a plausible mechanism by which it poses an x-risk?
On a different matter, what does aligning GPT-3 (or similar systems) mean for you concretely? What would the optimal result of your team’s work look like?
(This question assumes that GPT-3 is indeed a “prosaic” AI system, and that we will not gain a fundamental understanding of intelligence by this work.)
Thanks again!
I think that a scaled-up version of GPT-3 can be directly applied to problems like “Here’s a situation. Here’s the desired result. What action will achieve that result?” (E.g. you can already use it to get answers like “What copy will get the user to subscribe to our newsletter?”, and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)
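A minimal sketch of what this kind of use might look like, purely as an illustration of the pattern described above; `lm_complete` and `lm_score` are hypothetical stand-ins for language-model calls, not any real API:

```python
from typing import Callable, List

def propose_actions(situation: str, goal: str, n: int,
                    lm_complete: Callable[[str], str]) -> List[str]:
    """Ask the model for n candidate actions given a situation and a goal."""
    prompt = f"Situation: {situation}\nDesired result: {goal}\nProposed action:"
    return [lm_complete(prompt) for _ in range(n)]

def pick_best_action(situation: str, goal: str, candidates: List[str],
                     lm_score: Callable[[str], float]) -> str:
    """A 'very simple search': score each candidate by the model's own
    judgment that it achieves the goal, and return the highest-scoring one."""
    def score(action: str) -> float:
        query = (f"Situation: {situation}\nAction taken: {action}\n"
                 f"Did this achieve the desired result ({goal})? Answer: yes")
        return lm_score(query)  # e.g. log-probability of the "yes" continuation
    return max(candidates, key=score)
```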
I think that if GPT-3 were more powerful, then many people would apply it to problems like that. I’m concerned that such systems will then be much better at steering the future than humans are, and that none of these systems will be actually trying to help people get what they want.
A bunch of people have written about this scenario and whether/how it could be risky. I wish that I had better writing to refer people to. Here’s a post I wrote last year to try to communicate what I’m concerned about.
Thanks for the response.
I believe this answers the first part: why GPT-3 specifically could pose an x-risk.
Did you or anyone else ever write about what aligning a system like GPT-3 looks like? I have to admit that it’s hard for me to even have a definition of being (intent) aligned for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?
Quick question—are these positions relevant as remote positions (not in the US)?
(I wrote this comment separately, because I think it will be interesting to a different, and probably smaller, group of people than the other one.)
Hires would need to be able to move to the US.
Hi, quick question, not sure this is the best place for it but curious:
Does work to “align GPT-3” include work to identify the most egregious uses for GPT-3 and develop countermeasures?
Cheers
No, I’m talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is “trying” to do what we want. We are a relatively focused technical team, and we represent only a minority of the organization’s investment in safety and preparedness.
The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.
Hi Paul, I messaged you privately.