Executive summary: SociaLLM is a proposed language model architecture for building personalized AI applications, conducting social science research, and pursuing AI safety goals.
Key points:
SociaLLM tracks separate message streams related to conversations, individual users, and user pairs to enable personalization.
It could power apps for comment reordering, recommendations, customer service, education, mental health counseling, media analysis, and more.
The model facilitates research into language, theory of mind, group dynamics, information flow, and collective intelligence.
Studying deception and collusion with SociaLLM may inform techniques to prevent undesirable behavior in AI teams.
Open questions remain around optimally engineering SociaLLM blocks and measuring information content.
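To make the stream-tracking idea above concrete, here is a minimal, purely illustrative sketch of routing each message into three indices — per conversation, per user, and per user pair. All names (`StreamTracker`, `add_message`) are hypothetical; the post does not specify an interface, and the real architecture would operate on model-internal representations rather than raw message lists.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class StreamTracker:
    """Hypothetical bookkeeping for the three stream types SociaLLM tracks."""
    conversation_streams: dict = field(default_factory=lambda: defaultdict(list))
    user_streams: dict = field(default_factory=lambda: defaultdict(list))
    pair_streams: dict = field(default_factory=lambda: defaultdict(list))

    def add_message(self, conversation_id, sender, recipients, text):
        # Every message lands in its conversation's stream...
        self.conversation_streams[conversation_id].append((sender, text))
        # ...in the sender's individual stream...
        self.user_streams[sender].append((conversation_id, text))
        # ...and in one stream per (sender, recipient) pair, unordered.
        for recipient in recipients:
            pair = tuple(sorted((sender, recipient)))
            self.pair_streams[pair].append((conversation_id, sender, text))

tracker = StreamTracker()
tracker.add_message("thread-1", "alice", ["bob"], "Hi Bob!")
tracker.add_message("thread-1", "bob", ["alice"], "Hi Alice!")
```

After these two calls, the conversation stream for `"thread-1"` and the pair stream for `("alice", "bob")` each hold both messages, while each user's individual stream holds only their own.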
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Announcement
I think SociaLLM has a good chance of getting OpenAI’s “Research into Agentic AI Systems” grant because it addresses two of the challenges the grant targets. First, the legibility of an AI agent’s behaviour: the weight sharing and the regularisation techniques/inductive biases described in the post make the agent’s behaviour more “human-like”. Second, automatic monitoring: duplicity or deception in an agent’s behaviour could be detected by comparing the agent’s ToMs “in the eyes” of different interlocutors, building on the work “Collective Intelligence in Human-AI Teams”.
I am looking for co-investigators for this (up to $100k, up to 8 months long) project with hands-on academic or practical experience in DL training (preferably), ML, Bayesian statistics, or NLP. The deadline for the grant application itself is the 20th of January, so I need to find a co-investigator by the 15th of January.
The co-investigator should also, preferably, currently be in academia, at a non-profit, or independent.
I plan to be hands-on during the project in data preparation (cleansing, generation by other LLMs, etc.) and in training, too. However, I don’t have any prior experience with DL training, so applying for the project alone would be a significant risk and a likely rejection.
If the project is successful, it could later be extended for further grants or turned into a startup.
If the project is not a good fit for you but you know someone who may be interested, I’d appreciate it a lot if you shared this with them or within your academic network!
Please reach out to me in DMs or at leventov.ru@gmail.com.