My syllabus on Artificial Social Intelligence (draft)
Sociology meets AI
Below is a draft syllabus for my seminar on Artificial Social Intelligence. The seminar starts from a simple reversal of a common assumption in AI: society comes first, and mind is one of its achievements, not its precondition. If that is right, then building the next wave of AI will not mean scaling up an individual cognitive model. It will mean building systems of agents that can take part in, and help generate, forms of social life.
Sociologists have been thinking in this direction for a long time. In the early 1990s, Randall Collins, for example, articulated a theoretical model of what a genuinely sociological AI would look like, and his approach now feels newly practical. One way to restate it is to notice that the editorial decision to title Mead’s lectures Mind, Self, and Society has Mead exactly backwards: mind and self are downstream of society.
I’ll probably write posts about these pieces as we work through them in the seminar, but for now I wanted to share the draft to get feedback and spark a conversation. The course pairs theoretical work on the social formation of mind, interaction, knowledge, and social order with experiments that try to build those insights into AI design. What would it mean to construct a social chain of thought, or social reward functions, alongside the now-standard cognitive ones? What would it mean to design agents that can recognize and modulate different forms of sociality, align to one another, define themselves through taking on the roles of others, or generate larger functional orders, built on self-stabilizing feedback mechanisms?
You can access the readings here.
Beyond the Singularity: Building Artificial Social Intelligence
A Graduate Seminar on the Sociological Foundations of AI
The ambition driving much artificial intelligence is to build a god: a singular, all-knowing superintelligence. This seminar begins from the premise that this goal is not only misguided but also ignores a fundamental insight of classical sociology: intelligence is social. We will advance the proposition that true superintelligence (often referred to as ASI), should it ever exist, will not be a monolithic entity but a fundamentally social and distributed system.
We will seek to replace the prevailing image of Artificial General Intelligence (AGI) – a gigantic individual mind – with a richer and more generative concept: Artificial Social Intelligence (ASI). We will explore the idea that the future of AI lies in networks of intelligent agents that must learn to creatively and situationally understand, coordinate, and sometimes compete with each other and with us. While computer science often approaches this topic through Multi-Agent Systems (MAS), our focus is distinct and complementary: we will use classical sociology as a launching point to reimagine AI, focusing on the underlying capabilities and orientations that make multi-agent interaction possible and more or less effective.
Our sessions will be highly interactive, combining deep dives into social theory with practical experimental design. We will engage with foundational sociological texts on topics such as:
The social nature of intelligence and the nature of social intelligence
The effect of social organization on individual capabilities
The social conditions of creativity
The social origins and functions of values
The nature of social order and the forms of social interaction
Building on these theoretical discussions, we will collaboratively design experiments to:
Evaluate how inherently social current AI models are, using novel benchmarks.
Develop and assess methods for enhancing the social capabilities of these models.
Measure the impact of improved social intelligence on outcomes in fields like medical diagnosis and complex problem-solving.
Contribute to the sociological re-engineering of AI architecture, potentially offering new approaches to unsolved problems like the “credit assignment” challenge in multi-agent reinforcement learning (MARL).
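To make the "credit assignment" challenge concrete: when a team of agents shares a single collective outcome, no individual agent can tell how much of that outcome it caused, so individual learning signals are noisy. The sketch below illustrates one standard remedy from the MARL literature, difference rewards (Wolpert & Tumer's counterfactual credit), not a method from this seminar; the "topic coverage" scenario and all function names are invented for illustration.

```python
# Toy illustration of multi-agent credit assignment via "difference
# rewards": each agent is credited with the counterfactual change its
# action makes to the team outcome. Scenario and names are hypothetical.

def team_reward(actions):
    """Global objective: number of distinct topics the team covers."""
    return len(set(actions))

def difference_reward(actions, i):
    """Credit for agent i: the team score with its action, minus the
    team score had agent i contributed nothing (the counterfactual)."""
    counterfactual = actions[:i] + actions[i + 1:]
    return team_reward(actions) - team_reward(counterfactual)

actions = ["A", "A", "B", "C"]  # two agents duplicate topic "A"
print(team_reward(actions))     # -> 3
print([difference_reward(actions, i) for i in range(len(actions))])
# -> [0, 0, 1, 1]: the duplicating agents receive no individual credit
```

The sociological resonance is direct: an agent's "contribution" is defined only relative to what the rest of the collective is already doing, not as an individual property.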
Throughout, we will explore the implications of these results for ongoing challenges in AI system design, such as “alignment” and “sycophancy.”
A key component of this seminar is hands-on access to https://www.chatstorm.io, a unique platform for ASI research we are developing. No advanced technical background is necessary to participate, and your work will directly influence the platform’s evolution.
At the same time, the seminar will explore how the effort to build ASI feeds back to enrich social theory. The act of translating our concepts into computational models forces a high level of analytical rigor, exposing ambiguities and inspiring refinement. This process can yield more dynamic social simulations, but its value is more fundamental. By attempting to construct the social from the ground up, we create a powerful new context for testing, extending, and innovating in sociological theory.
This seminar runs in parallel with a complementary course in the Faculty of Information, which will focus on technical operationalization and implementation. Participants are strongly encouraged (though not required) to attend both and to form teams that unite sociologists with engineers and computer scientists.
Weekly readings (evolving draft)
Week 1 — From Singularity to Sociality (framing + surveys)
Schrape (2025) “Artificial Intelligence and Social Action”
Griggs (2025) “The Plurality: A Better Myth for AI.”
Farrell (2025) “Large language models are cultural technologies. What might that mean?”
Sutton (2019) “The Bitter Lesson.”
Xu, Y. et al. (2024) “AI for Social Science and the Social Science of AI: A Survey.”
Farrell, Gopnik, et al. (2025) “Large AI Models are Cultural and Social Technologies.”
Week 2 — Sociology × AI (disciplinary foundations)
Woolgar (1985) “Why not a sociology of machines?”
Bainbridge, W. S., et al. (1986) “Artificial Social Intelligence.”
Carley (1996) “Artificial Intelligence within Sociology.”
G. Nigel Gilbert, Christian Heath. 1985. Introduction to “Social action and artificial intelligence.”
Hayles, How We Became Posthuman, selections
Supplemental:
Jean-Pierre Dupuy (2000). On the Origins of Cognitive Science: The Mechanization of the Mind (selections)
David Good. “Sociology and AI: the lesson from Social Psychology,” in Social action and artificial intelligence
Wang, Dong, Lanyu Shang, and Yang Zhang. 2025. Social Intelligence: The New Frontier of Integrating Human Intelligence and Artificial Intelligence in Social Space. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-90080-8.
Week 3 — Social Constitution of Mind I (selves, internalization, symbols)
Mead, Mind, Self, and Society (selections)
Collins, R. (1992) “Can Sociology Create an Artificial Intelligence?”
Simmel, “How Is Society Possible?” (three a prioris).
Shanahan, Murray, Kyle McDonell, and Laria Reynolds. 2023. “Role-Play with Large Language Models.” arXiv:2305.16367. https://doi.org/10.48550/arXiv.2305.16367
Supplemental
Mökander, Jakob, and Ralph Schroeder. 2022. “AI and Social Theory.” AI & SOCIETY 37 (4): 1337–51. https://doi.org/10.1007/s00146-021-01222-z.
Huebner, Dan. Reintroducing Mead (selections)
Lechner, F., & Silver, D. (2022). “Georg Simmel’s ‘How Is Society Possible?’: A Dialogue.” Simmel Studies, 26(2)
Week 4 — Social Constitution of Mind II (categories, constitutive orders, collectives, context)
Durkheim, The Elementary Forms of Religious Life (selections on social constitution of categories of understanding).
Rawls, A. “Durkheim’s Epistemology: The Neglected Argument.”
Rawls, Anne Warfield. 2011. “Wittgenstein, Durkheim, Garfinkel and Winch: Constitutive Orders of Sensemaking…” Journal for the Theory of Social Behaviour 41 (4): 396–418. https://doi.org/10.1111/j.1468-5914.2011.00471.x.
Matt Ratto. 2025. “Artificial Intelligence as Bounded and Contextual.”
Mathur, Leena, Paul Pu Liang, and Louis-Philippe Morency. 2024. “Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions.”
Supplemental:
John Bateman, “The Role of Language in the Maintenance of Intersubjectivity,” in Social Action and Artificial Intelligence.
Oldman and Drucker. “The non-reducibility of ethno-methods: can people and computers form a society.” in Social Action and Artificial Intelligence.
Week 5 — Cultural and Developmental Genesis of Mind
Tomasello (1999) The Cultural Origins of Human Cognition (intro)
Heyes, C. (2018). Cognitive gadgets: The cultural evolution of thinking. Harvard University Press (selections)
Vygotsky, Mind in Society (selections).
Bingley, William J., S. Alexander Haslam, and Janet Wiles. n.d. Socially-Minded Intelligence: How Individuals, Groups, and AI Systems Can Make Each Other Smarter (or Not).
Bokanga, Maurice, Alessandra Lembo, and John Levi Martin. 2023. “Through a Scanner Darkly: Machine Sentience and the Language Virus.” Journal of Social Computing 4 (4): 254–69. https://doi.org/10.23919/JSC.2023.0024.
Supplemental:
Li, Cheng, Mengzhou Chen, Jindong Wang, Sunayana Sitaram, and Xing Xie. 2024. “CultureLLM: Incorporating Cultural Differences into Large Language Models.” arXiv:2402.10946. Preprint, arXiv, December 3. https://doi.org/10.48550/arXiv.2402.10946.
Week 6 — Cognition as a distributed and extended system
Hutchins (1995) Cognition in the Wild (ch. 1 + navigation vignette).
Clark & Chalmers (1998) “The Extended Mind.”
Lai, Shiyang, Yujin Potter, Junsol Kim, Richard Zhuang, Dawn Song, and James Evans. 2024. “Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation.”
Santoro, Adam, Andrew Lampinen, Kory Mathewson, Timothy Lillicrap, and David Raposo. 2022. “Symbolic Behaviour in Artificial Intelligence.” arXiv:2102.03406. https://doi.org/10.48550/arXiv.2102.03406
Supplemental
Clark (2008) Supersizing the Mind, ch. 1.
Doran (1985) “The Computational Approach to Knowledge, Communication and Structure in Multi-Actor Systems,” in Social Action and Artificial Intelligence.
Chen et al. (2024) “LLMArena: Assessing Capabilities of LLMs in Dynamic Multi-Agent Environments.”
Chen et al. (2023) “AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors.”
Week 7 — Sociality of Knowledge I (pragmatism & social validation)
Heath, Joseph. Critical Theory: A Rational History. (Selections)
Weatherby, Leif. 2025. Language Machines (selections).
Peirce, “The Fixation of Belief” & “How to Make Our Ideas Clear.”
John Dewey, How We Think (selections)
Manning, Benjamin S., and John J. Horton. 2025. General Social Agents. September 8. https://benjaminmanning.io/files/optimize.pdf.
Supplemental
Misak, C. J. (2004). Truth and the End of Inquiry: A Peircean Account of Truth. Oxford: Clarendon Press.
Liang, Tian, Zhiwei He, Jen-tse Huang, et al. 2023. “Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models.” arXiv:2310.20499. https://doi.org/10.48550/arXiv.2310.20499
Jeff Coulter, “On Comprehension and Mental Representation,” in Social Action and Artificial Intelligence.
Week 8 — Sociality of Knowledge II (justification, discourse, reasons)
Habermas, “Discourse Ethics: Notes on a Program of Philosophical Justification” (in Moral Consciousness and Communicative Action, 1990)
Brandom, Articulating Reasons (introduction)
Martin, John Levi. 2023. “The Ethico-Political Universe of ChatGPT.” Journal of Social Computing 4 (1): 1–11. https://doi.org/10.23919/JSC.2023.0003
Wu, Yusen, Junwu Xiong, and Xiaotie Deng. 2025. “How Social Is It? A Benchmark for LLMs’ Capabilities in Multi-User Multi-Turn Social Agent Tasks.” arXiv:2505.04628 https://doi.org/10.48550/arXiv.2505.04628
Bao et al. (2025) “Language Models Surface the Unwritten Code of Science and Society.”
Supplemental
Huang, Saffron, and Divya Siddarth. 2023. “Generative AI and the Digital Commons.”arXiv:2303.11074. https://doi.org/10.48550/arXiv.2303.11074
Thomas McCarthy, The Critical Theory of Jürgen Habermas (1978). (Selections)
Week 9 — Interaction I (ritual, turn-taking, and creativity)
Goffman, “On Face-Work” and “The Nature of Deference and Demeanor” (selections)
Sacks, Schegloff & Jefferson (1974) “A Simplest Systematics for the Organization of Turn-Taking.”
Joas, Hans. The Creativity of Action (situation-corporeality-sociality)
Chen, Hongzhan, Hehong Chen, Ming Yan, et al. 2024. “SocialBench: Sociality Evaluation of Role-Playing Conversational Agents.” arXiv:2403.13679. https://doi.org/10.48550/arXiv.2403.13679
Supplemental
Zhang, Jintian, Xin Xu, Ningyu Zhang, Ruibo Liu, Bryan Hooi, and Shumin Deng. 2024. “Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View.” arXiv:2310.02124. https://doi.org/10.48550/arXiv.2310.02124
Michael McTear. “Breakdown and repair in naturally occurring conversation and human-computer dialogue.” In Social Action and Artificial Intelligence.
Week 10 — Interaction II (trust, conflict, secrecy)
Garfinkel (1967) Studies in Ethnomethodology (selections on trust and breaching).
Simmel, Georg. The Sociology of Secrecy and of Secret Societies.
Simmel, Georg. The Sociology of Conflict
Coleman. 1957. The dynamics of community controversy.
Gu, Zhouhong, Xiaoxuan Zhu, Haoran Guo, et al. 2024. “AgentGroupChat: An Interactive Group Chat Simulacra For Better Eliciting Emergent Behavior.” arXiv:2403.13433. https://doi.org/10.48550/arXiv.2403.13433
Supplemental
Kozlowski, Austin C., and James Evans. 2025. “Simulating Subjects: The Promise and Peril of Artificial Intelligence Stand-Ins for Social Agents and Interactions.” Sociological Methods & Research 54 (3): 1017–73. https://doi.org/10.1177/00491241251337316
Shevlin, H. (2024). All too human? Identifying and mitigating ethical risks of Social AI
Devlin, Kate. 2024. “Relating with Social Robots: Issues of Sex, Love, Intimacy, Emotion, Attachment, and Companionship.” In The De Gruyter Handbook of Robots in Society and Culture, eds. Fortunati & Edwards. https://doi.org/10.1515/9783110792270-015
Week 11 — Systems & Order
Parsons (1951) The Social System, ch. 2 “The Problem of Order in Social Systems.”
Parsons and Shils (1951) Towards a General Theory of Action, selections
Niklas Luhmann, selections from Introduction to Systems Theory
Merton, “Role Sets”
Ashery, Ariel Flint, Luca Maria Aiello, and Andrea Baronchelli. 2025. “Emergent Social Conventions and Collective Bias in LLM Populations.” Science Advances.
Supplemental
Yan et al. (2025) “Beyond Self-Talk: A Communication-Centric Survey of Multi-Agent Systems.”
Alexander (1983) Twenty Lectures (esp. Lectures 3-6).
Week 12 — Assembling the social AI
Latour (2005) Reassembling the Social
Brandon Griggs (2025). “Schrödinger’s Chatbot.”
Anthropic (2024) “How We Built Our Multi-Agent Research System.”
Scott, S. V., & Orlikowski, W. J. (2025). Exploring AI-in-the-making: Sociomaterial genealogies of AI performativity. Information and Organization, 35(1), 100558.
Supplemental:
Ronald Stamper. “Knowledge as Action: A Logic of Social Norms and Individual Affordances.” In Social Action and Artificial Intelligence (Gilbert & Heath, eds., 1985).
Mou et al. (2024) “From Individual to Society: A Survey on Social Simulation Driven by LLM Agents.”
Mamie & Rao (2025) “The Society of HiveMind: Multi-Agent Optimization of Foundation Model Swarms.”


Regarding the topic of the article: the central thesis, that AI development must be grounded in social structures rather than individual cognition, represents a profoundly correct and necessary reorientation for the field.
Would Harry Collins' "Tacit and Explicit Knowledge" be worth including somewhere in this syllabus? I was never particularly convinced by his suggestion that the *real* kind of irreducibly tacit knowledge is our knowledge of how to interact socially, but it strikes me that the idea might be worth reconsidering in the age of LLMs.
A larger topic area that I would consider even more important for inclusion is cognitive specialization and the problems of interdisciplinary communication. Part of the felt strangeness of AI systems is that their "expertise" (such as it is) doesn't result from a process of disciplinary specialization, so their output can't be integrated into society in the same way as the output of human cognitive workers. But reflecting on how societies have traditionally integrated specialized cognitive work by humans can at least provide a starting point for thinking about how they might handle new kinds of cognitive "producers."
(I'm thinking of Peter Galison on scientific trading zones, Annemarie Mol on the body multiple, Elijah Millgram on serial hyperspecialization... but I assume you have your own favorite readings on this topic area.)