I gave a version of this talk at SPSP 2026 in Chicago. It’s been rattling around in my head since, and I think there’s something here worth putting on paper. I could be wrong. Like, super wrong. But I’d rather be wrong out loud on this one. Hey, we’re all wrong sometimes, and some of us are wrong most of the time. It’s okay.

I say this as someone who straddles computational and social science, and who cares a lot about both sides of that line. I think social psychology is standing at the edge of a genuine paradigm shift and doesn’t realize it — not because the field lacks the sophistication, but because it’s looking at large language models and seeing the wrong thing.

Here’s the problem. Psychology has always been caught in a bind. The field can observe complex social phenomena — belief change, trust formation, intergroup conflict — and it can isolate individual mechanisms in controlled experiments. Framing effects matter. Emotional validation matters. Socratic questioning matters. Source credibility matters. We know all of this. What the field has never been able to do is effectively or compellingly explain how these mechanisms combine, interact, and unfold dynamically over time — the way these processes actually play out between real people in real conversations. The statistical models we build describe which variables predict an outcome. They don’t produce the outcome. They don’t capture the adaptive, sequential, context-dependent structure of the process itself. As it turns out, psychological processes are really complicated.

There’s an old and uncomfortable argument, made first (to my knowledge), or at least most sharply, by Gergen (1973), that social psychology is fundamentally a descriptive and historical enterprise — that its findings are reflections of a particular time and place rather than timeless laws. You don’t have to accept that argument in full to notice that it points at something social psychology really doesn’t like to think about: the field’s tools are better suited to documenting phenomena than to reproducing them. They describe. They predict, in the loosest statistical sense. They explain, after a fashion, though even the most sober social psychologists acknowledge that they don’t do it particularly well.

But they don’t do much in the way of creating — and that has constrained the kinds of questions we can ask, and certainly the translational impact of the discipline. This is not a failure of effort or ingenuity. It’s a structural limitation of the tools psychology has had available. Vignettes, confederates, between-subjects designs — these are instruments for isolating variables, not for producing complex, adaptive psychosocial phenomena in which many complicated variables interact in a dynamic, ecological, and fully interactionist fashion.

Other sciences hit this same wall and broke through it with computational modeling. Climate scientists couldn’t understand atmospheric dynamics by studying pressure, temperature, and humidity in isolation. They needed simulations that combined everything and let them perturb the system to see how it responded. Epidemiologists couldn’t explain disease transmission by cataloging individual risk factors. They needed models of how contagion propagates through networks under varying conditions: models capable of reproducing the phenomena under study. In these cases, the modeling capability didn’t just make existing research faster — it opened up entirely new categories of questions that couldn’t even be formulated before.

Social psychology: business as usual, but faster

Social psychology is engaging with LLMs in two ways, both perfectly valid.

  1. The first is using LLMs as tools — automating coding, generating stimuli, speeding up analyses. This makes existing work faster. It doesn’t change the science; it changes the workflow.
  2. The second is studying LLMs as social stimuli — how people perceive AI-generated text, whether they trust it, what happens when they form relationships with chatbots. This is legitimate “psychology of technology.” But at its core, it’s studying how people react to a new object in their environment — the same game the social sciences have played for every new communication technology since before the telegraph.

Both of these treat LLMs as something to use or something to study. What I’m describing is a third category entirely: LLMs as something to build with — a medium for constructing functional computational models of the psychosocial processes we want to understand.

This is a really critical distinction. Using an LLM to generate stimuli is doing the same science with a better tool. Business as usual, but faster. Surveying people about their experience with ChatGPT is applying existing paradigms to a new stimulus. Building an LLM that reliably produces X effect on people and then systematically probing that system to understand how it works — this is a different kind of inquiry altogether. It asks questions that the field’s existing methods simply cannot formulate.

Moving from a “psychology of technology” to a “technology of psychology”

A “psychology of technology” mindset asks: how do people respond to or use this new thing? A “technology of psychology” stance inverts the question: how do we use this new thing to produce and investigate the phenomena we’ve been trying to understand all along?

The distinction has clear precedent across the natural sciences. When computational modeling entered climate science, the field didn’t just study computers — it used them to build systems that reproduced atmospheric phenomena, and then experimented on those systems to understand how the atmosphere works. When structural biology got protein folding simulations, it didn’t study the simulations — it used them to produce molecular behaviors that could be systematically perturbed and analyzed. In each case, the technology became a medium for generating the phenomenon of interest as a controllable computational object. And in each case, this didn’t simply accelerate existing research — it created forms of inquiry that hadn’t existed before.

That’s essentially the reframing I’m proposing for LLMs in social psychology. Not as a thing to study or a tool to automate tasks with, but as a computational object capable of producing the social processes we care about — belief change, trust formation, conflict dynamics — in a form we can manipulate, constrain, and take apart.

Here’s what I believe this looks like in more tangible terms. Fine-tune an LLM on thousands of real instances of a social process with a known psychosocial outcome — conversations where beliefs actually changed, where trust formed or broke, where conflict escalated or de-escalated. The model doesn’t learn a single strategy; rather, it learns (to one degree or another) the full conditional landscape: what works for this person, in this context, at this point in the interaction. It encodes the adaptive sequencing — the when, the how, the in-what-order — that makes the process work.1
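To make this concrete, here is a minimal sketch of the data-shaping step in Python. Everything in it is hypothetical: the transcripts.json file, its field names, and the outcome threshold are stand-ins for whatever your corpus and outcome measure actually look like. The point is only the shape of the task: pair each conversation with its measured psychological outcome, and keep the instances where the process demonstrably worked.

```python
import json

# Hypothetical input: one record per conversation, with pre/post belief
# ratings collected from the human participant (all field names invented).
with open("transcripts.json") as f:
    conversations = json.load(f)

def belief_shift(record: dict) -> float:
    """Signed change in the participant's rated belief (post minus pre)."""
    return record["post_belief"] - record["pre_belief"]

SHIFT_THRESHOLD = 1.0  # arbitrary; depends entirely on your scale

# Keep conversations where the target outcome actually occurred, and
# reshape them into the chat-style format fine-tuning pipelines expect.
training_examples = [
    {
        "messages": [
            {"role": turn["speaker"], "content": turn["text"]}
            for turn in record["turns"]
        ]
    }
    for record in conversations
    if belief_shift(record) >= SHIFT_THRESHOLD
]

# JSONL: one example per line, the common input format for supervised
# fine-tuning, whether run locally or through a hosted API.
with open("belief_change_sft.jsonl", "w") as f:
    for ex in training_examples:
        f.write(json.dumps(ex) + "\n")
```

A richer version would keep the unsuccessful conversations too, labeled as such, so the model can learn the conditional structure of the process rather than just imitating the wins.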

Of course, this is only meaningful if the model actually learns. Current work in our lab with LLMs and belief change suggests that it does, and that what it learns is not trivial. The result is not a chatbot that happens to be “persuasive” or that mimics the surface features of effective communication. What we appear to have built is a functional computational model: a system that reliably produces a target psychological outcome across people and contexts (yes, “context” is a mushy term; I’m pretending that’s not a problem for now), and it does this through known and well-articulated pathways. As its default behavior, it meets people halfway, reframes misconceptions, elaborates counterpoints, provides illustrative examples… and it does all of this in a fashion that intelligibly tracks as a complex confluence of sequenced behaviors “designed” to restructure a person’s beliefs. We’ve stumbled into developing a computational object that can reproduce a phenomenon of interest: a particular family of psychological outcomes.

My sense is that this is where the new science begins. Once you have a working model, you can do what every other computational modeling science does: systematically take it apart. Ablate capabilities and watch what degrades. Constrain the strategy space and see what changes. Vary inputs and map how the outcome responds. This is the same logic as lesion studies in neuroscience or knockout experiments in genetics — you need a working system that can faithfully reproduce the phenomena of interest before you can break it strategically to understand how it works.
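As a sketch of what taking the system apart might look like in practice, the harness below compares outcome rates across ablation conditions. The model call and the outcome scorer are injected as plain functions, since those parts are specific to whatever system you’ve built; every name and condition here is illustrative, not a real API.

```python
from typing import Callable

# An ablation condition is a constraint imposed on the model, here via
# an instruction appended to the system prompt (conditions are invented).
CONDITIONS = {
    "full_model": "",
    "no_validation": "Never acknowledge or validate the person's feelings.",
    "no_reframing": "Never reframe or restate the person's position.",
    "no_examples": "Never give concrete illustrative examples.",
}

def ablation_sweep(
    run_conversation: Callable[[str], str],   # constraint -> transcript
    outcome_occurred: Callable[[str], bool],  # transcript -> did X happen?
    n_trials: int = 50,
) -> dict[str, float]:
    """Outcome rate per condition: which capabilities, when removed,
    degrade the model's ability to produce the target effect?"""
    rates = {}
    for name, constraint in CONDITIONS.items():
        hits = sum(
            outcome_occurred(run_conversation(constraint))
            for _ in range(n_trials)
        )
        rates[name] = hits / n_trials
    return rates
```

The logic is exactly the knockout experiment: if removing validation alone collapses the outcome rate while removing examples barely moves it, you have a precise, falsifiable hypothesis about mechanism, one you then take back to human participants (see the footnote).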

To be clear, these are not questions “about the model” — not what a colleague of mine aptly calls “GPT-ology.” They are questions about mechanisms and processes: about the structure and dynamics of belief change, trust, conflict resolution — asked through a medium that allows you to manipulate the process in ways that have never been possible with human participants. These are still questions about people, not machines.

I’ll be honest that I’m running partly (well… heavily) on intuition here. But my intuition is that this is not “doing social psychology but with AI.” Rather, it feels like a new way of doing social psychology altogether. New questions. New methods. New paths to understanding the phenomena the field has been circling for decades. If Gergen was right that psychology’s tools have limited it to description and history, then this could be a way out — a way to shift from documenting abstract psychosocial processes to engineering and interrogating them.

Losing sleep on social psychology’s behalf

Social psychology has been slow to engage with major technological shifts before. Social media fundamentally reshaped how humans relate to each other, and the field mostly wrote papers about it afterward. Social psychologists were not, by and large, in the room when the design decisions were made. The global COVID-19 pandemic was a massively social phenomenon, both in terms of disease transmission and psychological consequence. It was undoubtedly — unquestionably — the time for social psychology to bring the discipline to bear on an existential threat. The field murmured in the back of the room, did little to address the problem head-on, and then satisfied itself with attempting to piece things together by studying the aftermath. Again, not for lack of effort, but because the moment laid bare just how flimsy social psychology’s “explanatory” power really is.

I see the same pattern forming with generative AI. The literature is growing, but most of it is studying people’s reactions to AI — not using AI to open up new lines of inquiry into psychological processes themselves. The field is treating the most powerful computational modeling platform it has ever had access to as though it were a new kind of survey vignette. There’s a lot of “nibbling around the edges” but no big leaps forward for psychology as an empirical science as a whole.

Meanwhile, these systems are being built right now — their objectives, their values, the social dynamics they produce — by engineers and product teams, largely without social scientists at the table. For the first time, the tools exist for psychologists to be part of that process. Not to study what these systems do to people after the fact, but to define what they should be optimized for and help build it.

What to do about it

If you’re a social psychologist reading this, here is the most concrete thing I can offer. Pick a phenomenon you care about — belief change, trust calibration, empathy, intergroup contact. Find or build a dataset of real instances where that phenomenon occurred and didn’t occur in conversation. Measure the psychological outcomes. Train a model on it. See if it learns to produce the outcome. If it does, you now have a functional computational model you can probe, constrain, and take apart — a working model of a social process, not a model of language.
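“See if it learns to produce the outcome” can start as something as simple as comparing outcome rates against an untuned baseline on held-out conversations. Here is a minimal sketch using a standard two-proportion z-test; the counts are invented placeholders.

```python
from math import sqrt

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """z-statistic for the difference between two outcome rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented numbers: 100 held-out conversations per condition; the target
# outcome occurred in 62 with the fine-tuned model and 38 with the baseline.
z = two_proportion_z(62, 100, 38, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the difference isn't chance
```

If the difference holds up, you have your functional model to probe. If it doesn’t, you’ve learned something equally useful about what your corpus did and didn’t encode.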

The technical barrier to entry has dropped enormously. The hard part is not the engineering. The hard part is the psychological thinking: what outcome to target, what training data to use, what questions to ask once you have a working system. That is precisely what social psychologists are trained to do, and I don’t think that’s going to change anytime soon. Well, I hope not, anyway.

Social psychology has over a century of theoretical sophistication. Now, for the first time, it has a computational platform that can match it. The question is whether the field steps into that — or spends the next decade studying the wreckage from the outside.

I’d rather not watch that happen. Yet again.

  1. The argument above rests on an assumption that deserves scrutiny. A model that produces a given psychological outcome (e.g., belief change) may or may not be using the same mechanisms that humans use. It operates in language space, not in the full richness of human social cognition. When you ablate a capability and the outcome degrades, you’ve learned something about the model’s functional architecture — not automatically about human psychology. The inferential step from “the model does this” to “humans do this” requires validation. The right framing, then, is that LLMs provide a hypothesis generation and testing platform. Build a system that reliably produces an outcome. Probe it to generate precise hypotheses about which mechanisms matter, in what combinations, under what conditions. Then validate those hypotheses in human studies. The model is a map-maker, not the territory. That’s not a limitation unique to this approach — it’s how computational modeling works in every science that uses it. There are big lessons here from agent-based modeling studies, for example. But, at risk of overstating the comparison: climate models aren’t the atmosphere. They’re how we learn about the atmosphere.
