3 Strong, AI Conjectures about Human Nature

LLMs tell us something about human nature, contra Chomsky. We’re not sure yet what they tell us, but there’s more going on here than big tech’s power play.

What, if anything, does GenAI—especially LLMs—tell us about ourselves? Not my usual fare today, so feel free to skip this one if you aren’t interested in a bit of speculation.

Human Nature and AI

I offer three conjectures—
  1. Human nature exists, is more fixed than fluid (within a range of variability bounded by genetic inheritance), and transcends the particularities of time and place, that is, culture.
    1. Hard as it may be for non-academics or STEM academics to believe that anyone denies these claims, the modern humanities are full of people who do, in traditions going back at least to the 19th century and in some cases to the medieval period.
    2. This denial is at least partially political, since some people think the idea of human nature is at odds with human flourishing. But it is unclear at best how human animals could avoid having something like a nature, given a genetic inheritance molded by evolutionary selection pressures, as is true of every other animal.
  2. Language expresses that nature reliably while also forming one mechanism of its variability. Language describes reality in a way that undergirds our shared social reality.
  3. LLMs and other deep learning algorithms may in principle capture aspects of human nature by distilling and compressing our collective social use of language. After all, their only input beyond raw energy harnessed to compute is something roughly like the socio-linguistic output of billions of language users across scores of natural languages and in every conceivable context of use.
The first two of these conjectures are not original. They derive from a centuries-old debate between—and here I am using shorthand pretty aggressively—an Enlightenment tradition and a skeptical, modern, explicitly counter-Enlightenment tradition.
In case you wonder why I have these views at all, or why I express them publicly here: in my first career, I was a formally trained philosopher focused on interpretation theory (how do we interpret texts?), epistemology (how do we know anything at all?), and religion (what are the sources of ultimate value?). Then I realized I needed to pay my bills and switched to industry AI.
It’s ironic that my conjectures are strongly Enlightenment-derived and that I associate the first two with Noam Chomsky’s embrace of Wilhelm von Humboldt’s views; yet Chomsky fiercely rejects the third, claiming that LLMs tell us nothing about human nature at all; rather, LLMs may tell us something about human power systems like capitalism, but nothing deep about ourselves.
So for me the 3 conjectures are a strange place to find myself since, as a philosopher, I held no view about #3 at all but, with respect to #1 and #2, I held the opposite views—
  1. There is no fixed human nature; we are entirely (or almost so) fluid rather than fixed.
  2. Language offers us very little insight about human nature at all.
In other words, LLMs aren’t just a powerful tool to create enterprise value—what I usually talk about here—they also imply real scientific claims about what our nature is like as creatures.

What LLM Tells Us about Human Nature

So maybe that brings you to ask why LLMs have changed my views so dramatically.
The turning point was a study in Nature—Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns—that shows similarity between LLM internal structures and brain functioning in the language areas. The abstract is worth quoting at length—
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language…We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language…Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
In other words, there are common and predictive geometric structures at work in LLMs and in human brain function.
LLMs tell us something about human nature, and the more we understand them, the more we will learn about language and, hence, human nature. A weak answer is stronger than none at all!
I base the three conjectures on the weak but suggestive evidence of this study that LLMs capture something deeply rooted in the way nature evolved human language functioning.
Against the skeptical view that we are mere acculturated epiphenomena that supervene on the material, the new AI suggests that the brain’s language faculty—and here I return to Chomsky’s views—is the most distinctly human feature of our nature, and that in capturing similar functioning and features, LLMs offer predictive and, eventually, explanatory leverage toward a truly scientific theory of language and of human nature.
Chomsky is right that, for now, our ability to interpret LLM internal function is weak. But LLM interpretability research is advancing rapidly, often driven by the engineering need to control hallucinations. He’s right that engineering isn’t a scientific theory. But the history of science shows that our ability to manipulate reality (“mere engineering”) often arrives long before a fully satisfactory theoretical explanation. Maxwell’s equations, special relativity, and quantum mechanics are three obvious examples: each theory arrived after we could already manipulate the relevant phenomena. There are countless others.
No matter the commercial motivations behind AI and LLM interpretability, if that work allows us to understand how LLMs simulate our language faculty so fluently, it will help us understand that faculty and, hence, our distinctively linguistic, that is, human nature.
That’s what LLMs can tell us about human nature. And the correlations between LLMs and the human language faculty are what the Nature study begins to suggest. Consider the following.
Accurately predicting IFG brain embeddings for the unseen words is viable only if the geometry of the brain embedding space matches the geometry of the contextual embedding space. If there are no common geometric patterns among the brain embeddings and contextual embeddings, learning to map one set of words cannot accurately predict the neural activity for a new, nonoverlapping set of words.
In the zero-shot encoding analysis, we successfully predicted brain embeddings in IFG for words not seen during training (Fig. 2A, blue lines) using contextual embeddings extracted from GPT-2. We correlated the predicted brain embeddings with the actual brain embedding in the test fold.
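To make the zero-shot procedure concrete, here is a minimal sketch in Python of the general idea: learn a linear map from contextual embeddings to brain embeddings on one set of words, then predict the brain embeddings of held-out words and check the predictions. Everything here is an illustrative assumption: the “brain embeddings” are synthetic stand-ins generated from a hidden linear map plus noise, and the study’s actual pipeline (intracranial recordings, GPT-2 activations, its specific folds and metrics) is more involved than this toy.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

n_words, d_llm, d_brain = 1000, 768, 64   # hypothetical sizes, chosen for illustration
W_true = rng.normal(size=(d_llm, d_brain))

# Synthetic stand-ins: "contextual embeddings" and "brain embeddings"
# that share geometry via a hidden linear map plus noise.
X = rng.normal(size=(n_words, d_llm))                        # LLM contextual embeddings
Y = X @ W_true + 0.5 * rng.normal(size=(n_words, d_brain))   # simulated "brain" embeddings

def rowwise_cosine(a, b):
    return np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # Learn a linear map from contextual space to brain space on training words only...
    model = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    # ...then predict brain embeddings for words never seen during training (zero-shot).
    Y_pred = model.predict(X[test_idx])
    scores.append(rowwise_cosine(Y_pred, Y[test_idx]).mean())

print(f"mean cosine similarity, predicted vs. actual held-out 'brain' embeddings: {np.mean(scores):.3f}")
```

If the two spaces shared no geometry (say, if the rows of Y were shuffled relative to X), the held-out similarity would collapse toward zero; that contrast is what makes the zero-shot result informative.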
My conjectures assume that LLMs correlate somehow with the human language faculty because they’re neural networks, however primitive as models of real neurons, that have learned all they know by ingesting and compressing something like “human socio-linguistic reality” in the form of trillions of tokens of language use of a nearly endless variety. That LLMs act very roughly human-like in language use—however flawed they otherwise are or that use otherwise is—suggests to me that language captures or depicts reality in a way faithful to how we capture or depict it mentally. That LLMs trained across different languages and cultures behave similarly suggests that language captures something about our common human nature that transcends culture.
All of this was well understood by the late Enlightenment period, over against more primitive, earlier notions. But in the post-Enlightenment period, the successive decentering of homo sapiens in the cosmos—a process started by Galileo, inherited by Darwin and Nietzsche, and accelerated by Freud and the French post-structuralists, not to mention the ways quantum mechanics was taken up (naively, to be sure, from a scientific perspective) by humanities ripe for misunderstanding quantum uncertainty and the like—meant that old ideas such as “the proper study of man is man” went badly out of fashion. A kind of pan-skepticism about the certainties of the Enlightenment period, not least about human nature, language, texts, and any notion of objectivity in interpersonal or social communicative rationality, came to dominate the humanities and social sciences.
Against that backdrop along come the AI nerds with their Webfuls of data and fancy GPU machines, and suddenly, with something as “simple” as “predicting the next most probable token”, they’re able to simulate a good deal of the human language faculty, however imperfectly, and they’re advancing more rapidly than most can appreciate.
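In case “predicting the next most probable token” sounds abstract, here is a minimal sketch of what that interface looks like in practice, using the open-source Hugging Face transformers library and the small GPT-2 model; the choice of library, model, and prompt is mine, purely for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small, openly available model; any causal language model would do for this illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Human nature is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token, given everything so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p={prob.item():.3f}")
```

Everything a model like this “knows” is expressed through distributions like the one printed here, learned entirely from the statistics of human-produced text.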

LLMs and Brain Function are…Geometrically Similar?

This post is peripheral to Stardog Labs’ work in general and is already too long, but if you’re interested in these issues, you should read The Platonic Representation Hypothesis:
We argue that representations in AI models, particularly deep networks, are converging…We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it…
But you should also attend to some of the other evidence that’s emerging—in the vein of the Nature study quoted already—that suggests there is some kind of structural, i.e., geometric, similarity between how the brain processes language and how LLMs do. Here are three recent studies in this area (a toy sketch of the kind of geometric comparison they rely on follows the list)—
  1. Structural Similarities Between Language Models and Neural Response Measurements
    1. Large language models (LLMs) have complicated internal dynamics, but induce representations of words and phrases whose geometry we can study. Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases. Here we study the extent to which the geometries induced by these representations, share similarities in the context of brain decoding. We find that the larger neural language models get, the more their representations are structurally similar to neural response measurements from brain imaging.
  2. Instruction Tuning Aligns LLMs to the Human Brain
    1. Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on LLM-human similarity in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences. We discover that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment. To identify the factors underlying LLM-brain alignment, we compute correlations between the brain alignment of LLMs and various model properties, such as model size, various problem-solving abilities, and performance on tasks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
  3. Testing theory of mind in large language models and humans—this one is different from the others since it focuses on a different (though obviously related) human faculty, namely, theory of mind, rather than the language faculty. However, it shows a similar pattern of, well, similarity between LLM and human capacities.
    1. Here we compare human and LLM performance on a comprehensive battery of measurements that aim to measure different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with those from a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans…These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.
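The three studies above differ in data and method, but a common move is to compare the geometry of two representation spaces rather than the raw spaces themselves. Here is the toy sketch promised above, in the style of representational similarity analysis: build pairwise-dissimilarity matrices over the same items in each space and correlate them. The “brain” vectors are synthetic (a noisy linear read-out of the model vectors), so the names, sizes, and noise level are all assumptions for illustration, not anything taken from these papers.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_items, d_model, d_brain = 200, 768, 100   # hypothetical sizes

# Model-space representations of n_items words or sentences.
model_vecs = rng.normal(size=(n_items, d_model))

# Synthetic "brain" responses: a random linear read-out of the model space plus noise.
# In the real studies these would come from fMRI or intracranial recordings.
brain_vecs = model_vecs @ rng.normal(size=(d_model, d_brain)) + 2.0 * rng.normal(size=(n_items, d_brain))

# Pairwise dissimilarity structure (the "geometry") of each space.
model_rdm = pdist(model_vecs, metric="correlation")
brain_rdm = pdist(brain_vecs, metric="correlation")

# Second-order similarity: do the two spaces arrange the same items in the same way?
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"Spearman correlation of the two geometries: {rho:.3f}")
```

A high correlation here only says that the two spaces arrange the same items in similar relative positions; it is that second-order, geometric kind of similarity, not identical representations, that the studies above report.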
To be clear, I don’t know precisely what these studies are evidence for, but, contra Chomsky, they are evidence for something significant about human nature.
Further, I am perplexed that, after decades of commitment to empirical inquiry, Chomsky appears to be unaware of the research directly relevant to matters on which he freely comments, in what seems to be a lack of interest in, or disregard for, evidence arising from good science. Perhaps I am failing to give sufficient weight to the truism that everyone gets old eventually?

Why This Matters At All

First, it matters to me because of my past and current interests. If you share any of them, then it probably matters to you.
But if you’re here for more commercial motivations, I think it should matter to you, too, and for a second reason. To the extent that LLMs simulate human language use, all of your accustomed ways of thinking about how to deal with other language users—from the Gricean maxims, to Dennett’s intentional stance, to good old-fashioned skepticism about the unvalidated claims of any language user, real or artificial—will stand you and all AI users in good stead, helping you be less rather than more taken in, deceived, or disappointed when AI turns out to be, well, at least slightly bullshit.
That shouldn’t surprise anyone who has lived on this planet for more than a minute, since the vast majority of us are, over any appreciable time period, slightly bullshit!
