This is Brad DeLong's Grasping Reality—my attempt to make myself, and all of you out there in SubStackLand, smarter by writing where I have Value Above Replacement and shutting up where I do not…

Scott Bessent's LLM System Prompt Has a High Weight on the Word "Vermouth": Thing Worth Noting, for 2026-04-16 Thu
Vermouth, covfefe, & 'vermin': What LLMs Potentially Have to Teach Us About the American Right's Id, from Scott Bessent's "Straits of Vermouth" to Trump's "oranges of the investigation". Our parapraxes are noisy but telling windows into the system prompts running our mental autocompletes…

Slips of the tongue are just slips of the tongue. But they do reveal what your system prompt is, when you are considered as a Stochastic Parrot Large Language Model. They tell us a lot about what you are thinking about, a lot. As when California Senator Barbara Boxer said that the B-2 bomber "carries a large payroll". And so a reasonable man would start asking whether TreasSec Scott Bessent has a problem here:
This is not a slip-of-the-tongue I would make. But, then, I only say the word "vermouth" once every three months, not multiple times a day.

But let me try to be a bit more sober than this context cries out for, and perhaps a bit more sober than usual here: Slips of the tongue are small things, yes. But, I think, they are not nothing. No, the oil price will not move because the man purportedly in charge of U.S. economic policy appears to have his mind on cocktails rather than on global value chains. We all mis-speak. We all have words that come to our lips at the wrong time, because the rest of our mind has moved them to the front of the OIFO—often-in, first-out—queue.

This is precisely why such slips have always fascinated both psychoanalysts and political analysts. They are the cases in which the brain's autocomplete function malfunctions in public, and so gives us a glimpse of what is over‑represented in the background process: the "system prompt," in what is right now an unavoidable metaphor.

Sigmund Freud called these "parapraxes": the moments when repressed, or simply over‑preoccupied, content elbows its way into a place where it does not belong. The Freudian version is that this is, overwhelmingly, about displaced desire and displaced fear—the desire or fear that cannot be named directly still insists on being spoken, and so it is displaced onto something nominally safer. The person who cannot speak directly of sex becomes obsessed with cleanliness; the man who cannot admit his fear of his father spends his life railing against "bureaucrats" or "élites".

Freud was overwrought. And "vermouth" for "Hormuz" is low‑stakes slapstick. But step back. Consider all of the more ominous "slips" we have been hearing for the past decade.
The modern American right has developed an entire vocabulary of displacement, a thesaurus of "vermouths" for things it no longer feels quite secure saying out loud: "Urban crime." "Globalists." "Woke universities." These are not analytically useful categories; they are, rather, deniable proxies for older, cruder categories: Black people, Jews, the young and non‑deferential. The fact that the proxies come so readily to the tongue tells you where the attention is, what emotional cargo is being carried.

Even so, the more revealing parapraxes in Trump world are not the dog whistles that have been carefully field‑tested by consultants, but the pure unedited slips: "the oranges of the investigation"; "Nambia"; "covfefe"; making a point, more than once, of calling political opponents "vermin". One can, of course, write each one off as mere noise—old man tired, staff sloppy, feed overloaded. But when the noise is systematically concentrated in certain semantic neighborhoods—persecution, dominance, grievance, extermination—you know you are looking at the revealing jitter of an over‑amped system.

Here is where the current crop of large language models, oddly enough, is helpful: helpful for thinking about the human mind. For we, like them, are stochastic parrots driven by system prompts. (We hope that our "system prompt" is actually the worthwhile mental processes of a true Turing-class entity, and sometimes we are right.) You may have read about how researchers at Anthropic sandblasted their way into Claude's internal representation space—for example, the Golden Gate Bridge "feature vector" described at: <https://www.understandingai.org/p/anthropic-decoded-the-vectors-claude>

That is how this works on silicon. Turn up the weight on "Golden Gate Bridge" inside the model, and suddenly a question about gardening, or 18th‑century French politics, or rational expectations macroeconomics returns, instead, rhapsodies about suspension cables and rust‑red paint.
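The "turn up the weight" move can be caricatured in a few lines of code. What follows is a toy sketch, not Anthropic's actual technique: a made-up four-dimensional hidden state, an invented read-out matrix over a four-word vocabulary, and a hand-picked "bridge" direction whose activation gets scaled up before decoding. All the numbers are illustrative fictions.

```python
import math

# Toy "residual stream": a 4-dimensional hidden state, plus a read-out
# matrix mapping it to logits over a tiny vocabulary. Real models have
# thousands of dimensions; every number here is invented.
VOCAB = ["gardening", "France", "macroeconomics", "bridge"]

READOUT = [
    [1.0, 0.0, 0.0, 0.0],   # reads out "gardening"
    [0.0, 1.0, 0.0, 0.0],   # reads out "France"
    [0.0, 0.0, 1.0, 0.0],   # reads out "macroeconomics"
    [0.0, 0.0, 0.0, 1.0],   # reads out "bridge"
]

BRIDGE_DIRECTION = [0.0, 0.0, 0.0, 1.0]  # the toy "feature vector"

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_probs(hidden, steering=0.0):
    # Clamp the feature by adding a scaled copy of its direction to the
    # hidden state before the read-out: the "Golden Gate" move.
    steered = [h + steering * d for h, d in zip(hidden, BRIDGE_DIRECTION)]
    logits = [sum(h * w for h, w in zip(steered, row)) for row in READOUT]
    return dict(zip(VOCAB, softmax(logits)))

# A prompt about gardening: the hidden state points at "gardening".
hidden = [2.0, 0.3, 0.3, 0.1]

print(next_token_probs(hidden))                # "gardening" dominates
print(next_token_probs(hidden, steering=8.0))  # "bridge" swamps everything
```

The point of the sketch: nothing about the prompt changes, only one internal coordinate, and yet the output distribution collapses onto "bridge."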
The content filter on the outside is still doing its best to be helpful. But inside, the feature vector is screaming: bridge bridge bridge bridge bridge.

Once you have that picture in your head, it is hard not to see something similar going on in Trumpist cognition. There is some internal representation—call it "white victimhood," or "stolen status," or "strongman as daddy"—that has its internal weight dialed up to eleven. Ask about crime policy, and you get stories about marauding "urban" hordes. Ask about trade, and you get "they are laughing at us." Ask about universities, and you get accusations that students are being turned into anti‑American radicals. There are many words on the surface, but the internal feature vector is remarkably stable.

The "Golden Gate Claude" moment is useful because it lets us imagine, quite concretely, what a massively over‑fit mind looks like. Take the model and, instead of "Golden Gate Bridge," hard‑wire "Stolen Election." Every prompt—about vaccines, housing policy, Taylor Swift—gets pulled, like iron filings to a magnet, into the "they stole it from us" narrative. Or define "Deep State Claude," whose internal representation of a malevolent bureaucratic conspiracy is permanently maxed out. Any inconvenient fact will either be assimilated into the conspiracy or rejected as "fake news." In such a mind, updating beliefs is no longer possible in any Bayesian sense: the prior is so strong it simply eats the likelihood function for breakfast.

That is how you get people who can "know": that crime is skyrocketing, when it is not; that coal is making a comeback, when it is not; that climate change is a hoax, even as their hometown catches fire or washes away. It is not that they lack data. It is that the data is being forced, by an over‑weighted internal narrative, into pre‑assigned slots.
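"The prior eats the likelihood function for breakfast" is not just a figure of speech. In odds form, Bayes' rule is a single multiplication, and the arithmetic below, with invented numbers, shows how a million-to-one prior shrugs off hundred-to-one evidence.

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# A mind that starts out a million-to-one sure that "crime is skyrocketing".
prior = Fraction(1_000_000, 1)

# Crime statistics that are 100 times more likely if crime is actually
# falling -- strong evidence, by ordinary standards.
evidence = Fraction(1, 100)

post = posterior_odds(prior, evidence)
print(post)  # 10000 -- still ten-thousand-to-one in favor of "skyrocketing"
```

After the update, the believed probability that crime is skyrocketing is still 10,000/10,001, better than 99.99 percent. The evidence was absorbed; the conclusion did not budge in any way that matters.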
Any datum that cannot be so coerced is discarded as "propaganda," "hoax," or "globalist lies." "Golden Gate Claude" does not lose the ability to parse French politics; it simply insists on doing it from the vantage point of a suspension bridge in San Francisco Bay.

Moreover, there is yet another way in which the LLM metaphor bites. Modern public-facing LLMs, as you know, are not just the raw predictive model. They are the raw model wrapped in a "safety layer": a system of guardrails and post‑processing designed to prevent the underlying associations from spilling out where they would get the firm sued, or worse. Most of the time, the wrapper works. Occasionally, somebody finds a "jailbreak" prompt that causes the model to drop the mask and start babbling its unfiltered associations—some of which are disturbing, not because the machine is evil, but because the training data is.

Trumpist speech has that feel. There is the public‑relations wrapper—"America First," "law and order," "protecting our beautiful suburbs." And then there are the jailbreak moments: "shithole countries"; "Second Amendment people"; "very fine people on both sides"; casual talk of "vermin" to be rooted out. Those are the times when the internal id‑like model of the world—hierarchical, racialized, violent—leaks through the mask for an instant before the communications shop slams the door. They are the political equivalents of the model suddenly dumping its internal Golden Gate Bridge fan‑fic into a conversation about tulip bulbs.

Which brings me, reluctantly, back to poor Scott Bessent and his mental focus on glasses of vermouth, neat. Compared to the true Trumpist parapraxes, this is small beer—or, rather, small cocktail. Yet it fits the same general pattern. It is not that the man definitely has a drinking problem. It is that his tongue, like yours and mine, is cued by what his mind is already circling around.
If you say "vermouth" enough, "Hormuz" becomes just close enough in sound and just far enough from the center of attention that the wrong word jumps the gap. We should not over‑psychologize a single mistake. But we should not ignore the general lesson either. Human beings, like large language models, are stochastic parrots with priors: we repeat what we have seen and heard, biased by what we are currently thinking about and what we cannot stop thinking about. Our parapraxes and our obsessions, our "vermouths" and our "oranges," our "covfefes" and our "vermin," are not randomly distributed errors. They are, instead, noisy but informative samples from the underlying distribution of mental weights.

That is why I find the current wave of LLM interpretability work oddly cheering, rather than depressing. It is fashionable to say that these models teach us nothing about the human mind. I think that is too strong. They are crude, yes. But they provide us with metaphors that are at least directionally right: internal feature vectors, over‑weighted priors, guardrails and jailbreaks, prompts and system prompts. And those metaphors, in turn, help us see that our own slips of the tongue and fixations are not mysterious eruptions of a wholly other "unconscious," but the visible jitter of a predictive, pattern‑matching system running hot.

The more we learn to read LLMs' mistakes as windows into their inner structure, the more, I suspect, we will also learn to read our own. And that, I guess, is one way in which this powerful, odd, occasionally ridiculous technology might yet make us a little less foolish.

If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter.
Please tell me if I succeed, or how I fail…