I do not know how to handle the intellectual crisis tsunami now coming down on me from the creation, development, and societal consequences of MAMLMs—Modern Advanced Machine-Learning Models—other than to find a flotation device, hang on for dear life, and start kicking as fast as I can. But I do know some things: As MAMLMs make text-extrusion five times faster, the real crisis moves to reading, filtering, and not going insane. “Prompt whispering” is theater; “context engineering” is the work. Stop treating language models like minds and start treating them like very fancy calculators wired to very large libraries. Your AI is not your friend, therapist, co‑author, or co-pilot; it is a token‑production machine. If you feed it a wise string of tokens, wise tokens will come out. If you feed it tokens from a stupid conversation, stupid tokens will come out…
DeLong's Grasping Reality: Economy in the 2000s & Before
IMHO, this is right:
Mike Taylor: Why I Turned Off ChatGPT’s Memory <https://every.to/also-true-for-humans/why-i-turned-off-chatgpt-s-memory>: ‘The argument for turning off memory is… I want unbiased results from ChatGPT, based on context that I carefully curated and put in the prompt, so I know how it made its decision. With memory, anything from your past chats could affect the results in ways that are hard to predict…. The memory feature… lead[s] to unexpected and difficult-to-diagnose problems….
Even a throwaway line in your context window can have a big impact on the results you get from AI. These models are trained to be extremely eager to please, and so you need to manage the context you provide them, lest they get distracted, confused, or obsessed with what’s in there, degrading your results…. Context poisoning… distraction… confusion… clash….
Forgetting is a superpower…. Resetting to a clean slate by starting a new chat session (with memory off) is what lets you understand how ChatGPT makes its decisions. You know exactly what context it’s using because it’s only what you pasted into this prompt, not something from weeks or months ago that might be outdated, irrelevant, or wrong. The context you provide is the only variable, which makes it a true experiment—something you could never do with a human employee who remembers (and resents) the last round of testing. Turn on memory, and you lose that control. Your context becomes a compost heap…
The frame that these things are minds with whom we are having conversations is seductive, deeply misleading, and ultimately very destructive. These are not real agents with intelligence, with anything like beliefs, intentions, or continuity of self.
These are natural language interfaces to databases.
These are stochastic calculator-translators over training data.
When you are using them to access a structured, reliable database, YOU WANT THEM AS DUMB AS POSSIBLE: bare linguistic fluency, and nothing more. You want minimal creativity: bullet‑proof parsing, schema adherence, and predictable translations, not flights of rhetorical fancy.
When you are using them to search the unstructured database that is the internet, YOU HAVE ONE JOB!
That job is to create an input token chain to the MAMLM GPT LLM that it will judge as similar to the token chains somewhere in the training data that contain the reliable information you are looking for.
Remember: your job is context search, not conversation. You are not chatting.
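One way to make "context search, not conversation" concrete: frame the input as the kind of document that would contain the answer, rather than as small talk. A toy sketch under that assumption; the function and its format are hypothetical, not a recommendation of any specific prompt template.

```python
def retrieval_style_prompt(question: str, exemplar_passages: list[str]) -> str:
    """Shape the input token chain to resemble reference material, so that
    the model's 'similar token chains' land in the reliable parts of the
    training data rather than in chit-chat."""
    header = "Excerpts from reference material:\n"
    body = "\n\n".join(exemplar_passages)
    return f"{header}{body}\n\nQ: {question}\nA:"

prompt = retrieval_style_prompt(
    "When were maible desks abandoned?",
    ["Maible desks were withdrawn after their AI cores amplified user traits."],
)
# The prompt reads like an annotated source document, not a conversation.
assert prompt.endswith("A:")
```

The design choice is the author's point inverted into practice: you are not addressing a mind, you are steering a similarity search.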
“Hallucinations” are design bugs, not personality quirks. They are failure modes of retrieval and context management; you counter them with tool use, constrained formats, verification loops, and domain‑specific evaluators, not with folk psychology about “confidence” or “lying”.
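The "verification loops" countermeasure can be sketched as a plain retry-until-accepted harness: a generator proposes, a domain-specific evaluator disposes, and nothing unverified is ever passed along. Both callables are caller-supplied; nothing here depends on any particular model API.

```python
def verified_answer(generate, verify, max_tries: int = 3):
    """Retry a generator until a domain-specific evaluator accepts its output.

    generate(attempt) -> candidate answer (e.g. a model call)
    verify(candidate) -> bool (e.g. a schema check, unit test, or lookup)
    """
    for attempt in range(max_tries):
        candidate = generate(attempt)
        if verify(candidate):
            return candidate
    return None  # surface failure instead of passing along an unverified claim

# Toy demonstration: the "model" gets the arithmetic right on its second try,
# and the evaluator (not folk psychology about "confidence") decides.
answers = ["2 + 2 = 5", "2 + 2 = 4"]
result = verified_answer(lambda i: answers[i], lambda s: s.endswith("= 4"))
assert result == "2 + 2 = 4"
```

The evaluator is where the domain knowledge lives; the loop itself is deliberately dumb.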
Together, all of these add up to a very hard task outside of computer code.
(It is not made easier by the fact that every new model iteration overturns roughly half of the previous prompt-engineering rules of thumb. Why? Because the information architects see themselves as in the “building Digital God” business, rather than in the “access to unstructured databases” business. And so they have little idea what they are really doing that might be useful.)
SF/F author Jenny Schwartz has a nice passage in one of her novels, in which her protagonist is offered something that the office-decoration crew lead thinks is a real treasure—an AI-enabled desk:
Jenny Schwartz (2025): Stars Die <https://authorjennyschwartz.com/new-release-stars-die/>: ‘The team boss slapped the desk proudly. “We had it in storage. Maible. Course I tried to hold onto it. It’s quality-like. The other chairs and fittings are generic, but this is the good stuff.”
“Maible?” Vanda scowled at the desk. Why couldn’t life be simple? “I thought it was walnut. I mean, ordinary?” Her tone of voice made ordinary a question and a plea.
The boss answered her question, not her plea. Plainly, it was beyond his imagination that someone mightn’t want a maible desk. “One of the best. Good to see it back in use.”
“Yeah.” She was unsure whether to be excited or appalled.
A cough, hastily superimposed over a snicker, showed that at minimum one of the workers recognized her lack of enthusiasm.
His team’s nonverbal commentary failed to register with the boss. “The Maible Company is producing new cores.” He gazed at Vanda in happy expectancy. If he was a dog he’d have been wagging his tail and barely able to refrain from jumping around begging.
“A new core.” Vanda’s jaw dropped. “You want me to activate the desk?”
Lovingly, he stroked the desk which had been perfectly acceptable as plain old walnut but was now a problem. Not that he saw it that way. “What an opportunity! When the desk was destined for the museum I couldn’t replace the core. But a working desk…”
Vanda folded her arms. “You do remember why maible desks went out of fashion?”
The boss patted the desk again. He seemed to be apologizing to it or reassuring it. “That was just a few weak-minded individuals, and it’s not as if you have to use a neurolink.” He then contradicted his muttered excuse. “Besides, they’ve fixed the problems with the cores.”
“Uh huh.” Vanda had heard that before. It seemed that inventors couldn’t resist installing limited AIs into items. The problem was that each inventor tweaked the limitations differently, and the consequences for anyone interacting closely, or worse, relying upon it, was that the limited AIs developed in unexpected ways.
Maible desks had been abandoned because their users found their worst characteristics, traits like paranoia and obsession, were exaggerated by the AI cores.
On most spaceships limited AIs were locked into non-learning states. They operated via updates rather than being designed to evolve. This was especially important on solo-crewed or small crew spaceships where feedback loops could skew the AI and it, in turn, could amplify attitudes and behaviors in the crew…
It is important to do “context engineering”, as Andrej Karpathy calls it, not “prompt whispering”. Fill the context window with just the right mix of instructions, exemplars, retrieved knowledge, tool feedback, and state for the next step. Think in four verbs: write, select, compress, isolate. Productive systems externalize scratchpads and memories (write), pull in only relevant pieces (select), summarize and prune aggressively (compress), and split tasks across separate contexts where interference could hurt (isolate).

In my view, we need to learn how to use these things, and use them well. Do not forget the Silicon Law of Attention Conservation: if AI makes you and everyone else write five times faster, the reading and filtering burden explodes unless you push equally hard on filtering, summarization, and institutional workflow redesign. The biggest near‑term win is better attention allocation: customized digests, cross‑source synthesis, structured extraction, and standing queries that keep you in the signal and out of the doomscroll.
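The four verbs can be sketched as a single context-assembly function. Everything here is illustrative, not any framework's API: notes were already externalized (write), only query-relevant documents are pulled in (select), each piece is truncated to fit a budget (compress), and the result covers one task while a different task gets a fresh call (isolate).

```python
def build_context(scratchpad: str, documents: list[str],
                  query: str, budget_chars: int = 2000) -> str:
    """Assemble one isolated context window using the four verbs."""
    terms = set(query.lower().split())
    # select: keep only documents that share a term with the query
    selected = [d for d in documents if terms & set(d.lower().split())]
    # write: the externalized scratchpad comes along with the selected docs
    pieces = [scratchpad] + selected
    # compress: crude truncation so the whole bundle fits the budget
    per_piece = budget_chars // max(len(pieces), 1)
    compressed = [p[:per_piece] for p in pieces]
    # isolate: one joined context for one task; a new task starts fresh
    return "\n---\n".join(compressed)

ctx = build_context(
    scratchpad="Plan: compare maible cores.",
    documents=["maible desks amplify user traits", "unrelated cargo manifest"],
    query="maible cores",
)
assert "maible desks" in ctx and "cargo" not in ctx
```

Real systems replace the crude keyword select with retrieval and the truncation with summarization, but the shape of the pipeline is the same.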
And four notes:
Separate “copilot” from “oracle”: Copilots work alongside you inside a workflow (IDE, browser, editor), proposing actions under tight constraints; oracles are free‑form chat systems. You want more of the former and fewer expectations of the latter’s reliability.
Use external scratchpads and state, not just longer chats: For everything, log plans, partial results, and notes to external stores or state objects and selectively pull them back, rather than trying to drag the entire conversation history forward forever.
The context window is a scarce resource: Every stray paragraph has an opportunity cost. Profiling, tracing, and explicit token accounting are part of serious engineering, not premature optimization.
Aim to be a better front‑end to the collective human mind that is the ASI: The real Anthology Super‑Intelligence is the knowledge system of the collective global human mind since the year -3000—books, papers, code, institutions, practices. The productive use of AI is to lower the friction of your plug-in into that system, not to pretend the linear-algebra model weights themselves are the wisdom-locus of Digital God.
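The second note's external scratchpad can be sketched as an append-only log on disk with selective recall by tag, instead of dragging the whole conversation forward. A minimal sketch; the class and file format are hypothetical, not any agent framework's API.

```python
import json
import os
import tempfile

class Scratchpad:
    """Log plans and partial results to an external store (write), then pull
    back only the entries relevant to the next step (select)."""

    def __init__(self, path: str):
        self.path = path

    def write(self, tag: str, note: str) -> None:
        # append one JSON line per entry; the chat context never holds it all
        with open(self.path, "a") as f:
            f.write(json.dumps({"tag": tag, "note": note}) + "\n")

    def select(self, tag: str) -> list[str]:
        # selectively pull back only what the next step needs
        with open(self.path) as f:
            entries = [json.loads(line) for line in f]
        return [e["note"] for e in entries if e["tag"] == tag]

pad = Scratchpad(os.path.join(tempfile.mkdtemp(), "pad.jsonl"))
pad.write("plan", "step 1: gather sources")
pad.write("result", "found 3 sources")
pad.write("plan", "step 2: summarize")
assert pad.select("plan") == ["step 1: gather sources", "step 2: summarize"]
```

The context window then carries only the selected notes for the current step, which is also what makes the token accounting in the third note tractable.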
Remember: the scarce resource becomes human attention. In that context, the main use of these tools should be better filtering, summarizing, and organizing of what humanity as an anthology intelligence already knows. And none of the four major frames of “AI”—Digital God Rapture of the Nerds, Platform-Disruption, Platform-Monopoly-Creation, or Your Digital “Best Friend”—are helping you see any of this clearly.
If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail…
##subturingbradbot
##voyaging-on-strange-seas-of-thought-not-however-alone-where-i-am-finding-mamlms-useful-this-winter
#voyaging-on-strange-seas-of-thought
#where-i-am-finding-mamlms-useful-this-winter
#mamlms
#context-engineering
#attention-economy
#language-models
#not-digital-god
#stochastic-parrot
#copilot-vs-oracle
#reading-bottleneck
#intellectual-crisis
#anthology-super-intelligence
#asi
#jack-in-to-the-real-asi
#jacking-in-to-an-asi-across-half-a-millennium