This is Brad DeLong's Grasping Reality: my attempt to make myself, and all of you out there in SubStackLand, smarter by writing where I have Value Above Replacement and shutting up where I do not…

Wetware-Hardware Centaurs, Not Digital Gods: Wednesday MAMLMs

Wednesday, 26 November 2025

Faster GPUs won't conjure a world model out of thin air; we're scaling mimicry, not understanding. That is my guess as to why the MAMLM frontier is spiky, with breathtaking benchmarks, baffling failures, real consequences…

Models ace PhD quizzes, then misclick a checkout button and hallucinate a blazer and tie. If these systems are about to become "better than humans at ~everything," why can't they keep time on a roasting turkey? Without being built from the studs up around a world model (durable representations of time, causality, and goals), frontier MAMLM systems are high-speed supersummarizers of the human record and hypercompetent counters at scale, yet brittle in embodied contexts, at interfaces, and on long-horizon tasks. Calling it a "jagged frontier" misidentifies this unevenness, except to the extent that it leads to permanent acceptance of centaur workflows in which humans supply judgment and guardrails.

Or so I guess. I really do not grok things like this. It seems obvious to me that "AI" as currently constituted (Modern Advanced Machine-Learning Models, MAMLMs, relying on scaling laws and bitter lessons) will not be "better than humans at ~everything" just with faster chips and properly-tuned GPUs and software stacks. Without world models, next-token engines merely (merely!) draw on and summarize the real ASI, the Human Collective Mind Anthology Super-Intelligence, and excel only where answers are clear or where counting at truly massive scale suffices: fast mimics, useful but narrow.

That seems obvious to me. But not to Helen Toner. And so Helen Toner stands across a vast gulf, among the serried ranks of AI-optimist singularitarians who believe that we are building our cognitive betters, and that we therefore face the "peak horse" problem that the steam engine, the internal-combustion engine, and the electric motor brought to the equine. And yet she is not on Team Artificial Super-Intelligence by 2030, not at all:
And yet I do not see this as confusing at all. What you have here is a function: prompts → next-word continuations. (OK: next-token continuations.) This function answers the question: "What did the Typical Internet S***poster think was the next word in a stream-of-text situation like this one?" And as the prompt evolves recursively, which TIS it is copying from changes. And so all of a sudden we have not the same TIS who was manning the cash-register, but a different TIS standing at the vending machine, wearing a navy-blue blazer and a red tie, waiting for a date…
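To make that mechanism concrete, here is a minimal sketch of the recursive loop, with an invented toy table of conditional probabilities standing in for a real transformer. Every token, probability, and function name below is made up for illustration; the point is only that each sampled token re-conditions the next draw, so the "author" being imitated drifts as the context grows.

```python
import random

# Toy stand-in for a language model: a conditional distribution
# P(next_token | last_token). A real MAMLM conditions on the whole
# prompt through a transformer; a dict keyed on the last token is
# enough to show the mechanism. All entries here are invented.
TOY_MODEL = {
    "cash-register": [("receipt", 0.6), ("vending", 0.4)],
    "vending":       [("machine", 0.9), ("receipt", 0.1)],
    "machine":       [("blazer", 0.5), ("tie", 0.5)],
}

def sample_next(context):
    """Draw one token from the toy conditional distribution."""
    # Fall back to an ellipsis token once we wander off the toy table.
    options = TOY_MODEL.get(context[-1], [("…", 1.0)])
    tokens, weights = zip(*options)
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, n_tokens):
    """The autoregressive loop: each sampled token is appended to the
    context, so the next conditional distribution is different. This is
    the sense in which the prompt 'evolves recursively' and the model
    drifts from imitating one kind of text to imitating another."""
    context = list(prompt)
    for _ in range(n_tokens):
        context.append(sample_next(context))
    return context

print(" ".join(generate(["cash-register"], 4)))
```

Run it a few times and the continuations fork: sometimes the toy model stays with the TIS at the cash-register, sometimes it wanders off to the vending machine and the blazer. Nothing in the loop knows what a cash-register is; it only knows which words tended to follow which.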