:date: 2026-03-28 08:15
Ever since we got computers to whisper sweet nothings to us, LLMs have turned the sinusoidal hype cycle hill into a hype mesa where maximum hype is going to gobble up all VC money for the foreseeable future. Which is fine. Those nothings are sweet!
But when I sit back and watch the Silicon Valley frenzy to use AI to move up the org chart from feudal lord to god emperor, I sometimes wonder if we're forgetting fundamentals.
Bruce Schneier is one of the world's most respected security experts and I've read his blog for decades now. I was just reading an article he contributed to called The Promptware Kill Chain and it is mostly sensible stuff.
However, this jumped out at me.
The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.
For the 25 years I've been a qualified computer security scapegoat, the main threat in "traditional computing systems" has been exactly that strictly separating executable code from user data is fucking hard!
To me, the most salient property of a Von Neumann architecture is:
Memory that stores data and instructions
What kind of common practical computing device uses a Von Neumann architecture? All of them!
The OG computer security exploit is surely the buffer overflow writing "data" into executable memory. This most excellent feature is one of the primary reasons people are afraid of programming in C.
One of the most famous XKCD comics of all time is this illustration of the concept manifesting in SQL.
How are the authors of this article about LLMs making a contrast when the similarity is so fundamental?
Oh well, what do I know? Let's leave it to the "experts". For now, feel free to have fun with prompt injections, which it appears will plague LLM development for the foreseeable future.