The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.
Trial By Fire
2026-04-13 19:33
Here is what the top of my chimney looked like when it was brand new just after I installed it last fall.
And here is what it looked like after just one use.
How is that possible? Well, the new wood stove was lit with a single spark on 2025-10-24. That fire burned uninterrupted for 171 days (= 5 months and 20 days) until last night when I finally let it burn out completely. I’m still using it, but now I will just let it go out at night and relight it in the mornings.
I am extremely pleased that the chimney did not collapse during the several blizzards with 50mph+ winds and sudden feet of snow. I was pretty sure that my design would not fail but during many nights of high winds I was mentally preparing for a huge chunk of the house to detach.
Just feeding this thing dozens of trees over the last half a year has been a huge project. Hell, just clearing snow off of the wood every day was almost a full time job.
One thing we didn’t have to feed was the propane tank. Last winter we filled it three times; this year zero. We last filled it in September and will get at least another month out of it. Probably a good time to stop using that commodity anyway.
I wish I could say that it was cozy and comfortable in the house all winter, but it was not. It was often quite cold and harsh. What I can say is that it was infinitely more cozy and comfortable than it was the previous year without the stove.
Von Neumann Prompts
2026-03-28 08:15
Ever since we got computers to whisper sweet nothings to us, LLMs have turned the sinusoidal hype cycle hill into a hype mesa where maximum hype is going to gobble up all VC money for the foreseeable future. Which is fine. Those nothings are sweet!
But when I sit back and watch the Silicon Valley frenzy to use AI to move up the org chart from feudal lord to god emperor, I sometimes wonder if we’re forgetting fundamentals.
Bruce Schneier is one of the world’s most respected security experts and I’ve read his blog for decades now. I was just reading an article he contributed to called The Promptware Kill Chain and it is mostly sensible stuff.
However, this jumped out at me.
For the 25 years I’ve been a qualified computer security scapegoat, the main threat in "traditional computing systems" has been exactly that strictly separating executable code from user data is fucking hard!
To me, the most salient property of a Von Neumann architecture is:
Memory that stores data and instructions
What kind of common practical computing device uses a Von Neumann architecture? All of them!
The OG computer security exploit is surely the buffer overflow writing "data" into executable memory. This most excellent feature is one of the primary reasons people are afraid of programming in C.
One of the most famous XKCD comics of all time is this illustration of the concept manifesting in SQL.
How are the authors of this article about LLMs making a contrast when the similarity is so fundamental?
Oh well, what do I know? Let’s leave it to the "experts". For now, feel free to have fun with prompt injections, which it appears will plague LLM development for the foreseeable future.
Spring Skate Skiing On Wild Snow
2026-03-25 19:52
When you have a proper winter (i.e. snow is on the ground for more than three weeks leading to a spring thaw) it is common for some of the best skiing to come at the tail end of it. The reason is that at the beginning, the snow is light and fluffy and possibly deep. Skiing through that can be quite a hard slog. However, after some maple syrup weather (i.e. below freezing at night and above during the day), snow on the ground starts to change. It settles and compacts as the sharp edges of the crystals are replaced with smoother melt water re-ice holding things together. At some point it may become possible to ski on top of the snow.
Here in the central UP we are finally arriving at that point. This has been such a crazy year with not only a lot of snow falling out of the sky but with also the more important snow related weather that I always stress is essential for a proper winter: cold temperatures. We’ve had meters of snow and more importantly for what’s on the ground, very little above freezing air temperatures the entire winter. But in the last week we’ve been getting some warm sunny afternoons and finally the snow is starting to get baked down into something usable.
Yesterday I tried to take some video of me exploring it but it’s harder to get acceptable video than it is to ski 10km. Apparently. Today I had another go at it and this time I think I have a video that shows quite a bit about how the landscape is here and what our winter has been like.
During these rare days, my ability to get around the forest with this kind of snow present is unmatched. To see what that looks like and really what my winter has been like generally, check it out.
I’m still working out how to film this stuff and it’s not easy. Today I learned that my mission of scouting out trails was a bad one to film because I was looking around a lot. If you think minutes 25 to 36 will make you motion sick — like they did to me — sorry about that. Just skip that part. But where I’m actually moving on clear snow it should be fine. The Gopro stabilization is actually pretty impressive.
Purely Human Thoughts About Our Robot Friends
2026-03-08 21:46
Today I read an essay by Dario Amodei, the co-founder and chief shaman at Anthropic, the company behind the robot friend, Claude.
In this essay, he covers a lot and tries to thread the needle between personally having $7 billion and talking to normal humans like he’s a normal human too. He’s mostly wants to tell us how he hopes his company is not intentionally eschatological — they’re not trying to end humanity! Heavens no!
I am actually not really much of an AI doomer and I don’t really care as much about the obliteration of Nerddom as one might suppose. But that doesn’t mean muddled thinking escapes my attention. Instead of writing a coherent tight response to this article, it was more fun for me to go do something else and just leave you with my own collection of muddled thinking — notes I took while reading this essay. Enjoy!
His essay is crazy long! 21.7k words — 10x longer than this (too long) post. Bro, does that even fit in context?
"Humanity is about to be handed almost unimaginable power…"
I think that horse left the barn with industrial power. And literacy. And torture.
He defines "powerful AI" with (many things including this):
"In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc."
I am not Nobel-prize-winner-smarter than normal people but the fact that I have tons of successful experience in biology, programming, engineering, writing, etc. shows that if this stuff were important, my life would be more important. But I can assure you, nobody (statistically) gives a fuck about these things in real human interactions.
It says later:
"Everyone having a superintelligent genius in their pocket is an amazing advance and will lead to an incredible creation of economic value and improvement in the quality of human life."
And I’m sorry, as someone who has enjoyed the benefits of slightly elevated literacy and STEM nerdery I think the author is over aggrandizing these traits more than normal humans would. Consider that the industrial, scientific, and computer revolutions have been driven by a modest percentage of humans doing actual nerd work. Scaling nerdery does not necessarily scale quality of life. The large growth in the percentage of college education in my lifetime has not had commensurate beneficial effects. Keeps people out of trouble I guess.
"[Fancy AI]…would have a fairly good shot at taking over the world (either militarily or in terms of influence and control) and imposing its will on everyone else…"
My fellow humans, the interesting thing about AI is that it has no will. "Imposing its will" is like imagining my toaster imposing its will on my toast or my table saw imposing its will on a sheet of plywood. The illusion that AI has a will is really it bumbling around the artifacts left in the wake of humans having a will that they, the humans, were trying to impose. If a parrot says, "I want to impose my will on you," it is still a parrot’s will and that utterance’s message was not part of it.
"We now know that [fancy AI is] a process where many things can go wrong."
I agree with this but it’s in the same way that a toaster can electricute you if you if you drop it in a sink full of water or a table saw can cut off fingers. It is a distraction to imagine the AI cleverly scheming to get "what it wants" — it doesn’t want anything!
It is a bit unnerving that keeping this in mind is not a disciplined habit for the CEO of a big company responsible for keeping one of the major bots under control.
I’m reminded of people who misunderstand evolution who often speak of what evolution "wants" or "the goal of evolution". Wrong!
All of this reminds me of old timey science fiction. Even the subtitles "I’m sorry, Dave" and "Player Piano" allude to it. I’m not the only one pushing back on this sci-fi credulity. These brilliant Kiwi philosophers had all this figured out at least 16 years ago: "Finally robotic beings rule the world."
"…[fancy AIs] could conclude that they are playing a video game and that the goal of the video game is to defeat all other players (i.e., exterminate humanity)."
Finally! Someone getting close to realizing where true AI progress and threats can be measured: NPCs who still are not even close to convincing! Seriously, NPCs have guns (all of the guns!) and are doing their level best to kill me. Show me a good NPC and I will start worrying. Seriously, when AI starts hollowing out PvP lobbies, that is a metric you can use to chart the apocalypse.
"But I agree that a lot of very weird and unpredictable things can go wrong…"
True enough. Something tells me that him not being wealthier than the entire population of, say Dayton Ohio, is not one of those unpredictable things he is worried about. You may object to me carping about this dude’s pathological wealth but to me that is exactly one of those weird and unpredictable things that can go wrong and already has. The failure mode is not hard to predict — we’re living it!
"I suspect the situation is not unlike with humans, who are raised with a set of fundamental values (“Don’t harm another person”): many of them follow those values, but in any human there is some probability that something goes wrong, due to a mixture of inherent properties such as brain architecture (e.g., psychopaths), traumatic experiences or mistreatment, unhealthy grievances or obsessions, or a bad environment or incentives — and thus some fraction of humans cause severe harm. The concern is that there is some risk (far from a certainty, but some risk) that AI becomes a much more powerful version of such a person, due to getting something wrong about its very complex training process."
I have an important objection to this line of thinking. One of the key concepts that enables the whole parlor trick of Turing Test passing bots is that they are trained on an absolutely enormous corpus of human culture. This means we should not worry so much about some particular pathological human wierdos but rather we should worry about the general global human. A psycho killer may be a valuable boogie man that 99.99999% of humanity can use as a foil to calibrate disgust.
Sometimes you meet a person who is quirky. They have a weird sense of humor or they created some esoteric programming language or they’re several sigmas out there in their enjoyment of skiing or something like that. That is not what AI does. Left unspecified, its natural style is best described as "generic human". Are generic humans slightly racist bumptious dipshits? Sure, but they’re not generally genocidal monsters.
AI is like a wig. Might look like really impressive hair. Might be great for some special circumstances like a Commonwealth courtroom or a porn shoot or especially those two combined. But it’s not real hair. And analogously, if you encourage "Claude to think of itself as a particular type of person (an ethical but balanced and thoughtful person), and even encourage… Claude to confront the existential questions associated with its own existence in a curious but graceful manner," well, you may get the appearance of those things but you won’t get those things any more than a toupee gives you real hair. Could be enough for most situations, but it’s just important to keep in mind what we’re really dealing with here.
"We believe that a feasible goal for 2026 is to train Claude in such a way that it almost never goes against the spirit of its constitution."
Good luck with that. Red team says: hold my beer.
Later he correctly says, "But all models can be jailbroken…"
He uses the phrase "rapid efforts" — what an odd thing to say! As an athlete I’ve done hard efforts for achieving rapid speeds but a "rapid effort" could almost sound like an effort that is over quickly and therefore easier than, say, a "prolonged effort". Just an odd choice of words. Did no robot friends check his essay? (Bet you wish one checked this post!)
The whole big company AI thing is a little boring to me. It’s like worrying about what new sandwich McDonald’s is developing. Should the government make laws that limit the amount of poison that can be put in the sandwich? I know a McDonald’s sandwich will be eaten by a lot of people but I also know I won’t be one of them. Wake me up when I can make my own sandwich. Here is my post on locally hosted LLM research.. Seriously folks, nothing wholesome will happen until this technology is controlled by real people and not billionaires. And if you think that (non-billionaire) you control this technology now, then that’s probably the scariest AI related error you should be working on.
What’s my doom scenario? Whenever I talk to a robot friend and it looks something up on the internet, I feel a sense of dread. Not that the world will end because of Skynet AI risk, but because it is conclusive proof that the www is now even more fucked up than it was before. The public aspect of my web pages has been an utter failure through no fault of mine. (My website is still useful to me.) Erosion of the conceptual underpinnings of the www has taken its toll. Is the www’s entire shaky foundation ready to fold completely? People burned the Library of Alexandria too. These things happen.
He seems to be worried about "disturbed loners", especially those of us with leet STEM skillz. I hope he’s thinking about the non-loners who are also going to be more disturbed (than me) once they don’t have an income. I think I’m dealing with it pretty well!
I find his whole discussion about molecular biology (leading to bioterrorism) as typical outsider cluelessness. Having worked in the trenches of real molecular biology battles, the real world in biotech is very different than the impression a Michael Chricton novel imparts. Remember that our recent Plague was so devastating mostly from a lack of the most basic industrial engineering fundamentals (e.g.). To say, "…mRNA vaccines which can be designed to respond to a particular virus or variant…" is fatuous. Can not an attenuated virus vaccine do the same? Perhaps model your answer on the vaccine that tamed the 1958 flu outbreak — a vaccine that was developed quicker than Moderna’s mRNA C19 vaccine.
He goes on to say, "The reason I haven’t focused on cyber as much as biology is that (1) cyberattacks are much less likely to kill people, certainly not at the scale of biological attacks, and (2) the offense-defense balance may be more tractable in cyber…"
Okie dokie. If you use a clever AI attack to convince a bunch of people to go and murder all their neighbors, well, that tends to be very nasty.
Although he admits there are other dangers, he says, "…biology is currently the most serious vector of attack,…" To which I say, go ahead and use that magical AI to improve my health one tiny bit and then I’ll think about taking bioterrorism claims more seriously than having my fashy neighbors lynch me for my heretical views on the pseudoepigraphical nature of the PR materials of a famous Turkish wellness guru. That’s a real threat that I actually must worry about.
He has a whole paragraph on how AI companies are themselves a risky entity, and it is good he sees the irony. As he pointed out that AI companies could subtly brainwash their user base, I was wondering if this article might have been written by an AI trying to brainwash us. It is by the article’s own logic that this kind of article is the most likey vector of such an attack at this time.
"The world needs to understand the dark potential of powerful AI in the hands of autocrats, and to recognize that certain uses of AI amount to an attempt to permanently steal their freedom and impose a totalitarian state from which they can’t escape. I would even argue that in some cases, large-scale surveillance with powerful AI, mass propaganda with powerful AI, and certain types of offensive uses of fully autonomous weapons should be considered crimes against humanity."
This is true enough. But does he not see that he is the autocrat here? I’m sure he does.
"This could also lead to a world of “geographic inequality,” where an increasing fraction of the world’s wealth is concentrated in Silicon Valley, which becomes its own economy running at a different speed than the rest of the world and leaving it behind."
Dude, this happened 20 years ago. Blame Steve Jobs specifically. This guy needs to read my post Companies Repudiating Their Own Worthless Products. And it is geniuses like this who are dreaming up definitions to the I in AI.
He keeps talking about the need to prevent autocracy. Do billionaires not watch any news? Maybe he can’t imagine it from the perspective of a little person who is not a billionaire.
The working title of this post was: A Memo From A Liege Lord To His Serfs.
"…companies should think about how to take care of their employees."
Lol. He forgot to add, "…if they are shareholders to whom they have a fiduciary duty." Where does this guy think he is? The 1950s?
"…while all the above private actions can be helpful, ultimately a macroeconomic problem this large will require government intervention."
What he meant to say instead of "government intervention" is "guillotines".
"We simply need to break the link between the generation of economic value and self-worth and meaning."
Ya, good luck with that. Even the world’s leading practitioners at doing just this (ahem) are not going to get through to normal people until it is way too late. And those of us who have severed the tie are still in danger of starving.
Mighty big of him to be stepping in to make decisions about how to downsize enterprise salarymen. Cool. If I had a small cadre of geniuses at my disposal (Anthropic’s headcount of 2500 should be plenty), my goal would be to use them to create a fully open source AI system and put Anthropic out of business. Or, won’t that be possible? I may be a knucklehead but your bots are geniuses, right?
Your move, Dario.
Snow Mobility Scooters
2026-02-13 21:44
When I was an 8 year old kid, my dad bought two snowmobiles. We lived in Alaska so that seemed sane. I rode on the back of his many times. I have memories of falling off the back of his snowmobile, rolling down into a creek (frozen), and scrambling back up as fast as I could so I wouldn’t be passed by when he realized I was gone and he came back to look for me. Slightly less sane, sometimes he figured he’d ride one and I’d ride the other. And we did that. I rode around the Alaskan wilderness driving a snowmobile as a second grader. That’s kind of cool, kind of crazy, and kind of funny all at once.
I understand snowmobiles well enough. They are, in fact, a critical piece of equipment in my main sport. The only way I could possibly ski this trail like this is because it was groomed with a snowmobile. It would not be absurd for me to own one for utilitarian purposes.
But generally speaking, they are damn obnoxious. First off, you may think the steering is done with the front skis - wrong. The skis attempt to keep the front end from diving into the ground. What steers is what I call the "rudders" which are vertical steel fins on the skis. And it’s not hard to use your imagination to think of the destruction this pair of giant knives can do to the landscape. This is why asphalt roads have concrete sections at snowmobile trail crossings.
The area I live in is a very popular location for snowmobile tourism. I think the locals are relatively reasonable, but the tourists can be pretty stupid. I love seeing a clan of them stopped to smoke cigarettes by the propane tank at the Wetmore gas station. Fortunately every person you ever see on a snowmobile is scrupulously sober.
In the winter, they’re by far the main source of noise. For some insane reason, they like to drive around in the middle of the night. When I did a lot of skiing out into the timberlands last year, if ever I’d get a decent track established, it could be counted on to be destroyed by snowmobiles. They will go out of their way just to wreck my tracks. By the end of the season I was skiing with one foot on each side of stumps I knew to be buried in the snow. I can be obnoxious too.
Sometimes when I’m driving to the grocery store, I’ll tune the radio to static and crank up the volume; then I’ll roll down all the windows and say, "Look at me! I’m snowmobiling!" Seriously, have a look at this (silent dashcam) footage I took driving home today and, after the first 10 seconds, tell me, what exactly could I possibly be missing by not being on a snowmobile?
Much of that is a solid 50mph. Not exactly Swedish rally racing, but faster than I drive those roads in the summer (the snow is actually smoother than the dirt, and the entire road is lined with padding). Mya was with me and appreciated that we were not snowmobiling.
Ah, but my Subaru can’t go into the serious deep snow back country — or something like that. As someone who totally can and does explore the winter forest in detail, I can confidently say, snowmobiles have surprisingly similar limitations to our car. They are quite unstable in very deep snow and to plow into very deep low density snow like we currently have is incredibly sketchy. I know what’s on the forest floor and I advise against driving a snowmobile over it!
So what’s going on in the first 10 seconds of that video? I saw a lot of snowmobiles today but this knucklehead put on a perfect demonstration of the kind of thing I see. Right in front of me — left side of the road — this dude just rolls the snowmobile over onto its side while sitting there doing nothing! It’s kind of sad, but also very hilarious too. I should have stopped and said, hey, if you want to sell that thing very cheap, I could use one for grooming ski trails.
For older posts and RSS feed see the blog archives.
Chris X Edwards © 1999-2026