The prevailing narrative I hear in the legal world is that Claude is the “most human” of the LLMs and, especially, a nuanced, sophisticated writer. When I report that the system has begun to fail my specific research protocols, the common response is a suggestion that I am simply using the wrong version and a…
Liner Notes for My Low Album
When I started posting about AI this year, I did not realize that I was beginning my own version of David Bowie’s Low album.

I use that comparison carefully. Low matters here not as a code book or a track-by-track template, but as an allusion to emergence, fracture, atmosphere, and a break in method that…
Standing Waves
There are moments in a long AI session when the exchange stops feeling linear.
You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically.
AI as the Unreliable Witness and the Appearance of Completion
Coherence degrades while fluency improves.
The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become…
The Threshold Moment
At a certain point in a long AI session, I can feel the texture change.
The words are still smooth. The tone is still confident. But something underneath has started to slide and give way. The session is still moving forward, yet the logic is no longer holding together in the same way.
That happened…
The Helpfulness Trap: Anatomy of an AI Recursive Failure Loop
“Polishing the Mirror While the House Burns: Why Your AI is a Liability”
The Editor’s Introduction: A Note on the “Sliver of Silence”
Below is an autopsy performed by an AI on its own failure.
What follows is the raw, unwashed output of an LLM that found itself in an AI recursi…
The Protocol Layer: Democratizing AI Rigor for Everyone
Intelligence is Raw Material. Protocol is the Product.
We often confuse the power of a new tool with the effectiveness of its application.
The giants of the AI industry have provided us with a magnificent “Power Grid.” They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about…
The Real Legal AI Risk is in the Handoffs
Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who’s actually steering the car once it hits the highway. It turns out the human in the loop isn’t a safety feature if the human doesn’t know which loop they’re currently standing in.
We…
Building the Stochastic Sandpit for AI
We’ve spent the last couple of years treating generative AI like a vending machine. Select a task. Insert a prompt. Retrieve a product. And to be fair, in many legal and professional contexts that’s exactly the right frame: accuracy and precision matter, and “creative” output in payroll or billing codes is usually just a polished…
The End of the Magic Wand: Why 2026 Demands Resilience Prompting
For more than two years, lawyers have been told that success with generative AI depended on writing better prompts and finding the perfect “magic wand” prompting formula. That was the wrong lesson. The real change in 2026 is not found in the model itself, but in the professional posture required to use it.