The prevailing narrative I hear in the legal world is that Claude is the “most human” of the LLMs and, especially, a nuanced, sophisticated writer. When I report that the system has begun to fail my specific research protocols, the common response is a suggestion that I am simply using the wrong version and a…
Featured
Standing Waves
There are moments in a long AI session when the exchange stops feeling linear.
You are no longer simply asking a question and receiving an answer. You are no longer even refining a prompt in the ordinary sense. Something else begins to happen. Certain phrases return with altered weight. Certain errors recur, but not identically.
AI as the Unreliable Witness and the Appearance of Completion
Coherence degrades while fluency improves.
The central problem is not that AI systems sometimes fail. Of course they fail. Nor is the main problem that they occasionally hallucinate, wander, or produce obvious nonsense. Those are manageable problems because they announce themselves early. The more interesting and professionally dangerous problem is that a system can become…
The Threshold Moment
At a certain point in a long AI session, I can feel the texture change.
The words are still smooth. The tone is still confident. But something underneath has started to slide and give way. The session is still moving forward, yet the logic is no longer holding together in the same way.
That happened…
Fresh Voices at Three: What Listening Taught Us About AI, LegalTech, and the Next Generation

When Tom and I started the Fresh Voices series on The Kennedy-Mighell Report podcast, we had a pretty simple idea.
A lot of the most interesting work in legal tech seemed to be coming from people who were newer to the field, earlier in their careers, or just not as widely known yet as…
What Scarcity Taught Computing, and AI Might Need to Relearn
“A larger context window can create the feeling that a cognitive problem has been solved, when sometimes all that has happened is that disorder has become harder to notice.”

I was in Silicon Valley recently for the initial meeting of the University of Michigan Law School AI Advisory Council. With a little free time around…
The Intelligence Bureaucracy
Why the OpenAI Hiring Surge Signals a Crisis of Professional Control
The management problem in AI is no longer whether the models are improving. They are. The management problem is whether the working surface is becoming more dependable or less.
That is why the recent reporting on OpenAI’s plan to nearly double its…
The Protocol Layer: Democratizing AI Rigor for Everyone
Intelligence is Raw Material. Protocol is the Product.
We often confuse the power of a new tool with the effectiveness of its application.
The giants of the AI industry have provided us with a magnificent “Power Grid.” They have given us raw, unmanaged intelligence at a scale previously unimagined. But we must be clear-eyed about…
Playing the Guardrails: Turning AI Hallucination into a Musical Instrument

Most people use AI the way the system is designed to be used: ask a question, get a synthesis, and leave with an answer. Keep it brief, transactional, and clean. We treat hallucination as a bug to be patched and drift as a signal to reboot.
This is exactly backward.
As popularized in AI discourse by…
Vibe Coding and the Control Plane
Many friends and colleagues in the legal technology world have been telling me I need to start vibe coding. My answer is that in vibe coding, you are intentionally surrendering the control plane. That is not a tradeoff I am willing to make.
Let me explain why that is a principle, not a preference, and…